MCSA: Data Engineering with Azure

Description

During this five-day course, students will develop the skills to design and implement big data engineering workflows with the Microsoft Cloud ecosystem and Microsoft HDInsight, extracting the greatest possible value from data.

The MCSA: Data Engineering with Azure certification validates the skills learned in implementing big data engineering workflows with Microsoft Cloud services and Microsoft HDInsight.

This course is ideal for:

  • Data Engineers
  • Data Architects
  • Data Scientists
  • Data Developers

After completing this course, students will be able to:

  • Describe the purpose of Azure Data Factory and explain how it works
  • Create Azure Data Factory pipelines that can transfer data efficiently
  • Perform transformations using an Azure Data Factory pipeline
  • Monitor Azure Data Factory pipelines and protect the data flowing through them

20755: Perform Data Engineering on Microsoft HDInsight

  • Deploy HDInsight Clusters
  • Authorize Users to Access Resources
  • Load Data into HDInsight
  • Troubleshoot HDInsight
  • Implement Batch Solutions
  • Design Batch ETL Solutions for Big Data with Spark
  • Analyze Data with Hive and Phoenix
  • Describe Stream Analytics
  • Implement Spark Streaming Using the DStream API
  • Develop Big Data Real-Time Processing Solutions with Apache Storm
  • Build Solutions That Use Kafka and HBase


Perform Big Data Engineering on Microsoft Cloud Services (beta)

  • Describe common architectures for processing Big Data using Azure tools and services
  • Use Azure Stream Analytics to design and implement stream processing over large-scale data
  • Include custom functions and incorporate machine learning activities into an Azure Stream Analytics job
  • Use Azure Data Lake Store as a large-scale repository of data files
  • Use Azure Data Lake Analytics to examine and process data held in Azure Data Lake Store
  • Create and deploy custom functions and operations, integrate with Python and R, and protect and optimize jobs
  • Use Azure SQL Data Warehouse to create a repository that can support large-scale analytical processing over data at rest
  • Use Azure SQL Data Warehouse to perform analytical processing, maintain performance, and protect the data
  • Use Azure Data Factory to import, transform, and transfer data between repositories and services
  • Describe the purpose of Azure Data Factory and explain how it works
  • Create Azure Data Factory pipelines that can transfer data efficiently
  • Perform transformations using an Azure Data Factory pipeline
  • Monitor Azure Data Factory pipelines and protect the data flowing through them
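To give a flavor of the stream-processing topic above, the query below is a minimal sketch of an Azure Stream Analytics job that averages a temperature reading per device over a 30-second tumbling window. The input and output aliases (`iothub-input`, `blob-output`) and the `DeviceId`, `Temperature`, and `EventTime` fields are hypothetical names; in a real job they would match the inputs, outputs, and event schema you configure.

```sql
-- Minimal Stream Analytics query sketch (hypothetical aliases and fields):
-- compute the average Temperature per DeviceId in 30-second tumbling windows.
SELECT
    DeviceId,
    AVG(Temperature) AS AvgTemperature,
    System.Timestamp() AS WindowEnd
INTO
    [blob-output]
FROM
    [iothub-input] TIMESTAMP BY EventTime
GROUP BY
    DeviceId,
    TumblingWindow(second, 30)
```

The `TIMESTAMP BY` clause tells Stream Analytics to window on the event's own timestamp rather than its arrival time, which is the usual choice for late-arriving telemetry.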

Prerequisites

It is recommended that students interested in this course have prior knowledge of, or experience with:

  • Azure Data Services
  • The Microsoft Windows operating system and its core functionality
  • Relational databases
  • Programming in R and familiarity with common R packages
  • Common statistical methods and data analysis best practices

Contact Us

THE ACADEMY

1.800.482.3172

FTL: 954.351.7040

MIA: 305.648.2000


Request More Information

 
