70-775 Perform Data Engineering on Microsoft Azure HDInsight

Published: February 22, 2017
Languages: English
Audiences: Data scientists
Technology: Azure HDInsight
Credit toward certification: MCSE

Skills measured
This exam measures your ability to accomplish the technical tasks listed below. View video tutorials about the variety of question types on Microsoft exams.

Please note that the questions may test on, but will not be limited to, the topics described in the bulleted text.

Do you have feedback about the relevance of the skills measured on this exam? Please send Microsoft your comments. All feedback will be reviewed and incorporated as appropriate while still maintaining the validity and reliability of the certification process. Note that Microsoft will not respond directly to your feedback. We appreciate your input in ensuring the quality of the Microsoft Certification program.

If you have concerns about specific questions on this exam, please submit an exam challenge.

If you have other questions or feedback about Microsoft Certification exams or about the certification program, registration, or promotions, please contact your Regional Service Center.

Administer and Provision HDInsight Clusters
Deploy HDInsight clusters
Create a cluster in a private virtual network, create a cluster that has a custom metastore, create a domain-joined cluster, select an appropriate cluster type based on workload considerations, customize a cluster by using script actions, provision a cluster by using the Azure portal, provision a cluster by using Azure CLI tools, provision a cluster by using Azure Resource Manager (ARM) templates and PowerShell, manage managed disks, configure vNet peering
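As a concrete starting point, here is a minimal sketch of provisioning a Spark cluster by driving the Azure CLI from Python. All names, passwords, and the storage account are placeholders, and the exact flags should be verified against your installed CLI version.

```python
# Hypothetical sketch: provision a Spark cluster with "az hdinsight create".
# Cluster name, resource group, passwords, and storage account are placeholders.
import subprocess

subprocess.run([
    "az", "hdinsight", "create",
    "--name", "example-spark-cluster",
    "--resource-group", "example-rg",
    "--type", "spark",                      # cluster type chosen per workload
    "--http-password", "ChangeMe123!",      # Ambari (cluster login) password
    "--ssh-password", "ChangeMe123!",       # SSH login password
    "--storage-account", "examplestorage",  # default storage account
    "--workernode-count", "4",
], check=True)
```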
Deploy and secure multi-user HDInsight clusters
Provision users who have different roles; manage users, groups, and permissions through Apache Ambari, PowerShell, and Apache Ranger; configure Kerberos; configure service accounts; implement SSH tunneling; restrict access to data
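Many of these user-management tasks can be scripted against Ambari's REST API rather than the web UI. A hedged sketch, assuming a placeholder cluster name and admin credentials (the endpoint and payload follow Ambari's documented user API):

```python
# Sketch: create an Ambari user over the REST API. URL and credentials are
# placeholders; the "X-Requested-By" header is required by Ambari.
import requests

ambari = "https://example-cluster.azurehdinsight.net/api/v1"
resp = requests.post(
    f"{ambari}/users",
    auth=("admin", "cluster-login-password"),
    headers={"X-Requested-By": "ambari"},
    json={"Users/user_name": "analyst1",
          "Users/password": "ChangeMe123!",
          "Users/active": True,
          "Users/admin": False},
)
resp.raise_for_status()
```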
Ingest data for batch and interactive processing
Ingest data from cloud or on-premises sources; store data in Azure Data Lake; store data in Azure Blob Storage; perform routine small writes on a continuous basis using Azure CLI tools; ingest data in Apache Hive and Apache Spark by using Apache Sqoop, Azure Data Factory (ADF), AzCopy, and AdlCopy; ingest data from an on-premises Hadoop cluster
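For the Blob Storage path, a routine small write can be as simple as the following sketch using the azure-storage-blob Python SDK (connection string, container, and blob names are placeholders):

```python
# Sketch: upload a local CSV to Azure Blob Storage with the azure-storage-blob
# (v12) SDK. All names are illustrative.
from azure.storage.blob import BlobServiceClient

service = BlobServiceClient.from_connection_string("<connection-string>")
blob = service.get_blob_client(container="ingest",
                               blob="sales/2017/02/sales.csv")
with open("sales.csv", "rb") as data:
    blob.upload_blob(data, overwrite=True)
```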
Configure HDInsight clusters
Manage metastore upgrades; view and edit Ambari configuration groups; view and change service configurations through Ambari; access logs written to Azure Table storage; enable heap dumps for Hadoop services; manage HDInsight configuration by using the HDInsight .NET SDK and PowerShell; perform cluster-level debugging; stop and start services through Ambari; manage Ambari alerts and metrics
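Stopping and starting services can also be scripted against the Ambari REST API. A sketch, assuming a placeholder cluster and credentials; setting a service's desired state to INSTALLED stops it, and STARTED starts it again:

```python
# Sketch: stop the Hive service via the Ambari REST API.
import requests

url = ("https://example-cluster.azurehdinsight.net"
       "/api/v1/clusters/example-cluster/services/HIVE")
resp = requests.put(
    url,
    auth=("admin", "cluster-login-password"),
    headers={"X-Requested-By": "ambari"},
    json={"RequestInfo": {"context": "Stop HIVE via REST"},
          "Body": {"ServiceInfo": {"state": "INSTALLED"}}},
)
resp.raise_for_status()
```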
Manage and debug HDInsight jobs
Describe YARN architecture and operation; examine YARN jobs through ResourceManager UI and review running applications; use YARN CLI to kill jobs; find logs for different types of jobs; debug Hadoop and Spark jobs; use Azure Operations Management Suite (OMS) to monitor and manage alerts, and perform predictive actions
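The YARN CLI tasks look like the following sketch when scripted from a head node (the application ID is a placeholder):

```python
# Sketch: list, kill, and fetch logs for YARN applications via the YARN CLI.
import subprocess

# List running applications (parse stdout for application IDs).
subprocess.run(["yarn", "application", "-list"], check=True)

app_id = "application_1487000000000_0042"  # placeholder ID from the list output
subprocess.run(["yarn", "application", "-kill", app_id], check=True)
subprocess.run(["yarn", "logs", "-applicationId", app_id], check=True)
```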

Implement Big Data Batch Processing Solutions
Implement batch solutions with Hive and Apache Pig
Define external Hive tables; load data into a Hive table; use partitioning and bucketing to improve Hive performance; use semi-structured files such as XML and JSON with Hive; join tables with Hive using shuffle joins and broadcast joins; invoke Hive UDFs with Java and Python; design scripts with Pig; identify query bottlenecks using the Hive query graph; identify the appropriate storage format, such as Apache Parquet, ORC, Text, and JSON
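To make the external-table and storage-format points concrete, here is a minimal sketch (run from a Hive-enabled Spark session; paths, table, and column names are illustrative) that defines a partitioned external table over delimited text and rewrites it into ORC:

```python
# Sketch: external text table -> ORC copy for better compression and scans.
from pyspark.sql import SparkSession

spark = SparkSession.builder.enableHiveSupport().getOrCreate()

spark.sql("""
    CREATE EXTERNAL TABLE IF NOT EXISTS sales_raw (
        order_id BIGINT, zip_code STRING, amount DOUBLE)
    PARTITIONED BY (sale_date STRING)
    ROW FORMAT DELIMITED FIELDS TERMINATED BY ','
    LOCATION 'wasbs:///data/sales'
""")

spark.sql("""
    CREATE TABLE IF NOT EXISTS sales_orc STORED AS ORC
    AS SELECT * FROM sales_raw
""")
```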
Design batch ETL solutions for big data with Spark
Share resources between Spark applications using YARN queues and preemption, select Spark executor and driver settings for optimal performance, use partitioning and bucketing to improve Spark performance, connect to external Spark data sources, incorporate custom Python and Scala code in a Spark DataSets program, identify query bottlenecks using the Spark SQL query graph
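A sketch of the executor-tuning and bucketing ideas, with illustrative values rather than recommendations:

```python
# Sketch: set executor resources at session build time, then bucket output
# by the join key so later joins avoid a shuffle. Paths are placeholders.
from pyspark.sql import SparkSession

spark = (SparkSession.builder
         .appName("batch-etl")
         .config("spark.executor.instances", "8")
         .config("spark.executor.cores", "4")
         .config("spark.executor.memory", "8g")
         .enableHiveSupport()
         .getOrCreate())

df = spark.read.parquet("wasbs:///data/sales_parquet")
(df.write
   .bucketBy(48, "zip_code")
   .sortBy("zip_code")
   .mode("overwrite")
   .saveAsTable("sales_bucketed"))
```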
Operationalize Hadoop and Spark
Create and customize a cluster by using ADF; attach storage to a cluster and run an ADF activity; choose between bring-your-own and on-demand clusters; use Apache Oozie with HDInsight; choose between Oozie and ADF; share metastore and storage accounts between a Hive cluster and a Spark cluster to enable the same table across the cluster types; select an appropriate storage type for a data pipeline, such as Blob storage, Azure Data Lake, and local Hadoop Distributed File System (HDFS)

Implement Big Data Interactive Processing Solutions
Implement interactive queries for big data with Spark SQL
Execute queries using Spark SQL, cache Spark DataFrames for iterative queries, save Spark DataFrames as Parquet files, connect BI tools to Spark clusters, optimize join types such as broadcast versus merge joins, manage Spark Thrift server and change the YARN resources allocation, identify use cases for different storage types for interactive queries
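The caching, broadcast-join, and Parquet items combine naturally into one sketch (table and path names are illustrative):

```python
# Sketch: cache a hot DataFrame, force a broadcast join with a small
# dimension table, and persist the result as Parquet.
from pyspark.sql import SparkSession
from pyspark.sql.functions import broadcast

spark = SparkSession.builder.enableHiveSupport().getOrCreate()

sales = spark.table("sales_orc").cache()   # reused across iterative queries
zips = spark.table("zip_dim")              # small lookup table

by_zip = (sales.join(broadcast(zips), "zip_code")   # avoids shuffling sales
               .groupBy("zip_code").count())

by_zip.write.mode("overwrite").parquet("wasbs:///out/orders_by_zip")
```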
Perform exploratory data analysis by using Spark SQL
Use Jupyter and Apache Zeppelin for visualization and developing tidy Spark DataFrames for modeling, use Spark SQL's two-table joins to merge DataFrames and cache results, save tidied Spark DataFrames in a performant format for reading and analysis (Apache Parquet), manage interactive Livy sessions and their resources
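Livy session management is a REST exercise; a hedged sketch against a placeholder cluster (HDInsight exposes Livy at the /livy endpoint):

```python
# Sketch: create a PySpark Livy session, run one statement, then delete the
# session to release its YARN resources. Credentials are placeholders.
import requests, time

livy = "https://example-cluster.azurehdinsight.net/livy"
auth = ("admin", "cluster-login-password")
headers = {"X-Requested-By": "admin", "Content-Type": "application/json"}

sid = requests.post(f"{livy}/sessions", auth=auth, headers=headers,
                    json={"kind": "pyspark"}).json()["id"]
time.sleep(60)  # crude wait; poll GET /sessions/{sid} for "idle" in practice

requests.post(f"{livy}/sessions/{sid}/statements", auth=auth, headers=headers,
              json={"code": "spark.table('sales_orc').count()"})

requests.delete(f"{livy}/sessions/{sid}", auth=auth, headers=headers)
```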
Implement interactive queries for big data with Interactive Hive
Enable Hive LLAP through Hive settings, manage and configure memory allocation for Hive LLAP jobs, connect BI tools to Interactive Hive clusters
Perform exploratory data analysis by using Hive
Perform interactive querying and visualization, use Ambari Views, use HiveQL, parse CSV files with Hive, use ORC versus Text for caching, use internal and external tables in Hive, use Zeppelin to visualize data
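The CSV-parsing and ORC-versus-text items can be sketched together (names and the path are illustrative; OpenCSVSerde handles quoting but treats every column as a string):

```python
# Sketch: external CSV table via OpenCSVSerde, then a managed ORC copy
# for faster repeated interactive scans.
from pyspark.sql import SparkSession

spark = SparkSession.builder.enableHiveSupport().getOrCreate()

spark.sql("""
    CREATE EXTERNAL TABLE IF NOT EXISTS visits_csv (user_id STRING, page STRING)
    ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
    STORED AS TEXTFILE
    LOCATION 'wasbs:///data/visits'
""")

spark.sql("""
    CREATE TABLE IF NOT EXISTS visits_orc STORED AS ORC
    AS SELECT * FROM visits_csv
""")
```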
Perform interactive processing by using Apache Phoenix on HBase
Use Phoenix in HDInsight; use Phoenix Grammar for queries; configure transactions, user-defined functions, and secondary indexes; identify and optimize Phoenix performance; select between Hive, Spark, and Phoenix on HBase for interactive processing; identify when to share metastore between a Hive cluster and a Spark cluster
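A hedged sketch of Phoenix grammar through the Phoenix Query Server, using the phoenixdb package (the query-server host and table are placeholders):

```python
# Sketch: DDL, UPSERT, and an aggregate query via phoenixdb.
import phoenixdb

conn = phoenixdb.connect("http://example-queryserver:8765/", autocommit=True)
cur = conn.cursor()
cur.execute("CREATE TABLE IF NOT EXISTS logins "
            "(ip VARCHAR PRIMARY KEY, country VARCHAR)")
cur.execute("UPSERT INTO logins VALUES (?, ?)", ("203.0.113.7", "US"))
cur.execute("SELECT country, COUNT(*) FROM logins GROUP BY country")
print(cur.fetchall())
```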

Implement Big Data Real-Time Processing Solutions
Create Spark streaming applications using DStream API
Define DStreams and compare them to Resilient Distributed Datasets (RDDs), start and stop streaming applications, transform DStreams (flatMap, reduceByKey, updateStateByKey), persist long-term data to HBase and SQL, persist long-term data to Azure Data Lake and Azure Blob Storage, stream data from Apache Kafka or Event Hub, visualize streaming data in a Power BI real-time dashboard
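A minimal DStream sketch covering the transform and state items (host, port, and paths are placeholders; updateStateByKey needs a checkpoint directory):

```python
# Sketch: DStream word count with running totals via updateStateByKey.
from pyspark import SparkContext
from pyspark.streaming import StreamingContext

sc = SparkContext(appName="dstream-demo")
ssc = StreamingContext(sc, 10)                     # 10-second batches
ssc.checkpoint("wasbs:///checkpoints/dstream-demo")

lines = ssc.socketTextStream("example-host", 9999)
counts = (lines.flatMap(lambda line: line.split())
               .map(lambda word: (word, 1))
               .reduceByKey(lambda a, b: a + b))

def update(new_values, running):
    return sum(new_values) + (running or 0)

counts.updateStateByKey(update).pprint()

ssc.start()
ssc.awaitTermination()   # stop with ssc.stop(stopSparkContext=True)
```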
Create Spark structured streaming applications
Use the DataFrame and Dataset APIs to create streaming DataFrames and Datasets; create window operations on event time; define window transformations for stateful and stateless operations; use window functions, reduceByKey, and windowing to summarize streaming data; persist long-term data to HBase and SQL; persist long-term data to Azure Data Lake and Azure Blob Storage; stream data from Kafka or Event Hub; visualize streaming data in a Power BI real-time dashboard
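The event-time windowing items, sketched against a placeholder Kafka topic (requires the spark-sql-kafka package on the classpath; schema and names are illustrative):

```python
# Sketch: structured streaming with an event-time window and a watermark.
from pyspark.sql import SparkSession
from pyspark.sql.functions import window, col

spark = SparkSession.builder.appName("structured-demo").getOrCreate()

events = (spark.readStream.format("kafka")
          .option("kafka.bootstrap.servers", "broker1:9092")
          .option("subscribe", "logins")
          .load())

counts = (events.selectExpr("CAST(value AS STRING) AS ip", "timestamp")
          .withWatermark("timestamp", "10 minutes")   # tolerate late events
          .groupBy(window(col("timestamp"), "5 minutes"))
          .count())

query = counts.writeStream.outputMode("update").format("console").start()
query.awaitTermination()
```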
Develop big data real-time processing solutions with Apache Storm
Create Storm clusters for real-time jobs, persist long-term data to HBase and SQL, persist long-term data to Azure Data Lake and Azure Blob Storage, stream data from Kafka or Event Hub, configure event windows in Storm, visualize streaming data in a Power BI real-time dashboard, define Storm topologies and describe the Storm computation graph architecture, create Storm streams and conduct streaming joins, run Storm topologies in local mode for testing, configure Storm applications (workers, debug mode), conduct stream groupings to broadcast tuples across components, debug and monitor Storm jobs
Build solutions that use Kafka
Create Spark and Storm clusters in the virtual network, manage partitions, configure MirrorMaker, start and stop services through Ambari, manage topics
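Topic and partition management can be scripted with the kafka-python admin client (broker address, topic name, and counts are placeholders):

```python
# Sketch: create a partitioned, replicated topic with kafka-python.
from kafka.admin import KafkaAdminClient, NewTopic

admin = KafkaAdminClient(bootstrap_servers="broker1:9092")
admin.create_topics([
    NewTopic(name="security-logs", num_partitions=8, replication_factor=3)
])
```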
Build solutions that use HBase
Identify HBase use cases in HDInsight, use the HBase shell to create, update, and drop HBase tables, monitor an HBase cluster, optimize the performance of an HBase cluster, identify use cases for using Phoenix for analytics of real-time data, implement replication in HBase
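The HBase shell operations map onto the happybase Thrift client roughly as follows (host, table, and column family are placeholders):

```python
# Sketch: create, write to, and scan an HBase table via happybase.
import happybase

conn = happybase.Connection("example-hbase-host")
conn.create_table("visits", {"cf": dict(max_versions=1)})

table = conn.table("visits")
table.put(b"row-001", {b"cf:country": b"US", b"cf:ip": b"203.0.113.7"})
for key, data in table.scan(row_prefix=b"row-"):
    print(key, data)
```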

QUESTION 2
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
You are implementing a batch processing solution by using Azure HDInsight. You have a table that contains sales data.
You plan to implement a query that will return the number of orders by zip code.

You need to minimize the execution time of the queries and to maximize the compression level of the resulting data.
What should you do?

A. Use a shuffle join in an Apache Hive query that stores the data in a JSON format.
B. Use a broadcast join in an Apache Hive query that stores the data in an ORC format.
C. Increase the number of spark.executor.cores in an Apache Spark job that stores the data in a text format.
D. Increase the number of spark.executor.instances in an Apache Spark job that stores the data in a text format.
E. Decrease the level of parallelism in an Apache Spark job that stores the data in a text format.
F. Use an action in an Apache Oozie workflow that stores the data in a text format.
G. Use an Azure Data Factory linked service that stores the data in Azure Data Lake.
H. Use an Azure Data Factory linked service that stores the data in an Azure DocumentDB database.

Answer: B
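B is correct because ORC is the most compressed and scan-efficient of the listed formats, and broadcasting the small side of the join avoids shuffling the large sales table. As a hedged illustration (table names are hypothetical), the same idea in a Hive-style query run through Spark SQL:

```python
# Sketch: broadcast (map) join over an ORC-backed table; the MAPJOIN hint
# ships the small dimension table to every task instead of shuffling sales.
from pyspark.sql import SparkSession

spark = SparkSession.builder.enableHiveSupport().getOrCreate()
spark.sql("""
    SELECT /*+ MAPJOIN(z) */ s.zip_code, COUNT(*) AS orders
    FROM sales_orc s JOIN zip_dim z ON s.zip_code = z.zip_code
    GROUP BY s.zip_code
""").show()
```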


QUESTION 3
Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
You are building a security tracking solution in Apache Kafka to parse security logs. The security logs record an entry each time a user attempts to access an application. Each log entry contains the IP address used to make the attempt and the country from which the attempt originated.
You need to receive notifications when an IP address from outside of the United States is used to access the application.
Solution: Create two new consumers. Create a file import process to send messages. Start the producer.
Does this meet the goal?

A. Yes
B. No

Answer: B
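The proposed solution only moves messages around; nothing ever inspects the country field, so no notification can fire. For contrast, a minimal consumer that would do the required filtering (topic, broker, and message format are hypothetical):

```python
# Sketch: consume security-log messages and alert on non-US access attempts.
import json
from kafka import KafkaConsumer

consumer = KafkaConsumer("security-logs",
                         bootstrap_servers="broker1:9092",
                         value_deserializer=lambda v: json.loads(v))
for record in consumer:
    if record.value.get("country") != "US":
        print("ALERT: foreign access from", record.value.get("ip"))
```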


QUESTION 4
You have an Azure HDInsight cluster.
You need to store data in a file format that maximizes compression and increases read performance.
Which type of file format should you use?

A. ORC
B. Apache Parquet
C. Apache Avro
D. Apache Sequence

Answer: A

Explanation: https://docs.microsoft.com/en-us/azure/data-factory/data-factory-supported-file-and-compression-formats
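In practice, writing ORC from Spark on HDInsight is a one-liner (the table and output path are illustrative):

```python
# Sketch: persist a table as ORC, the columnar format the answer names.
from pyspark.sql import SparkSession

spark = SparkSession.builder.enableHiveSupport().getOrCreate()
spark.table("sales_raw").write.mode("overwrite").orc("wasbs:///out/sales_orc")
```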
