Exam DP-100 topic 2 question 93 discussion

Actual exam question from Microsoft's DP-100
Question #: 93
Topic #: 2

HOTSPOT

You manage an Azure Machine Learning workspace by using the Python SDK v2.

You must create an automated machine learning job to generate a classification model by using data files stored in Parquet format.

You must configure an autoscaling compute target and a data asset for the job.

You need to configure the resources for the job.

Which resource configuration should you use? To answer, select the appropriate options in the answer area.

NOTE: Each correct selection is worth one point.
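For reference, a minimal Python SDK v2 sketch of the kind of job the question describes is shown below; the workspace details, compute target name, data asset reference, and target column are placeholders, not values taken from the question.

```python
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient, Input, automl
from azure.ai.ml.constants import AssetTypes

# Connect to the workspace (subscription, resource group, and workspace are placeholders)
ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

# Automated ML jobs consume training data through an MLTable input
training_data = Input(type=AssetTypes.MLTABLE, path="azureml:parquet-training-table:1")

# Configure the automated ML classification job
classification_job = automl.classification(
    compute="<compute-target-name>",   # name of the autoscaling compute target
    experiment_name="automl-parquet-classification",
    training_data=training_data,
    target_column_name="<label-column>",
    primary_metric="accuracy",
    n_cross_validations=5,
)

# Submit the job to the workspace
returned_job = ml_client.jobs.create_or_update(classification_job)
```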

Suggested Answer:

Comments

f11c733
Highly Voted 11 months, 3 weeks ago
The correct answers are Azure Databricks and mltable.
upvoted 5 times
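If the data-asset half of that answer is mltable, a minimal sketch using the mltable Python package looks like this; the datastore path and output directory are placeholders.

```python
import mltable

# Build a table definition from a folder of Parquet files (placeholder datastore path)
paths = [{"pattern": "azureml://datastores/workspaceblobstore/paths/training-data/*.parquet"}]
tbl = mltable.from_parquet_files(paths)

# Persist the MLTable file so the folder can be registered as an mltable data asset
tbl.save("./parquet-training-table")

# Optional local check: materialize the table into a Pandas DataFrame
df = tbl.to_pandas_dataframe()
```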
LadyCasilda
Highly Voted 1 year, 9 months ago
On exam 18 August 2023
upvoted 5 times
Plb2
Most Recent 1 year, 3 months ago
From https://learn.microsoft.com/en-us/azure/machine-learning/how-to-mltable?view=azureml-api-2&tabs=cli: "Azure Machine Learning doesn't require use of Azure Machine Learning Tables (mltable) for tabular data. You can use Azure Machine Learning File (uri_file) and Folder (uri_folder) types, and your own parsing logic loads the data into a Pandas or Spark data frame. If you have a simple CSV file or Parquet folder, it's easier to use Azure Machine Learning Files/Folders instead of Tables."
upvoted 2 times
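As a sketch of the uri_folder approach described in that excerpt (asset name and datastore path are placeholders), the folder of Parquet files is registered as a uri_folder data asset and parsed with your own logic inside the job script:

```python
from azure.identity import DefaultAzureCredential
from azure.ai.ml import MLClient
from azure.ai.ml.entities import Data
from azure.ai.ml.constants import AssetTypes

# Connect to the workspace (identifiers are placeholders)
ml_client = MLClient(
    DefaultAzureCredential(),
    subscription_id="<subscription-id>",
    resource_group_name="<resource-group>",
    workspace_name="<workspace-name>",
)

# Register the folder of Parquet files as a uri_folder data asset (placeholder path)
parquet_folder = Data(
    name="parquet-training-folder",
    path="azureml://datastores/workspaceblobstore/paths/training-data/",
    type=AssetTypes.URI_FOLDER,
    description="Folder of Parquet files for training",
)
ml_client.data.create_or_update(parquet_folder)

# Inside the training script, your own parsing logic loads the mounted folder, e.g.:
# import pandas as pd
# df = pd.read_parquet(input_folder_path)  # reads every Parquet file in the folder
```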
Lion007
1 year, 5 months ago
Correct: Azure Databricks and uri_folder.
Compute target: Azure Databricks. Azure Databricks supports autoscaling of workers, dynamically reallocating workers to match the computational demands of the job and achieving high cluster utilization without provisioning the cluster for a specific workload.
Data asset: uri_folder. This lets the machine learning job access all the Parquet files stored in the specified directory. With multiple Parquet data files, you use a URI that points to the folder containing all of them. See the sketch below.
upvoted 4 times
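To illustrate the uri_folder part of that reasoning, here is a sketch of handing the folder to a job so the script sees the whole directory of Parquet files; the code path, data asset reference, environment, and compute name are placeholders.

```python
from azure.ai.ml import command, Input
from azure.ai.ml.constants import AssetTypes

# Pass the registered uri_folder asset to a job; the script receives the folder path
job = command(
    code="./src",  # placeholder folder containing train.py
    command="python train.py --input_folder ${{inputs.training_folder}}",
    inputs={
        "training_folder": Input(
            type=AssetTypes.URI_FOLDER,
            path="azureml:parquet-training-folder:1",  # placeholder data asset reference
        )
    },
    environment="azureml:AzureML-sklearn-1.0-ubuntu20.04-py38-cpu@latest",  # placeholder environment
    compute="<compute-target-name>",
)
```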
damaldon
1 year, 11 months ago
Correct. uri_folder: read a folder of Parquet/CSV files into Pandas/Spark.
upvoted 3 times
Batman160591
1 year, 11 months ago
Seems correct:)
upvoted 1 times
Community vote distribution: A (35%), C (25%), B (20%), Other