Exam DP-100 topic 3 question 4 discussion

Actual exam question from Microsoft's DP-100
Question #: 4
Topic #: 3

Note: This question is part of a series of questions that present the same scenario. Each question in the series contains a unique solution that might meet the stated goals. Some question sets might have more than one correct solution, while others might not have a correct solution.
After you answer a question in this section, you will NOT be able to return to it. As a result, these questions will not appear in the review screen.
You are using Azure Machine Learning to run an experiment that trains a classification model.
You want to use Hyperdrive to find parameters that optimize the AUC metric for the model. You configure a HyperDriveConfig for the experiment by running the following code:

You plan to use this configuration to run a script that trains a random forest model and then tests it with validation data. The label values for the validation data are stored in a variable named y_test, and the predicted probabilities from the model are stored in a variable named y_predicted.
You need to add logging to the script to allow Hyperdrive to optimize hyperparameters for the AUC metric.
Solution: Run the following code:

Does the solution meet the goal?

  • A. Yes
  • B. No
Suggested Answer: B
Explanation:
Use a solution with logging.info(message) instead.
Note: Python printing/logging example:
logging.info(message)
Destination: Driver logs, Azure Machine Learning designer
Reference:
https://docs.microsoft.com/en-us/azure/machine-learning/how-to-debug-pipelines
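
For reference, here is a minimal sketch of the run.log() approach that most of the comments below converge on; it assumes azureml-core and scikit-learn are installed, and uses placeholder values standing in for the question's y_test and y_predicted:

import numpy as np
from azureml.core import Run
from sklearn.metrics import roc_auc_score

run = Run.get_context()  # the child run Hyperdrive starts for this hyperparameter combination

# Placeholder validation labels and predicted probabilities (the question's y_test / y_predicted)
y_test = np.array([0, 1, 1, 0, 1])
y_predicted = np.array([0.2, 0.8, 0.6, 0.3, 0.9])

auc = roc_auc_score(y_test, y_predicted)
run.log("AUC", float(auc))  # the logged name must match primary_metric_name in the HyperDriveConfig

run.complete()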

Comments

chaudha4
Highly Voted 3 years, 6 months ago
The question is not about just logging AUC, but logging in a way that allows Hyperdrive to optimize hyperparameters for the AUC metric. So you must log using the run instance; that way Hyperdrive has access to that metric to compare it with other runs. So the correct answer is "No".
upvoted 18 times
Narendra05
Highly Voted 3 years, 4 months ago
run.log() is the correct answer. See https://docs.microsoft.com/en-us/azure/machine-learning/how-to-log-view-metrics
upvoted 9 times
evangelist
Most Recent 5 months ago
Selected Answer: B
# Get the current run context
run = Run.get_context()
# Log the AUC score
run.log("AUC", auc)
upvoted 1 times
synapse
2 years, 7 months ago
Selected Answer: B
Copying: The question is not about just logging AUC, but logging in a way that allows Hyperdrive to optimize hyperparameters for the AUC metric. So you must log using the run instance; that way Hyperdrive has access to that metric to compare with other runs. So the correct answer is "No".
upvoted 1 times
azurecert2021
3 years, 4 months ago
The question says "You need to add logging to the script to allow Hyperdrive to optimize hyperparameters for the AUC metric." If we go through the following links, run.log is used to log np.float(reg), whereas print is used for general debugging:
# Train a logistic regression model
print('Training a logistic regression model with regularization rate of', reg)
run.log('Regularization Rate', np.float(reg))
model = LogisticRegression(C=1/reg, solver="liblinear").fit(X_train, y_train)
https://github.com/MicrosoftLearning/DP100/blob/master/08A%20-%20Tuning%20Hyperparameters.ipynb
https://sites.google.com/view/raybellwaves/courses/build-ai-solutions-with-azure-machine-learning
upvoted 2 times
anjurad
3 years, 6 months ago
For Hyperdrive to optimise, it has to extract the chosen metric from the experiment run, through what has been logged. The logged name has to match the primary metric name specified in the config. The values aren't being logged in the example script, and printing doesn't capture the key/value pairs required to do the matching and comparison.
upvoted 3 times
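
An illustrative sketch of the config-side matching this comment describes; the script name, sampling space, and compute target here are assumptions, not the exam's hidden code:

from azureml.core import ScriptRunConfig
from azureml.train.hyperdrive import HyperDriveConfig, RandomParameterSampling, PrimaryMetricGoal, choice

script_config = ScriptRunConfig(source_directory=".", script="train.py", compute_target="cpu-cluster")

param_sampling = RandomParameterSampling({
    "--n_estimators": choice(50, 100, 200)  # hypothetical random forest hyperparameter
})

hyperdrive_config = HyperDriveConfig(
    run_config=script_config,
    hyperparameter_sampling=param_sampling,
    primary_metric_name="AUC",                       # must be exactly the name passed to run.log()
    primary_metric_goal=PrimaryMetricGoal.MAXIMIZE,  # optimise for the highest AUC
    max_total_runs=20
)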
levm39
3 years, 6 months ago
The print statement can be used to debug, but in this piece of code you are only printing np.float(AUC), i.e. the conversion of a value to float; you are not printing any debugging information from the algorithm.
upvoted 4 times
dev2dev
3 years, 7 months ago
Answer is Yes. We can use either logging or print, as per the referenced document: print(val) or logging.info(message).
upvoted 3 times
stonefl
3 years, 7 months ago
Yes, agree. The correct answer should be A.
upvoted 1 times
Anty85
3 years, 7 months ago
Indeed. https://docs.microsoft.com/en-us/azure/machine-learning/how-to-debug-pipelines - under "Logging options and behaviour".
upvoted 1 times
cab123
3 years, 6 months ago
But this is not for debugging; it is to be used by Hyperdrive.
upvoted 9 times
VJPrakash
3 years, 2 months ago
If we are able to debug, would we not be able to extract that as well? The documentation says it is logged to: Driver logs, Azure Machine Learning designer.
upvoted 1 times
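
To make the distinction in this sub-thread concrete, a small contrast sketch under the same assumptions as the earlier example: print and logging.info end up in the run's log files for humans to read, while only run.log() records a named metric that Hyperdrive can compare across child runs.

import logging
from azureml.core import Run

logging.basicConfig(level=logging.INFO)
run = Run.get_context()

auc = 0.87  # placeholder for the computed AUC value

print("AUC:", auc)            # visible in the driver/stdout logs only
logging.info("AUC: %s", auc)  # also goes to the log files, not to run metrics
run.log("AUC", float(auc))    # recorded as a run metric; this is what Hyperdrive reads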
Community vote distribution: A (35%), C (25%), B (20%), Other