
Exam AWS Certified Machine Learning - Specialty topic 1 question 54 discussion

Which of the following metrics should a Machine Learning Specialist generally use to compare/evaluate machine learning classification models against each other?

  • A. Recall
  • B. Misclassification rate
  • C. Mean absolute percentage error (MAPE)
  • D. Area Under the ROC Curve (AUC)
Suggested Answer: D

Comments

DonaldCMLIN
Highly Voted 3 years, 1 month ago
Recall is only one factor in classification; AUC aggregates more factors into a comprehensive judgement. https://docs.aws.amazon.com/zh_tw/machine-learning/latest/dg/cross-validation.html The answer might be D.
upvoted 38 times
devsean
3 years, 1 month ago
AUC is used to tune hyperparameters within a single model, not to compare different models.
upvoted 6 times
DScode
3 years ago
Not might be, but should be D
upvoted 5 times
AjoseO
Highly Voted 1 year, 8 months ago
Selected Answer: D
Area Under the ROC Curve (AUC) is a commonly used metric to compare and evaluate machine learning classification models against each other. The AUC measures the model's ability to distinguish between positive and negative classes, and its performance across different classification thresholds. The AUC ranges from 0 to 1, with a score of 1 representing a perfect classifier and a score of 0.5 representing a classifier that is no better than random. While recall is an important evaluation metric for classification models, it alone is not sufficient to compare and evaluate different models against each other. Recall measures the proportion of actual positive cases that are correctly identified as positive, but does not take into account the false positive rate.
upvoted 5 times
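To make AjoseO's point concrete, here is a minimal sketch (my own illustration, assuming scikit-learn and synthetic data, not anything from the thread) that compares two classifiers by both AUC and recall:

```python
# Illustrative only: compare two classifiers by AUC and recall on the same
# imbalanced synthetic data (assumes scikit-learn is installed).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, recall_score

X, y = make_classification(n_samples=2000, weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for model in (LogisticRegression(max_iter=1000), DecisionTreeClassifier(max_depth=3)):
    model.fit(X_tr, y_tr)
    scores = model.predict_proba(X_te)[:, 1]  # score for the positive class
    preds = model.predict(X_te)               # hard labels at the 0.5 threshold
    # AUC summarizes ranking quality across all thresholds;
    # recall is tied to the single threshold behind `preds`.
    print(type(model).__name__,
          "AUC=%.3f" % roc_auc_score(y_te, scores),
          "recall=%.3f" % recall_score(y_te, preds))
```

AUC yields one threshold-free number per model, which is why it is the usual head-to-head comparison metric.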
ccpmad
1 year, 3 months ago
ChatGPT answers; all your answers are from ChatGPT.
upvoted 2 times
AsusTuf
Most Recent 1 year ago
why not C?
upvoted 1 times
Scrook
5 months, 3 weeks ago
It's a classification problem; MAPE is for regression.
upvoted 2 times
Mickey321
1 year, 2 months ago
Selected Answer: D
option D
upvoted 1 times
Valcilio
1 year, 7 months ago
Selected Answer: D
AUC is the best metric.
upvoted 1 times
cloud_trail
3 years ago
D. AUC is always used to compare ML classification models; the others can all be misleading. Consider cases where the classes are highly imbalanced: there, accuracy, misclassification rate and the like are useless. Recall is only useful in combination with precision or specificity, which is what AUC effectively does.
upvoted 4 times
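cloud_trail's imbalance point is easy to demonstrate with a sketch (my own, assuming scikit-learn; the numbers come from this synthetic setup, not the exam question): a degenerate "always negative" classifier looks good on accuracy but is exposed by recall and AUC.

```python
# Illustrative only: with ~95% negatives, predicting "negative" for everything
# yields high accuracy, zero recall, and a chance-level AUC of 0.5.
import numpy as np
from sklearn.metrics import accuracy_score, recall_score, roc_auc_score

rng = np.random.default_rng(0)
y_true = (rng.random(1000) < 0.05).astype(int)    # ~5% positive class
y_pred = np.zeros_like(y_true)                    # always predict negative
flat_scores = np.zeros(len(y_true), dtype=float)  # constant scores, no ranking

print("accuracy:", accuracy_score(y_true, y_pred))  # high, ~0.95
print("recall:", recall_score(y_true, y_pred))      # 0.0, catches no positives
print("AUC:", roc_auc_score(y_true, flat_scores))   # 0.5, chance level
```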
harmanbirstudy
3 years ago
AUC/ROC works well in the special case of binary classification, not in general.
upvoted 5 times
MohamedSharaf
3 years ago
AUC is for comparing different models in terms of their separation power: 0.5 is useless (the diagonal line), 1 is perfect. I would go with F1 score if it were an option. However, taking recall alone as the metric for comparing models would be misleading.
upvoted 4 times
harmanbirstudy
3 years ago
It's accuracy, precision, recall, and F1 score; there is no mention of AUC/ROC for comparing models in many articles, so the answer is A.
upvoted 1 times
DavidRou
1 year, 1 month ago
When you draw the ROC graph, you're plotting the True Positive Rate against the False Positive Rate. The first one is also called recall ;)
upvoted 1 times
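DavidRou's observation can be checked directly: the TPR axis of the ROC curve is recall evaluated at each threshold (a quick sketch of mine assuming scikit-learn, with made-up scores):

```python
# Illustrative only: TPR on the ROC curve equals recall at each threshold.
import numpy as np
from sklearn.metrics import roc_curve, recall_score

y_true = np.array([0, 1, 1, 0, 1, 0, 1, 0])
scores = np.array([0.2, 0.9, 0.6, 0.3, 0.8, 0.1, 0.7, 0.4])

fpr, tpr, thresholds = roc_curve(y_true, scores)
for i in range(1, len(thresholds)):  # skip the initial placeholder threshold
    preds = (scores >= thresholds[i]).astype(int)
    # TPR at this threshold is exactly the recall of the thresholded classifier
    assert tpr[i] == recall_score(y_true, preds)
print("TPR == recall at every ROC threshold")
```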
Thai_Xuan
3 years ago
D. AUC is scale- and threshold-invariant, enabling it to compare models. https://towardsdatascience.com/how-to-evaluate-a-classification-machine-learning-model-d81901d491b1
upvoted 1 times
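Thai_Xuan's scale-invariance claim is easy to illustrate: AUC depends only on how the scores rank the examples, so any strictly monotonic rescaling leaves it unchanged (a sketch of mine assuming scikit-learn, not taken from the linked article):

```python
# Illustrative only: AUC is invariant under strictly monotonic score transforms.
import numpy as np
from sklearn.metrics import roc_auc_score

y_true = np.array([0, 0, 1, 1, 0, 1])
scores = np.array([0.10, 0.40, 0.35, 0.80, 0.20, 0.70])

auc_raw = roc_auc_score(y_true, scores)
auc_linear = roc_auc_score(y_true, 100 * scores - 3)            # linear rescale
auc_sigmoid = roc_auc_score(y_true, 1 / (1 + np.exp(-scores)))  # sigmoid squash

print(auc_raw, auc_linear, auc_sigmoid)  # all three are identical
```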
johnny_chick
3 years ago
Actually A, B and D seem to be correct
upvoted 1 times
deep_n
3 years ago
Probably D https://towardsdatascience.com/metrics-for-evaluating-machine-learning-classification-models-python-example-59b905e079a5
upvoted 2 times
hughhughhugh
3 years ago
why not B?
upvoted 1 times
PRC
3 years ago
Answer should be D. ROC is used to determine the diagnostic capability of a classification model as the threshold varies.
upvoted 3 times
Hypermasterd
3 years, 1 month ago
Should be A. A is the only one that generally works for classification; AUC only works with binary classification.
upvoted 4 times
oMARKOo
3 years ago
Actually, AUC can be generalized to multi-class problems. https://www.datascienceblog.net/post/machine-learning/performance-measures-multi-class-problems/
upvoted 1 times
sebas10
3 years ago
Could be, if you mean a multiclass classification problem. But in that context recall can't be compared directly, because first you have to decide the recall of which class: in a 3-class problem we have 3 recalls. Or do you assume a weighted or average recall? Did you consider that?
upvoted 2 times
mrsimoes
3 years ago
Also, in multi-class classification, if you follow a One-vs-Rest strategy you can still use AUC. https://scikit-learn.org/stable/auto_examples/model_selection/plot_roc.html#sphx-glr-auto-examples-model-selection-plot-roc-py
upvoted 1 times
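The One-vs-Rest generalization mrsimoes links is available directly in scikit-learn's `roc_auc_score` (a sketch of mine assuming scikit-learn and its bundled iris dataset):

```python
# Illustrative only: multiclass AUC via one-vs-rest averaging.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)  # one probability column per class
# "ovr" treats each class as positive-vs-rest and macro-averages the AUCs
auc_ovr = roc_auc_score(y_te, proba, multi_class="ovr")
print("One-vs-Rest AUC: %.3f" % auc_ovr)
```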
stamarpadar
3 years, 1 month ago
The correct answer is D. Another benefit of using AUC is that it is classification-threshold-invariant, like log loss. https://towardsdatascience.com/the-5-classification-evaluation-metrics-you-must-know-aa97784ff226
upvoted 3 times
Community vote distribution: A (35%), C (25%), B (20%), Other