
Exam AWS Certified Machine Learning - Specialty topic 1 question 54 discussion

Which of the following metrics should a Machine Learning Specialist generally use to compare/evaluate machine learning classification models against each other?

  • A. Recall
  • B. Misclassification rate
  • C. Mean absolute percentage error (MAPE)
  • D. Area Under the ROC Curve (AUC)
Suggested Answer: D
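As a minimal sketch of what "comparing models by AUC" looks like in practice (not part of the original question; synthetic data and scikit-learn assumed), two classifiers can be trained on the same split and ranked by their ROC AUC on held-out data:

```python
# Hypothetical sketch: comparing two classifiers by ROC AUC on the same held-out split.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

results = {}
for model in (LogisticRegression(max_iter=1000), DecisionTreeClassifier(random_state=0)):
    model.fit(X_tr, y_tr)
    scores = model.predict_proba(X_te)[:, 1]   # AUC needs scores, not hard labels
    results[type(model).__name__] = roc_auc_score(y_te, scores)

print(results)   # the model with the higher AUC separates the classes better
```

The key point the answer relies on: AUC is computed from ranked scores over all thresholds, so the comparison does not depend on picking one decision threshold per model.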

Comments

DonaldCMLIN
Highly Voted 2 years, 9 months ago
Recall is only one factor in evaluating a classifier; AUC combines more factors into a comprehensive judgement. https://docs.aws.amazon.com/zh_tw/machine-learning/latest/dg/cross-validation.html The answer might be D.
upvoted 38 times
DScode
2 years, 8 months ago
Not "might be": it should be D.
upvoted 5 times
devsean
2 years, 9 months ago
AUC is used to tune hyperparameters within a single model, not to compare different models.
upvoted 6 times
harmanbirstudy
Highly Voted 2 years, 7 months ago
AUC/ROC works well in the special case of binary classification, not in general.
upvoted 5 times
MohamedSharaf
2 years, 7 months ago
AUC is used to compare different models in terms of their separation power: 0.5 is useless (it's the diagonal line), and 1 is perfect. I would go with the F1 score if it were an option. However, taking recall alone as the metric for comparing models would be misleading.
upvoted 4 times
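The point above about recall being misleading on its own can be illustrated with a toy example (my own sketch, scikit-learn assumed): a degenerate model that predicts "positive" for everything gets perfect recall, and only precision reveals the problem.

```python
# Hypothetical illustration: recall alone is easy to game. Predicting
# "positive" for every sample yields recall = 1.0 but poor precision.
import numpy as np
from sklearn.metrics import precision_score, recall_score

y_true = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1])   # 2 positives out of 10
y_all_pos = np.ones_like(y_true)                      # "always positive" model

rec = recall_score(y_true, y_all_pos)     # 1.0: every true positive is caught
prec = precision_score(y_true, y_all_pos) # 0.2: most alarms are false
print(rec, prec)
```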
AsusTuf
Most Recent 8 months, 1 week ago
why not C?
upvoted 1 times
Scrook
1 month ago
It's a classification problem; MAPE is for regression.
upvoted 1 times
Mickey321
9 months, 3 weeks ago
Selected Answer: D
option D
upvoted 1 times
Valcilio
1 year, 3 months ago
Selected Answer: D
AUC is the best metric.
upvoted 1 times
AjoseO
1 year, 4 months ago
Selected Answer: D
Area Under the ROC Curve (AUC) is a commonly used metric to compare and evaluate machine learning classification models against each other. The AUC measures the model's ability to distinguish between positive and negative classes, and its performance across different classification thresholds. The AUC ranges from 0 to 1, with a score of 1 representing a perfect classifier and a score of 0.5 representing a classifier that is no better than random.

While recall is an important evaluation metric for classification models, it alone is not sufficient to compare and evaluate different models against each other. Recall measures the proportion of actual positive cases that are correctly identified as positive, but does not take into account the false positive rate.
upvoted 4 times
ccpmad
10 months, 3 weeks ago
ChatGPT answers; all your answers are from ChatGPT.
upvoted 1 times
cloud_trail
2 years, 7 months ago
D. AUC is always used to compare ML classification models; the others can all be misleading. Consider cases where the classes are highly imbalanced: there, accuracy, misclassification rate, and the like are useless. Recall is only useful in combination with precision or specificity, which is what AUC effectively does.
upvoted 4 times
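The imbalance argument above can be made concrete with a small sketch (my own, scikit-learn and NumPy assumed): on a 95%-negative dataset, a model that always predicts the majority class scores 0.95 accuracy, while AUC on uninformative scores sits near 0.5 and exposes it as useless.

```python
# Hypothetical illustration: on imbalanced data, accuracy flatters a
# useless model while AUC exposes it.
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(0)
y_true = np.array([0] * 950 + [1] * 50)   # 95% negative class
majority_pred = np.zeros_like(y_true)      # always predict the majority class
random_scores = rng.random(len(y_true))    # uninformative scores

acc = accuracy_score(y_true, majority_pred)   # 0.95: looks great
auc = roc_auc_score(y_true, random_scores)    # ~0.5: no better than chance
print(acc, auc)
```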
harmanbirstudy
2 years, 7 months ago
It's accuracy, precision, recall, and F1 score; there is no mention of AUC/ROC for comparing models in many articles, so the answer is A.
upvoted 1 times
DavidRou
9 months, 1 week ago
When you draw the ROC curve, you're plotting the True Positive Rate against the False Positive Rate. The first one is also called recall ;)
upvoted 1 times
Thai_Xuan
2 years, 7 months ago
D. AUC is scale- and threshold-invariant, which lets it compare models. https://towardsdatascience.com/how-to-evaluate-a-classification-machine-learning-model-d81901d491b1
upvoted 1 times
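The invariance claimed above is easy to check in a sketch (mine, scikit-learn assumed): AUC is rank-based, so any monotonic rescaling of the scores leaves it unchanged.

```python
# Hypothetical check of scale invariance: a monotonic rescaling of the
# scores preserves their ranking, so AUC is identical.
import numpy as np
from sklearn.metrics import roc_auc_score

y = np.array([0, 0, 1, 1, 0, 1])
scores = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.9])

auc_raw = roc_auc_score(y, scores)
auc_scaled = roc_auc_score(y, scores * 100 - 3)   # monotonic transform
print(auc_raw, auc_scaled)
```

This is why AUC comparisons don't require the two models' scores to be calibrated to the same scale, unlike accuracy at a fixed 0.5 cutoff.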
johnny_chick
2 years, 8 months ago
Actually A, B and D seem to be correct
upvoted 1 times
deep_n
2 years, 8 months ago
Probably D https://towardsdatascience.com/metrics-for-evaluating-machine-learning-classification-models-python-example-59b905e079a5
upvoted 2 times
hughhughhugh
2 years, 8 months ago
why not B?
upvoted 1 times
PRC
2 years, 8 months ago
The answer should be D. ROC is used to determine the diagnostic capability of a classification model as the decision threshold varies.
upvoted 3 times
Hypermasterd
2 years, 8 months ago
Should be A. A is the only one that generally works for classification; AUC only works with binary classification.
upvoted 4 times
sebas10
2 years, 8 months ago
Could be, if you mean a multi-class classification problem. But in that context recall can't be compared directly, because first you have to decide the recall of which class: in a 3-class problem there are 3 recalls. Or do you assume a weighted or averaged recall? Is that what you have in mind?
upvoted 2 times
mrsimoes
2 years, 8 months ago
Also, in multi-class classification, if you follow a One-vs-Rest strategy you can still use AUC. https://scikit-learn.org/stable/auto_examples/model_selection/plot_roc.html#sphx-glr-auto-examples-model-selection-plot-roc-py
upvoted 1 times
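The One-vs-Rest extension mentioned above is built into scikit-learn's `roc_auc_score` via `multi_class="ovr"`; a minimal sketch (mine, using the Iris dataset as an assumption, not part of the linked example):

```python
# Hypothetical sketch: multi-class AUC via a One-vs-Rest average.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0, stratify=y)

clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
proba = clf.predict_proba(X_te)                     # shape (n_samples, 3)
auc_ovr = roc_auc_score(y_te, proba, multi_class="ovr")
print(round(auc_ovr, 3))
```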
oMARKOo
2 years, 7 months ago
Actually, AUC can be generalized to multi-class problems. https://www.datascienceblog.net/post/machine-learning/performance-measures-multi-class-problems/
upvoted 1 times
stamarpadar
2 years, 8 months ago
The correct answer is D. Another benefit of AUC is that it is classification-threshold-invariant, like log loss. https://towardsdatascience.com/the-5-classification-evaluation-metrics-you-must-know-aa97784ff226
upvoted 3 times
Community vote distribution: A (35%), C (25%), B (20%), Other