Professional Machine Learning Engineer Exam: Topic 1, Question 309 Discussion

Actual exam question from Google's Professional Machine Learning Engineer exam
Question #: 309
Topic #: 1

You are developing a natural language processing model that analyzes customer feedback to identify positive, negative, and neutral experiences. During the testing phase, you notice that the model demonstrates a significant bias against certain demographic groups, leading to skewed analysis results. You want to address this issue following Google's responsible AI practices. What should you do?

  • A. Use Vertex AI's model evaluation to assess bias in the model's predictions, and use post-processing to adjust outputs for identified demographic discrepancies.
  • B. Implement a more complex model architecture that can capture nuanced patterns in language to reduce bias.
  • C. Audit the training dataset to identify underrepresented groups and augment the dataset with additional samples before retraining the model.
  • D. Use Vertex Explainable AI to generate explanations and systematically adjust the predictions to address identified biases.
Suggested Answer: C
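For readers who want to see what the suggested answer (C) looks like in practice, here is a minimal sketch of auditing group representation and then oversampling underrepresented groups before retraining. It assumes the feedback lives in a pandas DataFrame with hypothetical "text", "label", and "demographic_group" columns; none of these names come from the question itself.

```python
# Minimal sketch of option C: audit group representation, then
# oversample underrepresented groups before retraining.
# File name and column names are hypothetical placeholders.
import pandas as pd

df = pd.read_csv("feedback_train.csv")  # hypothetical training set

# Audit: how many examples does each demographic group have?
counts = df["demographic_group"].value_counts()
print(counts)

# Augment: naively oversample each group up to the largest group's size.
target = counts.max()
balanced = pd.concat(
    [
        grp.sample(n=target, replace=True, random_state=42)
        for _, grp in df.groupby("demographic_group")
    ],
    ignore_index=True,
)
# `balanced` now has equal representation per group; retrain on it.
```

Note that naive oversampling only duplicates existing rows; what option C actually calls for is collecting genuinely new samples or using targeted augmentation for the underrepresented groups.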

Comments

qaz09
1 month ago
Selected Answer: A
A: uses Google's recommended tool for bias evaluation (https://cloud.google.com/vertex-ai/docs/evaluation/model-bias-metrics). B: a more complex model does not address bias directly. C: manual work, and we cannot assume that more samples can be added to the training dataset. D: adjusting predictions is manual work.
upvoted 1 times
spradhan
2 weeks, 5 days ago
Yes, but we cannot adjust the output in post-processing; pre-processing mitigation would make more sense. Per Google (https://developers.google.com/machine-learning/crash-course/fairness/mitigating-bias), bias can be mitigated by augmenting the data, collecting more data, or changing the loss function.
upvoted 1 times
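To make the A-vs-C debate above concrete: the kind of sliced evaluation that Vertex AI's model bias metrics report can be approximated by hand. Below is a rough sketch using pandas and scikit-learn rather than the Vertex API, with all file and column names being hypothetical assumptions.

```python
# Rough sketch of sliced evaluation: compare model quality per
# demographic group. Large gaps between slices indicate biased
# predictions. Column names are hypothetical placeholders.
import pandas as pd
from sklearn.metrics import f1_score

# Hypothetical eval set with y_true, y_pred, demographic_group columns.
eval_df = pd.read_csv("feedback_eval.csv")

for group, slice_df in eval_df.groupby("demographic_group"):
    score = f1_score(slice_df["y_true"], slice_df["y_pred"], average="macro")
    print(f"{group}: macro-F1 = {score:.3f}")
```

Whichever answer you prefer, this kind of per-slice measurement is the first step: it quantifies the discrepancy before you decide whether to fix it in the data (C) or in the outputs (A).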
hit_cloudie
2 months, 3 weeks ago
Selected Answer: C
Google's Responsible AI best practices prioritize dataset auditing and fairness through balanced representation. This helps address the root cause of the bias (biased training data).
upvoted 1 times
Community vote distribution: A (35%), C (25%), B (20%), Other