Exam Professional Machine Learning Engineer topic 1 question 178 discussion

Actual exam question from Google's Professional Machine Learning Engineer exam
Question #: 178
Topic #: 1

You work for a bank. You have created a custom model to predict whether a loan application should be flagged for human review. The input features are stored in a BigQuery table. The model is performing well, and you plan to deploy it to production. Due to compliance requirements, the model must provide explanations for each prediction. You want to add this functionality to your model code with minimal effort and provide explanations that are as accurate as possible. What should you do?

  • A. Create an AutoML tabular model by using the BigQuery data with integrated Vertex Explainable AI.
  • B. Create a BigQuery ML deep neural network model and use the ML.EXPLAIN_PREDICT method with the num_integral_steps parameter.
  • C. Upload the custom model to Vertex AI Model Registry and configure feature-based attribution by using sampled Shapley with input baselines.
  • D. Update the custom serving container to include sampled Shapley-based explanations in the prediction outputs.
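For reference, option B's ML.EXPLAIN_PREDICT is BigQuery ML's built-in explanation function: it returns per-row feature attributions, and for deep neural network models the num_integral_steps option controls the accuracy of the integrated-gradients approximation. A minimal sketch using the BigQuery Python client; the project, dataset, model, and table names are hypothetical:

    from google.cloud import bigquery

    client = bigquery.Client(project="my-project")  # hypothetical project ID

    # ML.EXPLAIN_PREDICT returns feature attributions for every input row.
    # For DNN models the attributions use integrated gradients, and
    # num_integral_steps trades attribution accuracy for computation.
    query = """
    SELECT *
    FROM ML.EXPLAIN_PREDICT(
      MODEL `my_dataset.loan_dnn_model`,
      (SELECT * FROM `my_dataset.loan_applications`),
      STRUCT(5 AS top_k_features, 50 AS num_integral_steps))
    """
    for row in client.query(query).result():
        print(dict(row))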
Suggested Answer: C
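For anyone who wants to see what option C looks like in practice: the Vertex AI Python SDK's Model.upload accepts an explanation configuration alongside the model artifacts. A minimal sketch, assuming a TensorFlow SavedModel with 20 numeric input features; the project, bucket path, tensor names, and baseline values are hypothetical:

    from google.cloud import aiplatform

    aiplatform.init(project="my-project", location="us-central1")  # hypothetical

    # Describe the model's input and output tensors, and supply input baselines
    # (e.g. zeros or training-set medians) that attributions are measured against.
    explanation_metadata = aiplatform.explain.ExplanationMetadata(
        inputs={
            "loan_features": aiplatform.explain.ExplanationMetadata.InputMetadata(
                input_tensor_name="feature_input",  # hypothetical tensor name
                input_baselines=[[0.0] * 20],       # hypothetical zero baseline
            )
        },
        outputs={
            "flag_for_review": aiplatform.explain.ExplanationMetadata.OutputMetadata(
                output_tensor_name="scores"         # hypothetical tensor name
            )
        },
    )

    # Sampled Shapley: a higher path_count gives more accurate attributions
    # at the cost of slower explanation requests.
    explanation_parameters = aiplatform.explain.ExplanationParameters(
        {"sampled_shapley_attribution": {"path_count": 25}}
    )

    model = aiplatform.Model.upload(
        display_name="loan-review-flagger",
        artifact_uri="gs://my-bucket/loan-model/",  # hypothetical path
        serving_container_image_uri=(
            "us-docker.pkg.dev/vertex-ai/prediction/tf2-cpu.2-12:latest"
        ),
        explanation_metadata=explanation_metadata,
        explanation_parameters=explanation_parameters,
    )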

Comments

fitri001
6 months, 1 week ago
Selected Answer: C
  • Existing custom model: this approach leverages your already-developed, well-performing model. There's no need to rebuild it using AutoML or BigQuery ML, which might require significant code changes.
  • Vertex Explainable AI (XAI): Vertex AI offers XAI integration with custom models through feature-based attribution methods like sampled Shapley. This provides explanations for each prediction without requiring major modifications to your model code.
  • Sampled Shapley with baselines: sampled Shapley is a robust attribution method for explaining model predictions. Using input baselines (like zero values) helps improve the interpretability of explanations, especially for features with large ranges.
upvoted 2 times
...
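To fitri001's point about per-prediction explanations: once a model uploaded with an explanation spec is deployed, every explain request returns attributions for each instance. A sketch continuing the hypothetical setup above:

    # Deploy the model uploaded with the explanation spec, then request
    # attributions for an individual instance (hypothetical payload shape).
    endpoint = model.deploy(machine_type="n1-standard-4")

    response = endpoint.explain(instances=[{"feature_input": [0.3] * 20}])
    for explanation in response.explanations:
        for attribution in explanation.attributions:
            # One attribution value per input feature for this prediction.
            print(attribution.feature_attributions)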
guilhermebutzke
8 months, 3 weeks ago
Selected Answer: C
According to the documentation at https://cloud.google.com/vertex-ai/docs/explainable-ai/overview, Vertex AI supports feature-based attribution, including sampled Shapley explanations. For providing explanations for each prediction in a loan classification problem, I believe feature-based attribution is the right approach. Updating the custom serving container to include sampled Shapley-based explanations, as option D suggests, would require more effort, since a custom model deployed on Vertex AI already offers this explanation option.
upvoted 3 times
...
sonicclasps
8 months, 4 weeks ago
Selected Answer: C
"minimal effort and provide explanations that are as accurate as possible" this makes the answer C, based on this: https://cloud.google.com/vertex-ai/docs/explainable-ai/improving-explanations
upvoted 2 times
...
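The improving-explanations page linked above recommends checking the approximation error returned with each attribution: sampled Shapley is stochastic, so a large error is a signal to raise path_count or choose more representative baselines. Continuing the hypothetical sketch:

    # Each attribution carries an approximation error; if it is high, consider
    # increasing path_count or moving the baselines closer to typical inputs.
    for explanation in response.explanations:
        for attribution in explanation.attributions:
            print("approximation error:", attribution.approximation_error)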
daidai75
9 months ago
Selected Answer: C
Feature attribution is supported for all types of models (both AutoML and custom-trained), frameworks (TensorFlow, scikit-learn, XGBoost), BigQuery ML models, and modalities (images, text, tabular, video). https://cloud.google.com/vertex-ai/docs/explainable-ai/overview
upvoted 3 times
...
36bdc1e
9 months, 3 weeks ago
C. You can find the answer here: https://cloud.google.com/vertex-ai/docs/explainable-ai/overview
upvoted 2 times
...
b1a8fae
9 months, 3 weeks ago
Selected Answer: D
pikachu007's answer made me reconsider.
upvoted 1 times
daidai75
9 months ago
According to https://cloud.google.com/vertex-ai/docs/explainable-ai/overview, feature attribution is supported for all types of models (both AutoML and custom-trained), frameworks (TensorFlow, scikit-learn, XGBoost), BigQuery ML models, and modalities (images, text, tabular, video).
upvoted 1 times
...
...
b1a8fae
9 months, 3 weeks ago
Selected Answer: A
Definitely not a deep neural network (B). Of the remaining three, A is the simplest approach.
upvoted 1 times
...
pikachu007
9 months, 3 weeks ago
Selected Answer: D
A and B are out because you already have a model, and C does not provide an explanation for each prediction. Therefore D meets all the criteria.
upvoted 2 times
BlehMaks
9 months, 1 week ago
Why doesn't C provide an explanation for each prediction? As far as I can tell, both C and D provide an explanation for each prediction; the difference is only in the amount of effort required to configure the explanations.
upvoted 1 times
...
...
Community vote distribution: A (35%), C (25%), B (20%), Other