Exam Professional Machine Learning Engineer topic 1 question 304 discussion

Actual exam question from Google's Professional Machine Learning Engineer
Question #: 304
Topic #: 1
[All Professional Machine Learning Engineer Questions]

You are an AI engineer who works for a popular video streaming platform. You built a classification model using PyTorch to predict customer churn. Each week, the customer retention team plans to contact customers who have been identified as at risk of churning with personalized offers. You want to deploy the model while minimizing maintenance effort. What should you do?

  • A. Use Vertex AI’s prebuilt containers for prediction. Deploy the container on Cloud Run to generate online predictions.
  • B. Use Vertex AI’s prebuilt containers for prediction. Deploy the model on Google Kubernetes Engine (GKE), and configure the model for batch prediction.
  • C. Deploy the model to a Vertex AI endpoint, and configure the model for batch prediction. Schedule the batch prediction to run weekly.
  • D. Deploy the model to a Vertex AI endpoint, and configure the model for online prediction. Schedule a job to query this endpoint weekly.
Suggested Answer: C

Comments

yokoyan
1 month, 4 weeks ago
Selected Answer: C
(Gemini Explanation)
  • Vertex AI Batch Prediction: this service is specifically designed for batch inference, making it ideal for processing large datasets and generating predictions offline.
  • Scheduled jobs: Vertex AI allows you to schedule batch prediction jobs, automating the weekly process and eliminating the need for manual intervention.
  • Minimized maintenance: Vertex AI handles the underlying infrastructure, reducing the maintenance burden compared to managing a Kubernetes cluster or manually querying an online endpoint.
  • Cost efficiency: batch prediction is generally more cost-effective for large-scale offline processing than repeatedly querying an online endpoint.
upvoted 1 time
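The scheduled batch prediction described in the comment above can be sketched as a request body for Vertex AI's `batchPredictionJobs.create` REST method, submitted on a weekly Cloud Scheduler cron. This is a minimal sketch: the project, region, model, and bucket names are placeholders, and field values (machine type, instance format) would depend on the actual PyTorch model.

```python
import json

# Hypothetical resource names -- substitute your own project, model, and bucket.
PROJECT = "my-project"
REGION = "us-central1"
MODEL = f"projects/{PROJECT}/locations/{REGION}/models/churn-model"

# Request body for the Vertex AI REST method:
# POST https://{REGION}-aiplatform.googleapis.com/v1/projects/{PROJECT}/locations/{REGION}/batchPredictionJobs
batch_prediction_job = {
    "displayName": "weekly-churn-batch",
    "model": MODEL,
    "inputConfig": {
        # Customer features exported weekly as JSON Lines to Cloud Storage.
        "instancesFormat": "jsonl",
        "gcsSource": {"uris": ["gs://my-bucket/customers/latest.jsonl"]},
    },
    "outputConfig": {
        # Churn predictions land here for the retention team to pick up.
        "predictionsFormat": "jsonl",
        "gcsDestination": {"outputUriPrefix": "gs://my-bucket/churn-predictions/"},
    },
    "dedicatedResources": {
        "machineSpec": {"machineType": "n1-standard-4"},
        "startingReplicaCount": 1,
    },
}

# A Cloud Scheduler cron that fires every Monday at 09:00 to submit the job,
# so no infrastructure runs between weekly batches.
weekly_cron = "0 9 * * 1"

print(json.dumps(batch_prediction_job, indent=2))
```

In practice the same job can be created from the `google-cloud-aiplatform` Python SDK with `Model.batch_predict(...)`; the point of answer C is that the job is fully managed, runs only once a week, and leaves no always-on endpoint or GKE cluster to maintain.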
Community vote distribution: A (35%), C (25%), B (20%), Other