Exam Professional Machine Learning Engineer topic 1 question 319 discussion

Actual exam question from Google's Professional Machine Learning Engineer
Question #: 319
Topic #: 1

You work as an ML researcher at an investment bank, and you are experimenting with the Gemma large language model (LLM). You plan to deploy the model for an internal use case. You need to have full control of the model's underlying infrastructure and minimize the model's inference time. Which serving configuration should you use for this task?

  • A. Deploy the model on a Vertex AI endpoint manually by creating a custom inference container.
  • B. Deploy the model on a Google Kubernetes Engine (GKE) cluster by using the deployment options in Model Garden.
  • C. Deploy the model on a Vertex AI endpoint by using one-click deployment in Model Garden.
  • D. Deploy the model on a Google Kubernetes Engine (GKE) cluster manually by creating a custom YAML manifest.
Suggested Answer: B 🗳️

Comments

tmpuserx (1 week, 2 days ago) — Selected Answer: D
Model Garden deployment options typically use default configurations that are not optimized for low latency.
upvoted 1 times
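For context on the custom-manifest approach debated in option D: deploying an LLM on GKE manually means writing a Kubernetes Deployment that pins the serving container to GPU nodes. The sketch below is illustrative only; the serving image, model ID, accelerator type, and resource values are assumptions, not values confirmed by the question.

```yaml
# Minimal sketch of a GKE Deployment for serving Gemma (option D).
# Image, model name, and GPU type are hypothetical placeholders.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: gemma-server
spec:
  replicas: 1
  selector:
    matchLabels:
      app: gemma-server
  template:
    metadata:
      labels:
        app: gemma-server
    spec:
      containers:
        - name: inference
          image: vllm/vllm-openai:latest   # assumed serving image
          args: ["--model", "google/gemma-7b"]  # assumed model ID
          ports:
            - containerPort: 8000
          resources:
            limits:
              nvidia.com/gpu: "1"          # request one GPU per pod
      nodeSelector:
        cloud.google.com/gke-accelerator: nvidia-l4  # assumed GPU type
```

Applying a manifest like this (`kubectl apply -f gemma-server.yaml`) gives full control over the infrastructure, at the cost of hand-tuning details that the Model Garden GKE deployment option (answer B) would pre-configure.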
Community vote distribution: A (35%), C (25%), B (20%), Other