Exam AWS Certified Machine Learning - Specialty topic 1 question 110 discussion

A company that promotes healthy sleep patterns by providing cloud-connected devices currently hosts a sleep tracking application on AWS. The application collects device usage information from device users. The company's Data Science team is building a machine learning model to predict if and when a user will stop utilizing the company's devices. Predictions from this model are used by a downstream application that determines the best approach for contacting users.
The Data Science team is building multiple versions of the machine learning model to evaluate each version against the company's business goals. To measure long-term effectiveness, the team wants to run multiple versions of the model in parallel for long periods of time, with the ability to control the portion of inferences served by the models.
Which solution satisfies these requirements with MINIMAL effort?

  • A. Build and host multiple models in Amazon SageMaker. Create multiple Amazon SageMaker endpoints, one for each model. Programmatically control invoking different models for inference at the application layer.
  • B. Build and host multiple models in Amazon SageMaker. Create an Amazon SageMaker endpoint configuration with multiple production variants. Programmatically control the portion of the inferences served by the multiple models by updating the endpoint configuration.
  • C. Build and host multiple models in Amazon SageMaker Neo to take into account different types of medical devices. Programmatically control which model is invoked for inference based on the medical device type.
  • D. Build and host multiple models in Amazon SageMaker. Create a single endpoint that accesses multiple models. Use Amazon SageMaker batch transform to control invoking the different models through the single endpoint.
Suggested Answer: B

Comments

SophieSu
Highly Voted 3 years ago
B is the correct answer. A/B testing with Amazon SageMaker is a topic you need to know for the exam. In A/B testing, you test different variants of your models and compare how each variant performs. Amazon SageMaker enables you to test multiple models or model versions behind the `same endpoint` using `production variants`. Each production variant identifies a machine learning (ML) model and the resources deployed for hosting the model. To test multiple models by `distributing traffic` between them, specify the `percentage of the traffic` that gets routed to each model by setting the `weight` for each `production variant` in the endpoint configuration.
upvoted 43 times
...
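A minimal sketch of the production-variant setup described in the comment above, using boto3. The model names, endpoint names, and instance types are illustrative assumptions, not values from the question:

```python
import boto3

sm = boto3.client("sagemaker")

# One endpoint configuration with two production variants behind the same endpoint.
# InitialVariantWeight is relative: 9.0 vs. 1.0 routes roughly 90% / 10% of inferences.
sm.create_endpoint_config(
    EndpointConfigName="churn-ab-test-config",
    ProductionVariants=[
        {
            "VariantName": "model-v1",
            "ModelName": "churn-model-v1",   # SageMaker models created beforehand
            "InstanceType": "ml.m5.large",
            "InitialInstanceCount": 1,
            "InitialVariantWeight": 9.0,
        },
        {
            "VariantName": "model-v2",
            "ModelName": "churn-model-v2",
            "InstanceType": "ml.m5.large",
            "InitialInstanceCount": 1,
            "InitialVariantWeight": 1.0,
        },
    ],
)

# A single endpoint serves both variants, so the downstream application
# keeps calling one endpoint regardless of how traffic is split.
sm.create_endpoint(
    EndpointName="churn-endpoint",
    EndpointConfigName="churn-ab-test-config",
)
```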
[Removed]
Highly Voted 3 years ago
I would answer B, it seems similar to this AWS example: https://docs.aws.amazon.com/sagemaker/latest/dg/model-ab-testing.html#model-testing-target-variant
upvoted 10 times
...
Mickey321
Most Recent 1 year, 2 months ago
Selected Answer: B
Option B
upvoted 1 times
...
AjoseO
1 year, 9 months ago
Selected Answer: B
This solution allows the Data Science team to build and host multiple models in Amazon SageMaker, a fully managed service for training, deploying, and managing machine learning models. The team can then create an endpoint configuration with multiple production variants, each identifying one model version and the resources used to host it. By programmatically updating the endpoint configuration, the team can control the portion of inferences served by each model. This lets them evaluate the models against their business goals and measure long-term effectiveness without making changes at the application layer.
upvoted 3 times
...
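Building on the comment above, one way to shift the traffic split on a live endpoint (as an alternative to creating a new endpoint configuration) is a single API call. This is only a sketch; the endpoint and variant names are the illustrative ones assumed earlier:

```python
import boto3

sm = boto3.client("sagemaker")

# Move to a 50/50 split between the two variants without redeploying models
# or touching the application layer; only the live endpoint's weights change.
sm.update_endpoint_weights_and_capacities(
    EndpointName="churn-endpoint",
    DesiredWeightsAndCapacities=[
        {"VariantName": "model-v1", "DesiredWeight": 5.0},
        {"VariantName": "model-v2", "DesiredWeight": 5.0},
    ],
)
```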
Morsa
2 years, 3 months ago
Selected Answer: D
Answer D: the question says the team intends to run multiple versions in parallel for extended periods of time, so batch transform.
upvoted 1 times
cpal012
1 year, 7 months ago
How can you create a single endpoint for batch transforms? This answer is nonsensical.
upvoted 1 times
VR10
8 months, 3 weeks ago
It is possible to create a single endpoint for AWS Batch transforms. The key steps: create an interface endpoint for AWS Batch in your VPC using the AWS CLI or console; the endpoint service name has the form com.amazonaws.<region>.batch. When creating the endpoint, assign an IAM role with the permissions needed to call the Batch API. You can then submit batch transform jobs to AWS Batch that reference resources in both public and private subnets of the VPC, and the endpoint provides private connectivity to Batch. The single endpoint allows chaining multiple transforms together in a pipeline without needing internet access, and new transforms can be added without redeploying the endpoint. AWS Batch automatically provisions the required compute environments, such as EC2 instances or containers, to run the transforms and scales as needed based on job requirements.
upvoted 1 times
...
...
...
[Removed]
2 years, 4 months ago
it says,"host a sleep monitoring application", it is the host which means online, not batch, b is correct
upvoted 1 times
...
John_Pongthorn
2 years, 8 months ago
Selected Answer: B
The key requirement is the ability to control the portion of inferences served by the models, and the question asks which method achieves this with the LEAST amount of effort.
upvoted 4 times
...
apprehensive_scar
2 years, 9 months ago
B. Easy
upvoted 2 times
...
anttan
2 years, 11 months ago
I think the answer is D. Below is from the SageMaker docs (https://docs.aws.amazon.com/sagemaker/latest/dg/batch-transform.html), "Use Batch Transform to Test Production Variants": "To test different models or various hyperparameter settings, create a separate transform job for each new model variant and use a validation dataset. For each transform job, specify a unique model name and location in Amazon S3 for the output file. To analyze the results, use Inference Pipeline Logs and Metrics."
upvoted 4 times
[Removed]
2 years, 11 months ago
The question talks about the LEAST amount of effort. In this case, there will be as many transform jobs required to be built as there are variants. That may not be the least amount of effort.
upvoted 2 times
...
...
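For comparison, the batch-transform approach quoted above would mean one separate transform job per model variant against a shared validation set, roughly as sketched here (bucket names, model names, and instance types are illustrative assumptions). The per-variant jobs are the extra effort the reply points out:

```python
import boto3

sm = boto3.client("sagemaker")

# One transform job per model variant, each writing results to its own S3 prefix.
for model_name in ["churn-model-v1", "churn-model-v2"]:
    sm.create_transform_job(
        TransformJobName=f"{model_name}-validation",
        ModelName=model_name,
        TransformInput={
            "DataSource": {
                "S3DataSource": {
                    "S3DataType": "S3Prefix",
                    "S3Uri": "s3://example-bucket/validation/",
                }
            },
            "ContentType": "text/csv",
        },
        TransformOutput={"S3OutputPath": f"s3://example-bucket/output/{model_name}/"},
        TransformResources={"InstanceType": "ml.m5.large", "InstanceCount": 1},
    )
```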