Exam AWS Certified Machine Learning - Specialty topic 1 question 101 discussion

A technology startup is using complex deep neural networks and GPU compute to recommend the company's products to its existing customers based upon each customer's habits and interactions. The solution currently pulls each dataset from an Amazon S3 bucket before loading the data into a TensorFlow model pulled from the company's Git repository that runs locally. This job then runs for several hours while continually outputting its progress to the same S3 bucket. The job can be paused, restarted, and continued at any time in the event of a failure, and is run from a central queue.
Senior managers are concerned about the complexity of the solution's resource management and the costs involved in repeating the process regularly. They ask for the workload to be automated so it runs once a week, starting Monday and completing by the close of business Friday.
Which architecture should be used to scale the solution at the lowest cost?

  • A. Implement the solution using AWS Deep Learning Containers and run the container as a job using AWS Batch on a GPU-compatible Spot Instance
  • B. Implement the solution using a low-cost GPU-compatible Amazon EC2 instance and use the AWS Instance Scheduler to schedule the task
  • C. Implement the solution using AWS Deep Learning Containers, run the workload using AWS Fargate running on Spot Instances, and then schedule the task using the built-in task scheduler
  • D. Implement the solution using Amazon ECS running on Spot Instances and schedule the task using the ECS service scheduler
Suggested Answer: A
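For illustration, a minimal boto3 sketch of how option A could be wired up: a managed AWS Batch compute environment backed by GPU-capable Spot Instances. Every name, ARN, subnet, and instance type below is a placeholder assumption, not something taken from the question.

import boto3

batch = boto3.client("batch", region_name="us-east-1")  # assumed region

# Managed compute environment that provisions GPU-capable Spot Instances on demand.
batch.create_compute_environment(
    computeEnvironmentName="dl-training-spot-gpu",       # assumed name
    type="MANAGED",
    state="ENABLED",
    computeResources={
        "type": "SPOT",
        "allocationStrategy": "SPOT_CAPACITY_OPTIMIZED",
        "minvCpus": 0,                                    # scale to zero between weekly runs
        "maxvCpus": 64,
        "instanceTypes": ["g4dn.xlarge", "p3.2xlarge"],   # example GPU instance types
        "subnets": ["subnet-0123456789abcdef0"],          # placeholder
        "securityGroupIds": ["sg-0123456789abcdef0"],     # placeholder
        "instanceRole": "arn:aws:iam::123456789012:instance-profile/ecsInstanceRole",
    },
    serviceRole="arn:aws:iam::123456789012:role/AWSBatchServiceRole",
)

With minvCpus set to 0, the environment scales down to nothing between runs, so the weekly job only pays for Spot capacity while it is actually executing.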

Comments

jdstone
Highly Voted 3 years, 8 months ago
Answer is A https://aws.amazon.com/blogs/compute/gpu-workloads-on-aws-batch/
upvoted 27 times
Juka3lj
3 years, 8 months ago
Makes most sense
upvoted 2 times
...
...
astonm13
Highly Voted 3 years, 8 months ago
I would go for D. As far as I know Fargate does not support GPU computing.
upvoted 6 times
Bhadu
1 year, 11 months ago
It does support GPU https://docs.aws.amazon.com/batch/latest/userguide/fargate.html
upvoted 1 times
fa0d8b7
1 year, 6 months ago
This is wrong information; Fargate does not support GPUs.
upvoted 1 times
...
...
teka112233
1 year, 9 months ago
The problem is that Fargate is serverless, which means you can't control its compute capabilities.
upvoted 1 times
...
...
MultiCloudIronMan
Most Recent 8 months, 4 weeks ago
Selected Answer: A
To scale the solution at the lowest cost, the best architecture is Option A: implement the solution using AWS Deep Learning Containers and run the container as a job using AWS Batch on a GPU-compatible Spot Instance. This approach leverages AWS Batch to manage the job scheduling and execution, while using Spot Instances to significantly reduce costs.
upvoted 1 times
...
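The comment above relies on AWS Batch to manage scheduling and execution. A minimal sketch of the job queue and a submission against it, reusing the placeholder compute environment and job definition names from the earlier sketch (all assumed, none from the question):

import boto3

batch = boto3.client("batch", region_name="us-east-1")  # assumed region

# Queue that feeds jobs into the Spot GPU compute environment.
batch.create_job_queue(
    jobQueueName="dl-training-queue",                     # assumed name
    state="ENABLED",
    priority=1,
    computeEnvironmentOrder=[
        {"order": 1, "computeEnvironment": "dl-training-spot-gpu"},
    ],
)

# Submit the weekly training run; AWS Batch places it when Spot capacity is available.
batch.submit_job(
    jobName="weekly-recommendation-training",
    jobQueue="dl-training-queue",
    jobDefinition="dl-training-jobdef",                   # registered separately
)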
chewasa
1 year, 3 months ago
Selected Answer: A
Fargate doesn't support GPUs. https://github.com/aws/containers-roadmap/issues/88
upvoted 1 times
...
endeesa
1 year, 6 months ago
Selected Answer: A
AWS Batch will easily satisfy the requirements.
upvoted 1 times
...
windy9
1 year, 8 months ago
Fargate doesn't support GPUs, so go with AWS Batch and DLCs (Deep Learning Containers).
upvoted 1 times
...
loict
1 year, 9 months ago
Selected Answer: C
A. NO - Fargate provides batch functionalities already fully integrated with ECS
B. NO - too low level
C. YES - AWS Deep Learning Containers are optimized; AWS Fargate is serverless (so less ops complexity); Spot is best for cost
D. NO - the ECS service scheduler is not serverless
upvoted 1 times
khchan123
1 year, 7 months ago
Answer is A. C is not correct. GPU resources aren't supported for jobs that run on Fargate resources.
upvoted 1 times
...
...
Shenannigan
1 year, 9 months ago
A and C are both great answers, but when it comes to cost I believe A is the more cost-effective solution, so A is my answer.
upvoted 1 times
...
Mickey321
1 year, 9 months ago
Selected Answer: A
Automate the workload by scheduling the job to run once a week using AWS Batch’s built-in scheduler or a cron expression. Optimize the performance by using AWS Deep Learning Containers that are tailored for GPU acceleration and deep learning frameworks. Reduce the cost by using Spot Instances that offer significant savings compared to On-Demand Instances. Handle failures by using AWS Batch’s retry strategies that can automatically restart the job on a different instance if the Spot Instance is interrupted.
upvoted 1 times
...
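One way to realize the weekly cadence mentioned above is an Amazon EventBridge cron rule that targets the Batch job queue (the Batch scheduler itself handles queue placement, not time-based triggers). A sketch with placeholder ARNs and an assumed Monday 06:00 UTC start:

import boto3

events = boto3.client("events", region_name="us-east-1")  # assumed region

# Fire every Monday at 06:00 UTC (cron fields: minutes hours day-of-month month day-of-week year).
events.put_rule(
    Name="weekly-dl-training",
    ScheduleExpression="cron(0 6 ? * MON *)",
    State="ENABLED",
)

# Target the AWS Batch job queue; EventBridge submits the job on our behalf.
events.put_targets(
    Rule="weekly-dl-training",
    Targets=[
        {
            "Id": "batch-weekly-training",
            "Arn": "arn:aws:batch:us-east-1:123456789012:job-queue/dl-training-queue",  # placeholder
            "RoleArn": "arn:aws:iam::123456789012:role/EventBridgeBatchSubmitRole",     # placeholder
            "BatchParameters": {
                "JobDefinition": "dl-training-jobdef",
                "JobName": "weekly-recommendation-training",
            },
        },
    ],
)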
injoho
2 years, 2 months ago
Answer is A (but the question is tricky). A and D are both correct solutions, but pay attention to the words: "Senior managers are concerned about the complexity of the solution's resource management and the costs." Cost is the simple part: use Spot Instances. For resource management, use a higher-abstraction service. AWS Batch is a management/abstraction layer on top of ECS and EC2 (and some other AWS resources). It does some things for you, like cost optimization, that can be difficult to do yourself. Think of it like Elastic Beanstalk for batch operations. It provides a management layer on top of lower-level AWS resources, but if you are comfortable managing those lower-level resources yourself and want more control over them, it is certainly an option to use them directly.
upvoted 4 times
...
Mllb
2 years, 2 months ago
Selected Answer: C
Why not C? AWS Fargate is designed to manage resources for you as needed.
upvoted 1 times
...
austinoy
2 years, 2 months ago
A looks good to me https://aws.amazon.com/blogs/compute/deep-learning-on-aws-batch/
upvoted 1 times
...
drcok87
2 years, 4 months ago
B: For those who think it's B because of Spot Instance interruption, read the question phrase "The job can be paused, restarted, and continued at any time in the event of a failure, and is run from a central queue." Between A and C: at the time of this question I doubt Fargate supported GPUs, and even if it did, I would choose AWS Batch for jobs and Fargate for services/apps that need to run all the time. A is the answer.
upvoted 2 times
...
AjoseO
2 years, 4 months ago
Selected Answer: A
Option A is the most cost-effective architecture, as it uses GPU-compatible Spot Instances, which are the lowest-cost compute option for GPU instances in the AWS Cloud. AWS Batch is a fully managed service that schedules, runs, and manages the processing and analysis of batch workloads. The use of AWS Deep Learning Containers enables the technology startup to use pre-built, optimized Docker containers for deep learning, which reduces the complexity of the solution's resource management and eliminates the need to repeat that setup for every run.
upvoted 2 times
...
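A minimal job-definition sketch tying the comment above together: a Deep Learning Containers image (the URI below is a placeholder, not a real image), a GPU resource requirement, and a retry strategy so a reclaimed Spot host simply re-queues the checkpointed job. Attempt counts, resource sizes, and the entry point are assumptions.

import boto3

batch = boto3.client("batch", region_name="us-east-1")  # assumed region

batch.register_job_definition(
    jobDefinitionName="dl-training-jobdef",               # assumed name
    type="container",
    containerProperties={
        # Placeholder for an AWS Deep Learning Containers TensorFlow GPU training image URI.
        "image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/tf-training-dlc:<gpu-tag>",
        "command": ["python", "train.py"],                # assumed entry point
        "resourceRequirements": [
            {"type": "GPU", "value": "1"},
            {"type": "VCPU", "value": "8"},
            {"type": "MEMORY", "value": "32768"},
        ],
        "jobRoleArn": "arn:aws:iam::123456789012:role/TrainingJobRole",  # placeholder; grants S3 access
    },
    retryStrategy={
        "attempts": 5,
        "evaluateOnExit": [
            {"onStatusReason": "Host EC2*", "action": "RETRY"},  # host terminated, e.g. Spot reclaimed
            {"onReason": "*", "action": "EXIT"},                 # any other failure: stop retrying
        ],
    },
)

Because the question says the job already checkpoints its progress to S3 and can resume, a retried attempt can pick up roughly where the interrupted one stopped.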
Aninina
2 years, 6 months ago
Selected Answer: A
Answer is A. Option B is similar to A, but it uses a low-cost GPU-compatible EC2 instance rather than a container, which may not be as flexible or scalable as using containers.
upvoted 4 times
...
matteocal
2 years, 10 months ago
Selected Answer: A
Answer is A https://aws.amazon.com/blogs/compute/gpu-workloads-on-aws-batch/
upvoted 2 times
...
ovokpus
2 years, 11 months ago
Selected Answer: A
https://aws.amazon.com/blogs/compute/gpu-workloads-on-aws-batch/ There you have it
upvoted 1 times
...
Community vote distribution: A (35%), C (25%), B (20%), Other