Exam AWS Certified Machine Learning - Specialty topic 1 question 112 discussion

A manufacturer is operating a large number of factories with a complex supply chain relationship where unexpected downtime of a machine can cause production to stop at several factories. A data scientist wants to analyze sensor data from the factories to identify equipment in need of preemptive maintenance and then dispatch a service team to prevent unplanned downtime. The sensor readings from a single machine can include up to 200 data points including temperatures, voltages, vibrations, RPMs, and pressure readings.
To collect this sensor data, the manufacturer deployed Wi-Fi and LANs across the factories. Even though many factory locations do not have reliable or high-speed internet connectivity, the manufacturer would like to maintain near-real-time inference capabilities.
Which deployment architecture for the model will address these business requirements?

  • A. Deploy the model in Amazon SageMaker. Run sensor data through this model to predict which machines need maintenance.
  • B. Deploy the model on AWS IoT Greengrass in each factory. Run sensor data through this model to infer which machines need maintenance.
  • C. Deploy the model to an Amazon SageMaker batch transformation job. Generate inferences in a daily batch report to identify machines that need maintenance.
  • D. Deploy the model in Amazon SageMaker and use an IoT rule to write data to an Amazon DynamoDB table. Consume a DynamoDB stream from the table with an AWS Lambda function to invoke the endpoint.
Suggested Answer: B
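
For context on why B fits, here is a minimal, hypothetical sketch of what local inference on a Greengrass-managed factory gateway could look like, assuming the model was compiled with SageMaker Neo and deployed to the device as a Greengrass component. The model path, the 200-feature input layout, the probability threshold, and the read_sensor_window() helper are illustrative assumptions, not part of the question.

```python
# Sketch of an edge-inference loop for a Neo-compiled model running on a
# Greengrass-managed factory gateway. Paths, tensor shape, threshold, and the
# sensor helper are assumptions for illustration only.
import time
import numpy as np
import dlr  # Neo Deep Learning Runtime, installed on the edge device

MODEL_DIR = "/greengrass/v2/work/predictive-maintenance/model"  # assumed path
model = dlr.DLRModel(MODEL_DIR, dev_type="cpu")

def read_sensor_window():
    """Placeholder: return the latest 200 sensor readings for one machine."""
    return np.random.rand(1, 200).astype("float32")

while True:
    features = read_sensor_window()
    # Inference runs locally on the gateway; no internet connectivity required.
    # We assume the model emits a single failure-probability output.
    failure_probability = float(model.run(features)[0].squeeze())
    if failure_probability > 0.8:  # arbitrary threshold for the sketch
        print("Dispatch maintenance: predicted failure risk", failure_probability)
    time.sleep(1)  # assumed near-real-time polling interval
```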

Comments

[Removed]
Highly Voted 2 years, 7 months ago
I would select B, based on the following AWS examples: https://aws.amazon.com/blogs/iot/industrial-iot-from-condition-based-monitoring-to-predictive-quality-to-digitize-your-factory-with-aws-iot-services/ https://aws.amazon.com/blogs/iot/using-aws-iot-for-predictive-maintenance/
upvoted 26 times
...
SophieSu
Highly Voted 2 years, 7 months ago
B is my answer. For latency-sensitive use cases, and for use cases that require analyzing large amounts of streaming data, it may not be possible to run ML inference in the cloud. Besides, cloud connectivity may not be available all the time. For these use cases you need to deploy the ML model close to the data source: SageMaker Neo + IoT Greengrass. To design and push something to the edge: (1) build the model to do the job, say a TensorFlow model; (2) compile it for the edge device, say an NVIDIA Jetson, using SageMaker Neo; (3) run it at the edge using IoT Greengrass. (A sketch of the compilation call follows below.)
upvoted 17 times
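
To make the Neo step in the comment above concrete, here is a minimal boto3 sketch of starting a compilation job; the S3 paths, IAM role, framework, input shape, and target device are illustrative assumptions only.

```python
# Hypothetical SageMaker Neo compilation job for an edge target (sketch only;
# the S3 locations, IAM role, framework, and target device are assumptions).
import boto3

sm = boto3.client("sagemaker")

sm.create_compilation_job(
    CompilationJobName="factory-predictive-maintenance-neo",
    RoleArn="arn:aws:iam::123456789012:role/ExampleSageMakerNeoRole",  # assumed role
    InputConfig={
        "S3Uri": "s3://example-bucket/model/model.tar.gz",   # trained model artifact
        "DataInputConfig": '{"sensor_input": [1, 200]}',     # 200 sensor features per machine
        "Framework": "TENSORFLOW",
    },
    OutputConfig={
        "S3OutputLocation": "s3://example-bucket/compiled/",
        "TargetDevice": "jetson_xavier",                      # example edge device
    },
    StoppingCondition={"MaxRuntimeInSeconds": 900},
)
# The compiled artifact can then be packaged as a Greengrass component and
# deployed to the factory gateways for local inference.
```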
...
Mickey321
Most Recent 8 months, 3 weeks ago
Selected Answer: B
Greengrass runs the model locally at each factory, without relying on internet connectivity.
upvoted 2 times
...
Peeking
1 year, 5 months ago
Selected Answer: B
The scenario calls for an edge solution because internet reliability is low. IoT Greengrass is the best fit for edge inference.
upvoted 3 times
...
ovokpus
1 year, 10 months ago
Selected Answer: B
This is an edge solution, keeping traffic to in-Region AWS resources to a minimum. For this, start thinking IoT Greengrass and SageMaker Neo and you're halfway there. The answer is B, no doubt.
upvoted 3 times
...
apprehensive_scar
2 years, 3 months ago
B is the answer, obviously
upvoted 1 times
...
[Removed]
2 years, 5 months ago
Selected Answer: B
This solution requires edge capabilities and the ability to run the inference models in near real time. A SageMaker Neo-compiled model is a deployable unit on the edge architecture (IoT Greengrass), which can host the model's inference runtime. (A rough deployment sketch follows below.)
upvoted 4 times
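
As a rough illustration of the deployment step described in the comment above, here is a hedged boto3 sketch of a Greengrass v2 deployment that pushes an inference component to a factory's core devices; the thing group ARN, component name, and version are hypothetical.

```python
# Hypothetical Greengrass v2 deployment of a custom inference component to a
# group of factory gateways. ARNs, names, and versions are assumptions.
import boto3

gg = boto3.client("greengrassv2")

gg.create_deployment(
    targetArn="arn:aws:iot:us-east-1:123456789012:thinggroup/FactoryGateways",  # assumed thing group
    deploymentName="predictive-maintenance-model-v1",
    components={
        # Custom component wrapping the Neo-compiled model and inference script.
        "com.example.PredictiveMaintenance": {"componentVersion": "1.0.0"},
    },
)
```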
...
mahmoudai
2 years, 7 months ago
A: not a complete solution; a lot of detail is missing.
C: a daily batch report is a major defect in this solution, since inference needs to be near real time.
D: writing to DynamoDB and invoking a cloud endpoint makes this solution slower than IoT Greengrass, and it still depends on connectivity.
Answer: B
upvoted 1 times
...
Vita_Rasta84444
2 years, 7 months ago
I would choose B because IoT Greengrass reduces latency by running on the local machines.
upvoted 1 times
...
astonm13
2 years, 7 months ago
I would choose B
upvoted 1 times
...
Community vote distribution: A (35%), C (25%), B (20%), Other