Exam AWS Certified Machine Learning - Specialty topic 1 question 201 discussion

An automotive company is using computer vision in its autonomous cars. The company has trained its models successfully by using transfer learning from a convolutional neural network (CNN). The models are trained with PyTorch through the use of the Amazon SageMaker SDK. The company wants to reduce the time that is required for performing inferences, given the low latency that is required for self-driving.

Which solution should the company use to evaluate and improve the performance of the models?

  • A. Use Amazon CloudWatch algorithm metrics for visibility into the SageMaker training weights, gradients, biases, and activation outputs. Compute the filter ranks based on this information. Apply pruning to remove the low-ranking filters. Set the new weights. Run a new training job with the pruned model.
  • B. Use SageMaker Debugger for visibility into the training weights, gradients, biases, and activation outputs. Adjust the model hyperparameters, and look for lower inference times. Run a new training job.
  • C. Use SageMaker Debugger for visibility into the training weights, gradients, biases, and activation outputs. Compute the filter ranks based on this information. Apply pruning to remove the low-ranking filters. Set the new weights. Run a new training job with the pruned model.
  • D. Use SageMaker Model Monitor for visibility into the ModelLatency metric and OverheadLatency metric of the model after the model is deployed. Adjust the model hyperparameters, and look for lower inference times. Run a new training job.
Suggested Answer: C

Comments

loict
8 months ago
Selected Answer: C
A. NO - Must use SageMaker Debugger for visibility into model insights.
B. NO - Hyperparameters will most likely influence model accuracy, not response time.
C. YES - SageMaker Debugger is the right tool for model insights; a filter (or "kernel") slides over the input in a CNN to identify specific features.
D. NO - SageMaker Model Monitor is for monitoring deployed model performance.
upvoted 3 times
...
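For context on what "visibility into the training weights, gradients, biases, and activation outputs" looks like in practice, here is a minimal, hypothetical sketch of enabling a SageMaker Debugger hook on a PyTorch estimator (the bucket, role, script name, and activation regex are placeholder assumptions, not from the question):

# Hypothetical sketch: collect weights, gradients, biases, and activations
# with SageMaker Debugger during a PyTorch training job.
from sagemaker.pytorch import PyTorch
from sagemaker.debugger import DebuggerHookConfig, CollectionConfig

hook_config = DebuggerHookConfig(
    s3_output_path="s3://my-bucket/debugger-tensors",  # assumed bucket
    collection_configs=[
        CollectionConfig(name="weights"),
        CollectionConfig(name="gradients"),
        CollectionConfig(name="biases"),
        # custom collection for activation outputs; the regex is an assumption
        CollectionConfig(name="activations",
                         parameters={"include_regex": ".*relu_output"}),
    ],
)

estimator = PyTorch(
    entry_point="train.py",  # assumed training script
    role="arn:aws:iam::123456789012:role/SageMakerRole",  # placeholder role
    framework_version="1.13",
    py_version="py39",
    instance_count=1,
    instance_type="ml.p3.2xlarge",
    debugger_hook_config=hook_config,
)
estimator.fit({"training": "s3://my-bucket/train-data"})  # assumed dataset path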
Mickey321
8 months, 4 weeks ago
Selected Answer: C
Pruning is a technique that reduces the complexity of convolutional neural networks (CNNs) by removing unimportant filters or neurons. This can lead to faster inference times and lower memory consumption, which are desirable for self-driving applications. Pruning can be done by ranking the filters based on some criteria, such as the norm of the weights, the activation outputs, or the Taylor expansion of the loss function.
upvoted 1 times
...
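A minimal PyTorch sketch of the filter-ranking idea described above, using the L1-norm of the weights as the ranking criterion (the layer sizes and pruning ratio are arbitrary assumptions; note that torch.nn.utils.prune zeroes out filters rather than physically shrinking the layer):

# Hypothetical sketch: rank Conv2d filters by the L1-norm of their weights
# and zero out the lowest-ranked ones with PyTorch's built-in pruning utility.
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

conv = nn.Conv2d(in_channels=64, out_channels=128, kernel_size=3)  # example layer

# L1-norm of each output filter: tensor of shape (out_channels,)
filter_ranks = conv.weight.detach().abs().sum(dim=(1, 2, 3))
print(filter_ranks.argsort()[:10])  # indices of the 10 lowest-ranked filters

# ln_structured with n=1, dim=0 removes (zeroes) whole output filters by L1-norm,
# the same criterion as the manual ranking above.
prune.ln_structured(conv, name="weight", amount=0.3, n=1, dim=0)
prune.remove(conv, "weight")  # make the pruned weights permanent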
kaike_reis
9 months ago
Selected Answer: C
ChatGPT is an awesome tool, but please ML colleagues: study!
upvoted 3 times
teka112233
8 months ago
You are very right about how awesome ChatGPT is, but since we find its answers here, some colleagues are trying to help by explaining why these could be the right answers, so we don't have to spend time proving it ourselves. The names here have no real-world connection and most of them are fictitious, so when people leave their answers we don't know who they are, but we still get the right answers with the right proof.
upvoted 3 times
...
...
Mickey321
9 months, 2 weeks ago
Selected Answer: C
The company should use solution C. Use SageMaker Debugger for visibility into the training weights, gradients, biases, and activation outputs. Compute the filter ranks based on this information. Apply pruning to remove the low-ranking filters. Set the new weights. Run a new training job with the pruned model.
upvoted 1 times
...
Tony_1406
1 year ago
Selected Answer: C
Same example here: https://aws.amazon.com/blogs/machine-learning/pruning-machine-learning-models-with-amazon-sagemaker-debugger-and-amazon-sagemaker-experiments/
upvoted 2 times
...
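The blog post linked above reads the tensors that Debugger saved during training and ranks the filters from them; a stripped-down, hypothetical version of that step using the smdebug library (the S3 path and tensor-name regex are assumptions) looks roughly like this:

# Hypothetical sketch: read Debugger-saved tensors and compute a simple
# weight-norm rank per convolution filter.
from smdebug.trials import create_trial

trial = create_trial("s3://my-bucket/debugger-tensors/training-job-name")  # assumed path
last_step = trial.steps()[-1]

for name in trial.tensor_names(regex=".*conv.*weight"):  # assumed naming pattern
    weights = trial.tensor(name).value(last_step)  # numpy array (out, in, kH, kW)
    ranks = abs(weights).sum(axis=(1, 2, 3))       # L1-norm per output filter
    print(name, ranks.argsort()[:5])               # lowest-ranked filters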
Gaby999
1 year ago
Selected Answer: B
To reduce the time required for performing inferences in autonomous cars, the automotive company should use SageMaker Debugger for visibility into the training weights, gradients, biases, and activation outputs. They can adjust the model hyperparameters and look for lower inference times. They can also use SageMaker Model Monitor for visibility into the ModelLatency metric and OverheadLatency metric of the model after the model is deployed. However, option C, which suggests computing the filter ranks based on the training outputs and applying pruning to remove the low-ranking filters, is not applicable for transfer learning models since the layers in the pre-trained model are already trained and cannot be changed. Therefore, the correct solution is B.
upvoted 1 times
ccpmad
9 months, 1 week ago
Better not to use ChatGPT without knowing something about AWS; it will trick you.
upvoted 3 times
...
...
Valcilio
1 year, 2 months ago
Selected Answer: C
Even if a better machine could help, the problem here is about the model itself, not about the infrastructure in general or a specific machine.
upvoted 2 times
...
AjoseO
1 year, 2 months ago
Selected Answer: C
Using SageMaker Debugger, the company can monitor the training process and evaluate the performance of the model by computing filter ranks based on information like weights, gradients, biases, and activation outputs. After identifying the low-ranking filters, the company can apply pruning to remove them and set new weights. By doing so, the company can reduce the model size and improve the inference time. Finally, a new training job with the pruned model can be run to verify the performance improvements.
Not D, because Model Monitor is a tool for monitoring the performance of deployed models, and it does not provide any direct feedback or insights into the model training process or ways to improve model inference time. Therefore, while Model Monitor can be useful for monitoring the performance of deployed models, it is not the best choice for evaluating and improving the performance of the models during the training phase, which is what the question is asking for.
upvoted 4 times
...
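The "set the new weights" step of the workflow described above amounts to copying the surviving filters into a smaller layer before launching the retraining job; a hypothetical PyTorch sketch (the layer sizes and number of kept filters are arbitrary assumptions):

# Hypothetical sketch: keep the 96 highest-ranked of 128 filters and copy
# their weights into a smaller Conv2d before retraining.
import torch
import torch.nn as nn

old_conv = nn.Conv2d(64, 128, kernel_size=3)
keep = torch.topk(old_conv.weight.detach().abs().sum(dim=(1, 2, 3)), k=96).indices

new_conv = nn.Conv2d(64, len(keep), kernel_size=3)
with torch.no_grad():
    new_conv.weight.copy_(old_conv.weight[keep])
    new_conv.bias.copy_(old_conv.bias[keep])
# Note: the next layer's in_channels must shrink to match, which is why a full
# retraining (fine-tuning) job is run with the pruned model afterwards.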
wolfsong
1 year, 2 months ago
It's between C and D, but I think it's C. For C: https://aws.amazon.com/blogs/machine-learning/pruning-machine-learning-models-with-amazon-sagemaker-debugger-and-amazon-sagemaker-experiments/ - everything is there. For D: https://aws.amazon.com/premiumsupport/knowledge-center/sagemaker-endpoint-latency/ - that page says to use CloudWatch to view ModelLatency and OverheadLatency, not Model Monitor. I think Model Monitor is just for model performance, i.e. drift, bias, accuracy, etc.
upvoted 1 times
...
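Supporting the point above: the ModelLatency and OverheadLatency metrics of a deployed endpoint are published to CloudWatch under the AWS/SageMaker namespace, so you would pull them roughly like this (the endpoint name is a placeholder):

# Hypothetical sketch: fetch endpoint latency metrics from CloudWatch with boto3.
import boto3
from datetime import datetime, timedelta

cloudwatch = boto3.client("cloudwatch")
response = cloudwatch.get_metric_statistics(
    Namespace="AWS/SageMaker",
    MetricName="ModelLatency",  # OverheadLatency works the same way
    Dimensions=[
        {"Name": "EndpointName", "Value": "my-endpoint"},  # placeholder endpoint
        {"Name": "VariantName", "Value": "AllTraffic"},
    ],
    StartTime=datetime.utcnow() - timedelta(hours=1),
    EndTime=datetime.utcnow(),
    Period=300,
    Statistics=["Average", "Maximum"],
)
print(response["Datapoints"])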
expertguru
1 year, 4 months ago
I guess the answer is D per the link below, although they should have said SageMaker Model Monitor using CloudWatch: https://docs.aws.amazon.com/sagemaker/latest/dg/model-monitor.html
upvoted 1 times
...
jim20541
1 year, 4 months ago
C, https://aws.amazon.com/blogs/machine-learning/pruning-machine-learning-models-with-amazon-sagemaker-debugger-and-amazon-sagemaker-experiments/
upvoted 4 times
...
Alphacentavra
1 year, 4 months ago
I would say D, as a more generic approach than C. The problem can be caused by more than just the filters.
upvoted 1 times
...
BoroJohn
1 year, 4 months ago
The answer is C, as the question is asking to evaluate and improve the performance of the models: https://docs.aws.amazon.com/sagemaker/latest/dg/debugger-visualization.html
upvoted 2 times
...
dunhill
1 year, 5 months ago
I think the answer is D.
upvoted 2 times
...
Community vote distribution: A (35%), C (25%), B (20%), Other
