
Exam AWS Certified Big Data - Specialty topic 2 question 2 discussion

Exam question from Amazon's AWS Certified Big Data - Specialty
Question #: 2
Topic #: 2

How should an Administrator BEST architect a large multi-layer Long Short-Term Memory (LSTM) recurrent neural network (RNN) running with MXNet on
Amazon EC2? (Choose two.)

  • A. Use data parallelism to partition the workload over multiple devices and balance the workload within the GPUs.
  • B. Use compute-optimized EC2 instances with an attached elastic GPU.
  • C. Use general purpose GPU computing instances such as G3 and P3.
  • D. Use processing parallelism to partition the workload over multiple storage devices and balance the workload within the GPUs.
Suggested Answer: AC

Comments

san2020
3 years, 7 months ago
my selection AC
upvoted 3 times
mattyb123
3 years, 8 months ago
Anyone got any further thoughts on this one?
upvoted 1 times
mattyb123
3 years, 8 months ago
The question seems to be missing an answer option that states model parallelism. The correct answer would be model parallelism together with the G3 and P3 instance types.
upvoted 8 times
mattyb123
3 years, 8 months ago
The answer is correct. https://aws.amazon.com/blogs/machine-learning/parallelizing-across-multiple-cpu-gpus-to-speed-up-deep-learning-inference-at-the-edge/
upvoted 1 times
mattyb123
3 years, 8 months ago
https://mxnet.incubator.apache.org/versions/master/faq/distributed_training.html
upvoted 1 times
mattyb123
3 years, 8 months ago
Data parallelism vs. model parallelism: by default, MXNet uses data parallelism to partition the workload over multiple devices. Assume there are n devices; each one receives a copy of the complete model and trains it on 1/n of the data. The results, such as gradients and updated model parameters, are communicated across these devices. MXNet also supports model parallelism, in which each device holds only part of the model. This is useful when the model is too large to fit on a single device. The options don't list model parallelism, which would be the correct choice for very large models; maybe this is a typo?
upvoted 2 times
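The default MXNet behavior described above (a full copy of the model on each device, each device training on 1/n of the data, with gradients averaged across devices) can be sketched in plain Python. This is only an illustrative NumPy simulation of the data-parallel pattern, not MXNet's actual implementation; the linear model and all function names here are made up for the example.

```python
import numpy as np

def grad_linear(w, X, y):
    """Gradient of mean-squared error for a toy linear model y ~ X @ w."""
    return X.T @ (X @ w - y) / len(y)

def data_parallel_step(w, X, y, n_devices, lr=0.1):
    """One data-parallel SGD step, simulating n_devices GPUs (illustrative)."""
    # Partition the batch over the "devices" (answer A: data parallelism).
    X_shards = np.array_split(X, n_devices)
    y_shards = np.array_split(y, n_devices)
    # Each device holds a full copy of the weights and computes a gradient
    # on its 1/n shard of the data.
    grads = [grad_linear(w, Xs, ys) for Xs, ys in zip(X_shards, y_shards)]
    # All-reduce: average the gradients across devices, then every copy
    # applies the same update, keeping the replicas in sync.
    g = np.mean(grads, axis=0)
    return w - lr * g
```

With equal-sized shards, the averaged gradient equals the full-batch gradient, so the data-parallel step produces the same update as a single device while each device processes only a fraction of the data.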
freedomeox
3 years, 7 months ago
But whether data or model parallelism is better is still a hot debate in ML. If both options appeared, I really wouldn't know which one to pick...
upvoted 1 times
mattyb123
3 years, 8 months ago
https://aws.amazon.com/blogs/machine-learning/reducing-deep-learning-inference-cost-with-mxnet-and-amazon-elastic-inference/ mentions increased performance from attaching Elastic Inference accelerators to compute-optimized EC2 instances. However, the answer options don't refer to Amazon Elastic Inference.
upvoted 1 times
jlpl
3 years, 8 months ago
@mattyb123: are you sitting the exam any time soon?
upvoted 1 times
mattyb123
3 years, 8 months ago
Yes, sitting it again very soon. Hence I want some feedback to talk through the answers, as this exam is hard compared to the CSA one.
upvoted 2 times
Community vote distribution: A (35%), C (25%), B (20%), Other