Exam AI-100 topic 1 question 12 discussion

Actual exam question from Microsoft's AI-100
Question #: 12
Topic #: 1

You are designing an AI solution in Azure that will perform image classification.
You need to identify which processing platform will provide you with the ability to update the logic over time. The solution must have the lowest latency for inferencing without having to batch.
Which compute target should you identify?

  • A. graphics processing units (GPUs)
  • B. field-programmable gate arrays (FPGAs)
  • C. central processing units (CPUs)
  • D. application-specific integrated circuits (ASICs)
Suggested Answer: B
FPGAs, such as those available on Azure, provide performance close to that of ASICs, yet they are flexible and can be reconfigured over time to implement new logic.
Incorrect Answers:
D: ASICs are custom circuits, such as Google's Tensor Processing Units (TPUs), and provide the highest efficiency, but they can't be reconfigured as your needs change.
References:
https://docs.microsoft.com/en-us/azure/machine-learning/service/concept-accelerate-with-fpgas
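
As background for why "without having to batch" matters here: models deployed to Azure real-time compute targets (including FPGAs) are exposed as web services that score one request at a time. The snippet below is a minimal sketch of that request pattern, assuming a hypothetical scoring URI and key (the real values come from your own deployed web service); it does not use the FPGA-specific Azure ML client API.

```python
import requests

# Hypothetical values for illustration -- use the scoring URI and key
# reported for your own deployed Azure ML real-time web service.
SCORING_URI = "https://example.azurecontainer.io/score"
API_KEY = "<your-service-key>"

def classify_image(image_path: str) -> dict:
    """Score ONE image per request: real-time inference, no batching."""
    with open(image_path, "rb") as f:
        image_bytes = f.read()

    headers = {
        "Content-Type": "application/octet-stream",
        "Authorization": f"Bearer {API_KEY}",
    }
    # A single synchronous request; latency is the network round trip
    # plus the model's inference time, with no wait for a batch to fill.
    response = requests.post(SCORING_URI, data=image_bytes,
                             headers=headers, timeout=10)
    response.raise_for_status()
    return response.json()

print(classify_image("cat.jpg"))
```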

Comments

samok
Highly Voted 4 years, 11 months ago
FPGAs are correct because of the following line from the docs Piraat linked: "FPGAs make it possible to achieve low latency for real-time inference (or model scoring) requests. Asynchronous requests (batching) aren't needed. Batching can cause latency, because more data needs to be processed. Implementations of neural processing units don't require batching; therefore the latency can be many times lower, compared to CPU and GPU processors."
upvoted 6 times
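
To put rough numbers behind samok's point, here is a purely illustrative back-of-the-envelope sketch (every timing below is an assumption, not an Azure measurement): with batching, the first request in a batch pays a queueing delay while the batch fills, on top of the batch's inference time.

```python
# Illustrative numbers only -- all of these are assumptions.
PER_IMAGE_MS = 2.0      # inference time for one image scored alone
BATCH_INFER_MS = 20.0   # inference time for a full batch of 32 images
ARRIVAL_GAP_MS = 10.0   # average time between incoming requests
BATCH_SIZE = 32

# Real-time path: each request is scored as soon as it arrives.
realtime_ms = PER_IMAGE_MS

# Batched path: the first request waits for 31 more requests to
# arrive, then for the whole batch to be scored.
batched_ms = (BATCH_SIZE - 1) * ARRIVAL_GAP_MS + BATCH_INFER_MS

print(f"real-time latency:      {realtime_ms:.0f} ms")   # 2 ms
print(f"first-in-batch latency: {batched_ms:.0f} ms")    # 330 ms
```

Even though the batch gives better throughput per image, an individual request can see latency orders of magnitude higher, which is exactly the trade-off the docs describe.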
Piraat
Highly Voted 5 years, 3 months ago
relevant: https://docs.microsoft.com/en-us/azure/machine-learning/how-to-deploy-fpga-web-service
upvoted 5 times
rveney
Most Recent 2 years ago
To ensure the ability to update the logic over time, while maintaining low latency for inferencing without the need for batching in an image classification AI solution, the compute target you should identify is C. central processing units (CPUs).
upvoted 1 times
Nova077
4 years, 9 months ago
https://www.aldec.com/en/company/blog/167--fpgas-vs-gpus-for-machine-learning-applications-which-one-is-better
This article suggests that FPGAs are mostly used where functional safety plays a very important role, such as automotive, avionics, and defense, while GPUs were originally designed for graphics and high-performance computing systems where safety is not a necessity. It's true that FPGAs are somewhat better than GPUs in power efficiency, but since this is graphics and image classification, I wonder if the answer should be GPU.
upvoted 1 times
Community vote distribution: A (35%), C (25%), B (20%), Other