HOTSPOT - For each of the following statements, select Yes if the statement is true. Otherwise, select No. NOTE: Each correct selection is worth one point. Hot Area:
Suggested Answer:
Box 1: Yes - Achieving transparency helps the team to understand the data and algorithms used to train the model, what transformation logic was applied to the data, the final model generated, and its associated assets. This information offers insights about how the model was created, which allows it to be reproduced in a transparent way.
Box 2: No - A data holder is obligated to protect the data in an AI system, and privacy and security are an integral part of this system. Personal data needs to be secured, and it should be accessed in a way that doesn't compromise an individual's privacy.
Box 3: No - Inclusiveness mandates that AI should consider all human races and experiences, and inclusive design practices can help developers to understand and address potential barriers that could unintentionally exclude people. Where possible, speech-to-text, text-to-speech, and visual recognition technology should be used to empower people with hearing, visual, and other impairments. Reference: https://docs.microsoft.com/en-us/azure/cloud-adoption-framework/innovate/best-practices/trusted-ai
I would argue the answer to the 3rd question could be "Yes" as well. By offering different pricing, Microsoft can make its services more accessible or affordable to all types of users, such as those in developing countries.
1. Yes
2. No, it relates to "privacy and security" (see e.g. https://azure.microsoft.com/en-us/services/bot-services/health-bot/#overview )
3. No, it relates to "fairness"
✅ Yes
Explanation: Providing an explanation for a credit loan decision is about transparency—helping users understand how the AI arrived at its decision.
✅ Yes
Explanation: A triage bot prioritizing based on injuries must function safely and reliably in a high-stakes context—this fits under reliability and safety. While some argue for other principles (like fairness), Microsoft categorizes such operational trust under reliability and safety.
❌ No
Explanation: An AI solution offered at different prices for different territories may be a business accessibility strategy, but inclusiveness in Responsible AI refers to making the AI system itself accessible and usable for people with diverse needs—not pricing structures. Unless the context explicitly mentions making AI more accessible to underserved populations, this is not clearly inclusiveness.
For the second question, "No" is correct. In an emergency room, patients in critical condition are treated first -> fairness, since the situations are not equal. Alternatively, it could fall under accountability, which can be covered in the insurance manual.
Answer YYN
Yes. This statement is correct. Providing an explanation for a decision made by an AI system, such as a credit loan application, aligns with the transparency principle, as it helps users understand how and why the decision was made.
Yes. This statement is correct. Prioritizing insurance claims based on injuries is a critical task, and ensuring the reliability and safety of the AI system in such cases is essential. It aligns with the reliability and safety principle.
No. This statement is not an example of the inclusiveness principle. Inclusiveness is more about avoiding bias, discrimination, and ensuring that AI benefits all individuals and groups fairly. Different pricing for different territories may raise concerns about fairness, but it doesn't directly relate to inclusiveness.
YNN is the answer.
https://learn.microsoft.com/en-us/azure/cloud-adoption-framework/innovate/best-practices/trusted-ai#transparency
Achieving transparency helps the team to understand the data and algorithms used to train the model, what transformation logic was applied to the data, the final model generated, and its associated assets. This information offers insights about how the model was created, which allows it to be reproduced in a transparent way. Snapshots within Azure Machine Learning workspaces support transparency by recording or retraining all training-related assets and metrics involved in the experiment.
I thought 2 should be Yes, as some companies might want to prioritize claims based on injury so patients get benefits sooner or receive treatment faster (this depends on the country).