The Microsoft transparency principle for responsible AI requires that AI systems are understandable by users and operators. Therefore, the task that directly aligns with this principle is:
D. Ensure that a training dataset is representative of the population. This helps to ensure that the AI system’s decisions are fair and unbiased, which is a key aspect of transparency.
Option A, while important for accessibility, does not directly relate to the transparency of an AI system. Option B is more about the performance and scalability of the system, and option C, while important for maintainability, does not directly contribute to the transparency of the AI system from the user’s perspective. Therefore, the most appropriate answer is D.
Option A, "Ensure that all visuals have an associated text that can be read by a screen reader," directly supports transparency by making the system accessible and understandable to users with disabilities, ensuring they can interact with and comprehend the AI's outputs.
While D, "Ensure that a training dataset is representative of the population," is critical for fairness and avoiding bias (another responsible AI principle), it is less directly tied to transparency, which focuses on clear communication and user understanding. B and C address scalability and developer support, respectively, but are not directly related to transparency.
To ensure that the service meets the Microsoft transparency principle for responsible AI, you should include the task:
C. Provide documentation to help developers debug code.
Transparency in AI involves making the workings of AI systems understandable and accessible to users and developers. Providing documentation helps developers understand how the AI system operates and how to troubleshoot issues, and it ensures that the system's behavior is clear and predictable.
I first thought D would be the right answer, but it's C:
Transparency:
AI systems should be understandable. Users should be made fully aware of the purpose of the system, how it works, and what limitations may be expected.
I don't like the full answer, but it does cover the main requirement of being "understandable".
While options A, B, and C are important aspects of building an AI system, they do not directly contribute to the transparency of the AI system as defined by the principle. Option A is more related to accessibility, option B is about scalability, and option C is about providing support for developers, none of which directly ensure transparency in AI decision-making. Therefore, option D is the most appropriate choice to ensure the service meets the Microsoft transparency principle for responsible AI.
I do not agree with your explanation.
I think option D isn't appropriate because it is related to the inclusiveness principle.
So the most "correct" answer would be option C.
Among the options:
A. "Ensure that all visuals have an associated text that can be read by a screen reader" focuses on accessibility but doesn't directly address transparency.
B. "Enable autoscaling" is related to system performance but not transparency.
C. "Provide documentation to help developers debug code" is essential for maintainability but doesn't specifically enhance transparency.
D. "Ensure that a training dataset is representative of the population" is crucial for fairness and transparency.
Best Choice:
D. "Ensure that a training dataset is representative of the population" is directly tied to transparency.
A representative dataset helps avoid bias and ensures the model generalizes well to unseen examples.
C is the answer.
https://learn.microsoft.com/en-us/azure/cloud-adoption-framework/innovate/best-practices/trusted-ai#transparency
Achieving transparency helps the team to understand the data and algorithms used to train the model, what transformation logic was applied to the data, the final model generated, and its associated assets. This information offers insights about how the model was created, which allows it to be reproduced in a transparent way. Snapshots within Azure Machine Learning workspaces support transparency by recording or retraining all training-related assets and metrics involved in the experiment.
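To make that recording idea concrete, here is a minimal sketch of logging a training run's parameters, metrics, and model artifact so the experiment can be inspected and reproduced later. It assumes MLflow tracking (which Azure Machine Learning workspaces expose); the tracking URI placeholder and the scikit-learn model are illustrative choices, not part of the exam question.

```python
# Minimal sketch: record the inputs and outputs of a training run so it can be
# inspected and reproduced. Assumes an MLflow tracking backend; pointing the
# tracking URI at an Azure Machine Learning workspace would log the run there.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# mlflow.set_tracking_uri("<workspace MLflow tracking URI>")  # placeholder, assumption

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run():
    params = {"C": 1.0, "max_iter": 200}
    model = LogisticRegression(**params).fit(X_train, y_train)

    # Log what went into the model and how it performed, so the experiment
    # is transparent and can be reproduced from the recorded run.
    mlflow.log_params(params)
    mlflow.log_metric("test_accuracy", accuracy_score(y_test, model.predict(X_test)))
    mlflow.sklearn.log_model(model, "model")
```

The point of the sketch is simply that transparency here is about keeping a record of data, parameters, metrics, and the resulting model, not about any particular framework.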
Community vote distribution: A (35%), C (25%), B (20%), Other.