Exam DP-100 topic 5 question 24 discussion

Actual exam question from Microsoft's DP-100
Question #: 24
Topic #: 5

HOTSPOT -
A biomedical research company plans to enroll people in an experimental medical treatment trial.
You create and train a binary classification model to support selection and admission of patients to the trial. The model includes the following features: Age, Gender, and Ethnicity.
The model returns different performance metrics for people from different ethnic groups.
You need to use Fairlearn to mitigate and minimize disparities for each category in the Ethnicity feature.
Which technique and constraint should you use? To answer, select the appropriate options in the answer area.
NOTE: Each correct selection is worth one point.
Hot Area:

Suggested Answer:
Box 1: Grid Search -
The Fairlearn open-source package provides postprocessing and reduction unfairness mitigation algorithms: ExponentiatedGradient, GridSearch, and ThresholdOptimizer.
Note: The Fairlearn open-source package provides two types of unfairness mitigation algorithms, reduction and post-processing:
✑ Reduction: These algorithms take a standard black-box machine learning estimator (e.g., a LightGBM model) and generate a set of retrained models using a sequence of re-weighted training datasets.
✑ Post-processing: These algorithms take an existing classifier and the sensitive feature as input.

Box 2: Demographic parity -
The Fairlearn open-source package supports the following types of parity constraints: Demographic parity, Equalized odds, Equal opportunity, and Bounded group loss.
Reference:
https://docs.microsoft.com/en-us/azure/machine-learning/concept-fairness-ml
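
For context, here is a minimal sketch of what this combination could look like in code. It is an illustration, not the question's official solution: the data values, the column encoding, and the choice of base estimator (a decision tree) are assumptions.

```python
# Minimal sketch: mitigate Ethnicity-related disparity with the GridSearch reduction
# and a DemographicParity constraint. All data below is invented for illustration.
import pandas as pd
from sklearn.tree import DecisionTreeClassifier
from fairlearn.reductions import GridSearch, DemographicParity
from fairlearn.metrics import demographic_parity_difference

# Hypothetical training data with the features named in the question.
data = pd.DataFrame({
    "Age":       [34, 51, 29, 62, 45, 38, 57, 41],
    "Gender":    [0, 1, 0, 1, 1, 0, 1, 0],
    "Ethnicity": ["A", "B", "A", "C", "B", "C", "A", "B"],
    "Admitted":  [1, 0, 1, 0, 1, 0, 1, 1],
})
X = pd.get_dummies(data[["Age", "Gender", "Ethnicity"]])
y = data["Admitted"]
sensitive = data["Ethnicity"]

# Reduction technique: retrain the estimator over a grid of constraint weights.
sweep = GridSearch(
    estimator=DecisionTreeClassifier(max_depth=3, random_state=0),
    constraints=DemographicParity(),
    grid_size=20,
)
sweep.fit(X, y, sensitive_features=sensitive)

# predict() uses the candidate chosen by the default trade-off rule; every retrained
# candidate is also available in sweep.predictors_ for manual comparison.
y_pred = sweep.predict(X)
print(demographic_parity_difference(y, y_pred, sensitive_features=sensitive))
```

The post-processing alternative mentioned above would instead wrap an already-trained classifier with ThresholdOptimizer; the reduction approach sketched here retrains the model itself.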

Comments

phdykd
9 months, 1 week ago
ChatGPT: Technique: a) Grid search; Constraint: d) Demographic parity
upvoted 3 times
snegnik
11 months ago
I don't understand why you wouldn't just throw out the "Ethnicity" variable?
upvoted 4 times
Yuriy_Ch
1 year, 1 month ago
This exact question was on the exam on 7 March 2023.
upvoted 4 times
phdykd
1 year, 2 months ago
To mitigate and minimize disparities for each category in the Ethnicity feature using Fairlearn, you should use the technique of "Grid search" and the constraint of "Demographic parity".

Grid search is a technique used in Fairlearn to find the optimal combination of algorithmic choices and hyperparameters that minimize the difference in performance across subpopulations. This technique allows you to search through a range of potential models and select the one that achieves the best fairness-accuracy trade-off.

Demographic parity is a constraint used in Fairlearn that aims to ensure that the predicted outcomes are statistically independent of the protected attribute (in this case, ethnicity). This means that the proportion of positive outcomes (admission to the trial) should be the same across all ethnic groups.

Therefore, by using the Grid search technique to find the optimal model that satisfies the Demographic parity constraint, you can mitigate and minimize disparities for each category in the Ethnicity feature.
upvoted 2 times
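
To make the "same proportion of positive outcomes across groups" idea in the comment above concrete, here is a tiny hypothetical sketch; the predictions and group assignments are made up and are not code from the exam or the comment.

```python
# Demographic parity in this setting: the rate of positive predictions (trial
# admissions) should be roughly equal across Ethnicity groups.
# The predictions below are invented purely for illustration.
import pandas as pd

preds = pd.DataFrame({
    "Ethnicity": ["A", "A", "A", "B", "B", "B", "C", "C", "C"],
    "Admitted_pred": [1, 0, 1, 1, 0, 0, 1, 1, 0],
})

# Per-group positive-prediction rate; demographic parity wants these to match.
rates = preds.groupby("Ethnicity")["Admitted_pred"].mean()
print(rates)
print("disparity:", rates.max() - rates.min())  # the gap the constraint tries to shrink
```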
fvil
1 year, 5 months ago
Appeared on exam 07/11/2022
upvoted 3 times
ning
1 year, 10 months ago
Grid search is good for sure. However:
Demographic parity: ensure that an equal number of positive predictions are made in each group.
False-positive rate parity: ensure that each group contains a comparable ratio of false-positive predictions.
So, which one is better?
upvoted 1 times
ning
1 year, 10 months ago
This question might be wrongly worded: Grid Search is really only suited to a binary sensitive feature, and Ethnicity is categorical, so it cannot really be used ...
upvoted 1 times
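
On the comparison raised above, the two constraints measure different per-group quantities. A small hypothetical sketch (made-up labels, predictions, and groups) shows both side by side: demographic parity compares each group's positive-prediction (selection) rate, while false-positive rate parity compares each group's false-positive rate.

```python
# Hypothetical sketch comparing the per-group quantities behind the two constraints.
# All labels, predictions, and group assignments below are invented for illustration.
import pandas as pd
from fairlearn.metrics import MetricFrame, selection_rate, false_positive_rate

y_true    = pd.Series([1, 0, 1, 0, 1, 0, 1, 0, 0])
y_pred    = pd.Series([1, 0, 1, 1, 0, 0, 1, 1, 0])
ethnicity = pd.Series(["A", "A", "A", "B", "B", "B", "C", "C", "C"])

mf = MetricFrame(
    metrics={"selection_rate": selection_rate,
             "false_positive_rate": false_positive_rate},
    y_true=y_true,
    y_pred=y_pred,
    sensitive_features=ethnicity,
)
print(mf.by_group)      # both rates, broken out per ethnic group
print(mf.difference())  # the between-group gap each constraint would try to shrink
```

Which constraint is appropriate depends on which disparity matters for the scenario; the suggested answer above uses demographic parity.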
ranjsi01
2 years, 3 months ago
correct. https://docs.microsoft.com/en-us/learn/modules/detect-mitigate-unfairness-models-with-azure-machine-learning/4-mitigate-with-fairlearn
upvoted 2 times