
Exam DP-100 topic 5 question 19 discussion

Actual exam question from Microsoft's DP-100
Question #: 19
Topic #: 5
[All DP-100 Questions]

You are a data scientist building a deep convolutional neural network (CNN) for image classification.
The CNN model you build shows signs of overfitting.
You need to reduce overfitting and converge the model to an optimal fit.
Which two actions should you perform? Each correct answer presents a complete solution.
NOTE: Each correct selection is worth one point.

  • A. Add an additional dense layer with 512 input units.
  • B. Add L1/L2 regularization.
  • C. Use training data augmentation.
  • D. Reduce the amount of training data.
  • E. Add an additional dense layer with 64 input units.
Suggested Answer: BC

Comments

jsnels86
Highly Voted 5 years ago
I agree, B and C should be the correct answers
upvoted 25 times
...
Yilu
Highly Voted 5 years ago
Adding more training records should decrease overfitting.
upvoted 16 times
Yilu
5 years ago
Answer should be B and C
upvoted 47 times
kty
4 years, 2 months ago
I agree
upvoted 1 times
...
...
...
evangelist
Most Recent 11 months, 2 weeks ago
Selected Answer: BC
Explanation: B. L1/L2 regularization helps prevent overfitting by adding a penalty term to the loss function, discouraging the model from relying too heavily on any particular feature. C. Data augmentation increases the diversity of your training set by applying random (but realistic) transformations to the existing images, which helps the model generalize better and reduce overfitting. These two techniques are commonly used to address overfitting in deep learning models, especially CNNs for image classification.
upvoted 1 times
...
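The penalty term described in the comment above can be sketched in a few lines of numpy. The `regularized_loss` helper and the coefficient values are illustrative, not part of any framework API:

```python
import numpy as np

def regularized_loss(base_loss, weights, l1=0.0, l2=0.0):
    """Add L1/L2 penalty terms to a base loss value.

    Large weights inflate the penalty, so minimizing this total loss
    pushes the model toward smaller, simpler weights and reduces its
    capacity to memorize training noise (overfitting).
    """
    l1_penalty = l1 * np.sum(np.abs(weights))   # sum of |w|
    l2_penalty = l2 * np.sum(weights ** 2)      # sum of w^2
    return base_loss + l1_penalty + l2_penalty

w = np.array([0.5, -2.0, 1.0])
print(regularized_loss(1.0, w, l1=0.01, l2=0.01))  # ≈ 1.0875
```

In Keras the equivalent effect is obtained by passing a regularizer to a layer, e.g. `kernel_regularizer=regularizers.l1_l2(l1=0.01, l2=0.01)`.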
evangelist
1 year ago
B. Add L1/L2 regularization. C. Use training data augmentation. These methods directly address the problem of overfitting by either penalizing overly complex models or by making the training data more diverse and challenging for the model.
upvoted 1 times
...
Matt2000
1 year, 4 months ago
This reference might be useful: https://towardsdatascience.com/8-simple-techniques-to-prevent-overfitting-4d443da2ef7d
upvoted 1 times
...
phdykd
2 years, 3 months ago
The two actions that can help reduce overfitting and converge the model to an optimal fit are:

B. Add L1/L2 regularization: Regularization techniques help reduce overfitting in a neural network. L1/L2 regularization adds a penalty term to the loss function, which encourages the model to learn simpler and smoother weight values. This, in turn, helps prevent overfitting.

C. Use training data augmentation: Data augmentation artificially increases the size of the training dataset by creating new examples from existing data, which helps the model generalize better. Common augmentation techniques for image data include random rotations, flips, and translations.

Options A and E suggest adding additional dense layers, which increase the complexity of the model and can exacerbate overfitting. Option D suggests reducing the amount of training data, which can lead to underfitting and poor generalization. Therefore, options B and C are the best choices for reducing overfitting and improving model performance.
upvoted 2 times
...
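The augmentation transforms mentioned above (flips, translations) can be sketched with numpy alone. The `augment` helper is illustrative; real pipelines typically use framework utilities such as Keras preprocessing layers:

```python
import numpy as np

def augment(image, rng):
    """Randomly flip an image left-right and shift it by one pixel.

    Each epoch the model sees a slightly different version of every
    training image, which discourages memorization and improves
    generalization.
    """
    if rng.random() < 0.5:
        image = np.fliplr(image)          # random horizontal flip
    shift = int(rng.integers(-1, 2))      # translate by -1, 0, or 1 pixel
    image = np.roll(image, shift, axis=1)
    return image

rng = np.random.default_rng(0)
img = np.arange(9).reshape(3, 3)
aug = augment(img, rng)
print(aug.shape)  # same shape as the input, content rearranged
```

Note that flips and shifts only rearrange pixels; the label of the image is unchanged, which is what makes these "free" extra training examples.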
ning
2 years, 12 months ago
Selected Answer: BC
Regularization; increase data through data augmentation.
upvoted 2 times
...
dija123
3 years, 6 months ago
Selected Answer: BC
I agree with B and C
upvoted 5 times
...
saurabhk1
4 years, 3 months ago
Answer should be B and C
upvoted 7 times
...
Neuron
4 years, 4 months ago
Regularisation and data augmentation are correct. Dropout and early stopping are also valid techniques, but they are not among the options.
upvoted 4 times
...
aziti
4 years, 5 months ago
During dropout we are not actually reducing the training data; rather, we drop neurons so the network memorizes less and does not overfit. https://www.kdnuggets.com/2019/12/5-techniques-prevent-overfitting-neural-networks.html So the answer is B and C.
upvoted 4 times
...
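The dropout behavior described above can be sketched as "inverted dropout" in numpy. The `dropout` helper is illustrative, not a framework API:

```python
import numpy as np

def dropout(activations, rate, rng, training=True):
    """Inverted dropout: zero out roughly a fraction `rate` of the
    activations during training and scale the survivors by 1/(1-rate),
    so the expected activation magnitude is unchanged and no scaling
    is needed at inference time.
    """
    if not training or rate == 0.0:
        return activations                      # inference: pass through
    mask = rng.random(activations.shape) >= rate
    return activations * mask / (1.0 - rate)

rng = np.random.default_rng(42)
a = np.ones(10)
out = dropout(a, 0.5, rng)  # each element is either 0.0 or 2.0
```

Because different neurons are silenced on every forward pass, no single neuron can be relied on exclusively, which is why dropout reduces co-adaptation and overfitting.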
Sud3962
4 years, 5 months ago
Answer should be B,C
upvoted 2 times
...
Pucha
4 years, 6 months ago
Yes, BC should be correct. Image data augmentation generates more training data artificially, expanding what the algorithm can learn from.
upvoted 3 times
...
BICube
4 years, 8 months ago
and to support the argument for C: "Image data augmentation is a technique that can be used to artificially expand the size of a training dataset by creating modified versions of images in the dataset. Training deep learning neural network models on more data can result in more skillful models, and the augmentation techniques can create variations of the images that can improve the ability of the fit models to generalize what they have learned to new images." Ref: https://machinelearningmastery.com/how-to-configure-image-data-augmentation-when-training-deep-learning-neural-networks/
upvoted 5 times
...
hima618
4 years, 8 months ago
Yes, BC are correct.
upvoted 2 times
...
rr200
4 years, 10 months ago
BC are the right answers. To reduce overfitting in a DL model, you either increase the training data volume or reduce the complexity of the model.
upvoted 4 times
...
Timeless_Faceless
4 years, 10 months ago
The answer is definitely B and C
upvoted 4 times
...
Community vote distribution: A (35%), C (25%), B (20%), Other