During mini-batch training of a neural network for a classification problem, a Data Scientist notices that training accuracy oscillates. What is the MOST likely cause of this issue?
A.
The class distribution in the dataset is imbalanced.
Answer is D.
Should the weight be increased or reduced so that the error becomes smaller than its current value? To answer that, we examine the rate of change: we differentiate, check whether the slope of the tangent is positive or negative, and update the weight in the direction that reduces the error. This operation is repeated over and over to approach the optimal solution. The size of each update step is critical here, and it is determined by the learning rate.
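As a minimal sketch of that update rule, assuming a hypothetical quadratic error E(w) = (w - 3)^2 (chosen only for illustration; none of these values come from the question):

```python
# Hypothetical error function with its minimum at w = 3.
def error(w):
    return (w - 3.0) ** 2

# dE/dw: the slope of the tangent at w.
def gradient(w):
    return 2.0 * (w - 3.0)

w = 0.0              # initial weight
learning_rate = 0.1  # the step-size knob described above

for step in range(20):
    grad = gradient(w)
    # Move against the slope: a positive slope means the error grows
    # as w grows, so we decrease w, and vice versa.
    w -= learning_rate * grad

print(f"w after 20 steps: {w:.4f}, error: {error(w):.6f}")
```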
D. The learning rate is very high.
Explanation:
When the learning rate is too high, the optimization process may overshoot the optimal weights in parameter space. Instead of gradually converging, the model updates weights in a highly unstable manner, causing fluctuations in training accuracy. The network fails to settle into a minimum because the updates are too aggressive.
A high learning rate can cause oscillations in the training accuracy because the optimizer makes large updates to the model parameters in each iteration, which can cause it to overshoot the optimal values. The model can then oscillate back and forth across the optimal solution.
If the learning rate is too high, the model weights may overshoot the optimal values and bounce back and forth around the minimum of the loss function. This can cause the training accuracy to oscillate and prevent the model from converging to a stable solution. The training accuracy is the proportion of correct predictions made by the model on the training data.
When the learning rate is set too high, it can lead to oscillations or divergence during training. Here's why:
High Learning Rate: A high learning rate means that the model's parameters are updated by a large amount in each training step. This can cause the model to overshoot the optimal parameter values, leading to instability in training.
Oscillations: If the learning rate is excessively high, the model's updates can become unstable, causing it to oscillate back and forth between parameter values. This oscillation can prevent the model from converging to an optimal solution.
To address this issue, you can try reducing the learning rate. It's often necessary to experiment with different learning rates to find the one that works best for your specific problem and dataset. Learning rate scheduling techniques, such as reducing the learning rate over time, can also help stabilize training.
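To see the oscillation concretely, here is an illustrative sketch (the loss E(w) = w^2 and both learning-rate values are hypothetical, chosen only to make the contrast visible):

```python
# Gradient descent on E(w) = w^2, whose gradient is 2w.
# With lr = 1.1 each update overshoots the minimum at w = 0 and flips
# sign with growing magnitude (oscillation/divergence); with lr = 0.1
# the iterates shrink smoothly toward 0.
def run(lr, steps=8, w0=1.0):
    w = w0
    trajectory = [w]
    for _ in range(steps):
        w -= lr * 2.0 * w
        trajectory.append(w)
    return trajectory

print("lr=1.1 (too high):", [f"{w:+.2f}" for w in run(1.1)])
print("lr=0.1 (stable):  ", [f"{w:+.2f}" for w in run(0.1)])
```

With lr = 1.1 the update is w ← w(1 - 2.2) = -1.2w, so the sign flips every step and the magnitude grows; with lr = 0.1 it is w ← 0.8w, which decays steadily. That sign-flipping around the minimum is exactly the oscillation seen in the training accuracy.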
Answer is A.
A high learning rate means that the model parameters are being updated by large magnitudes in each iteration. As a result, the optimization process may struggle to converge to the optimal solution, leading to erratic behavior and fluctuations in training accuracy.
D. The learning rate is too high. This is a textbook example of an excessive learning rate. A lower learning rate will take more iterations, or longer to train, but will settle into the minimum.
A company sells thousands of products on a public website and wants to automatically identify products with potential durability problems. The company has 1,000 reviews with date, star rating, review text, review summary, and customer email fields, but many reviews are incomplete and have empty fields. Each review has already been labeled with the correct durability result.
A machine learning specialist must train a model to identify reviews expressing concerns over product durability. The first model needs to be trained and ready to review in 2 days.
What is the MOST direct approach to solve this problem within 2 days?
A.
Train a custom classifier by using Amazon Comprehend.
B.
Build a recurrent neural network (RNN) in Amazon SageMaker by using Gluon and Apache MXNet.
C.
Train a built-in BlazingText model using Word2Vec mode in Amazon SageMaker.
D.
Use a built-in seq2seq model in Amazon SageMaker.
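Option A is the most direct fit for the 2-day constraint: Amazon Comprehend can train a custom text classifier straight from the labeled reviews, with no model code to write. As a hedged sketch of that workflow using boto3 (the S3 path, IAM role ARN, and classifier name are hypothetical placeholders, not given in the question; the training CSV is assumed to hold a label column and the review text):

```python
import boto3

comprehend = boto3.client("comprehend", region_name="us-east-1")

# Kick off asynchronous training of a custom classifier from labeled text.
response = comprehend.create_document_classifier(
    DocumentClassifierName="durability-review-classifier",            # hypothetical name
    DataAccessRoleArn="arn:aws:iam::123456789012:role/ComprehendS3",  # hypothetical role
    InputDataConfig={"S3Uri": "s3://example-bucket/reviews/train.csv"},  # hypothetical path
    LanguageCode="en",
)
arn = response["DocumentClassifierArn"]

# Training runs asynchronously; poll until the status reaches TRAINED.
props = comprehend.describe_document_classifier(DocumentClassifierArn=arn)
print(props["DocumentClassifierProperties"]["Status"])
```

By contrast, BlazingText in Word2Vec mode (option C) only learns embeddings rather than a classifier, and building an RNN or seq2seq model (options B and D) requires far more development time than the deadline allows.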