You recently developed a deep learning model using Keras, and now you are experimenting with different training strategies. First, you trained the model using a single GPU, but the training process was too slow. Next, you distributed the training across 4 GPUs using tf.distribute.MirroredStrategy (with no other changes), but you did not observe a decrease in training time. What should you do?
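For context, here is a minimal sketch of the usual MirroredStrategy setup. The model and the synthetic data are hypothetical, purely for illustration. The key point, which follows from the tf.distribute guide, is that MirroredStrategy splits each global batch across replicas, so keeping the single-GPU batch size unchanged leaves each GPU with a tiny slice of work and typically yields no wall-clock speedup; scaling the batch size by strategy.num_replicas_in_sync is the common remedy.

```python
import numpy as np
import tensorflow as tf

# MirroredStrategy splits each (global) batch across the available GPUs.
# If the batch size is left at its single-GPU value, each replica gets a
# small slice and per-step overhead dominates, so training time does not
# drop. Scaling the batch size by the number of replicas is the usual fix.
strategy = tf.distribute.MirroredStrategy()
print("Number of replicas:", strategy.num_replicas_in_sync)  # 4 on a 4-GPU host

PER_REPLICA_BATCH_SIZE = 64  # hypothetical single-GPU batch size
global_batch_size = PER_REPLICA_BATCH_SIZE * strategy.num_replicas_in_sync

# Variables (model, optimizer, metrics) must be created inside the scope
# so they are mirrored across the GPUs.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(20,)),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# Synthetic in-memory data, purely for illustration.
x = np.random.random((10_000, 20)).astype("float32")
y = np.random.random((10_000, 1)).astype("float32")
dataset = tf.data.Dataset.from_tensor_slices((x, y)).batch(global_batch_size)

model.fit(dataset, epochs=2)
```

Note that this keeps the per-replica batch size constant while growing the global batch size, which is what lets each GPU stay saturated; learning-rate adjustments often accompany this change, but that is a separate tuning decision.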