You recently developed a deep learning model using Keras, and now you are experimenting with different training strategies. First, you trained the model using a single GPU, but the training process was too slow. Next, you distributed the training across 4 GPUs using tf.distribute.MirroredStrategy (with no other changes), but you did not observe a decrease in training time. What should you do?
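A likely cause, per the tf.distribute documentation, is that MirroredStrategy splits each global batch across the replicas: if the batch size from the single-GPU run is kept unchanged, each of the 4 GPUs processes only a quarter-sized slice per step, so per-step GPU utilization drops while synchronization overhead is added, and wall-clock training time barely improves. The usual fix is to scale the global batch size by the number of replicas. Below is a minimal sketch of that adjustment; the layer sizes, per-replica batch size, and random dataset are hypothetical stand-ins, not part of the original question.

```python
import numpy as np
import tensorflow as tf

# MirroredStrategy replicates the model on every visible GPU and
# divides each global batch across the replicas.
strategy = tf.distribute.MirroredStrategy()
print("Replicas in sync:", strategy.num_replicas_in_sync)

# Hypothetical per-replica batch size carried over from the
# single-GPU run; scale it so each GPU still gets a full batch.
PER_REPLICA_BATCH_SIZE = 64
global_batch_size = PER_REPLICA_BATCH_SIZE * strategy.num_replicas_in_sync

# Model and optimizer must be created inside the strategy scope so
# variables are mirrored across devices.
with strategy.scope():
    model = tf.keras.Sequential([
        tf.keras.layers.Dense(128, activation="relu", input_shape=(32,)),
        tf.keras.layers.Dense(1),
    ])
    model.compile(optimizer="adam", loss="mse")

# Hypothetical random data; batching with the scaled global batch
# size keeps all 4 GPUs fully utilized each step.
x = np.random.rand(10_000, 32).astype("float32")
y = np.random.rand(10_000, 1).astype("float32")
dataset = tf.data.Dataset.from_tensor_slices((x, y)).batch(global_batch_size)

model.fit(dataset, epochs=2)
```

Note that when the effective batch size grows, the learning rate often needs to be scaled up as well (linear scaling is a common heuristic) to keep convergence behavior comparable to the single-GPU run.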