You work for a small company that has deployed an ML model with autoscaling on Vertex AI to serve online predictions in a production environment. The current model receives about 20 prediction requests per hour with an average response time of one second. You have retrained the same model on a new batch of data, and now you are canary testing it, sending ~10% of production traffic to the new model. During this canary test, you notice that prediction requests for your new model are taking between 30 and 180 seconds to complete. What should you do?
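The latency gap described in the scenario (a ~1 s baseline versus 30–180 s on the canary) is exactly the kind of regression an automated canary check should flag before traffic is shifted further. A minimal sketch of such a check follows; the function name, the percentile choice, and the 5x slowdown threshold are illustrative assumptions, not part of any Vertex AI API:

```python
def should_roll_back(baseline_p50_s, canary_latencies_s, slowdown_factor=5.0):
    """Flag the canary for rollback if its median response time
    exceeds slowdown_factor times the baseline median.

    baseline_p50_s      -- median latency of the current model, in seconds
    canary_latencies_s  -- observed per-request latencies of the canary
    slowdown_factor     -- illustrative tolerance before flagging
    """
    canary_sorted = sorted(canary_latencies_s)
    canary_p50 = canary_sorted[len(canary_sorted) // 2]
    return canary_p50 > slowdown_factor * baseline_p50_s

# Baseline answers in ~1 s; canary requests take 30-180 s as in the question.
print(should_roll_back(1.0, [30.0, 75.0, 180.0]))  # → True
```

With the numbers from the question, the check trips immediately, which is why the sensible first move in a canary test is to stop or reduce traffic to the new model while investigating the cause (for example, resource sizing or autoscaling settings on the new deployment) rather than letting slow requests reach more users.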