You developed an ML model with AI Platform, and you want to move it to production. You serve a few thousand queries per second and are experiencing latency issues. Incoming requests are served by a load balancer that distributes them across multiple Kubeflow CPU-only pods running on Google Kubernetes Engine
(GKE). Your goal is to improve the serving latency without changing the underlying infrastructure. What should you do?
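Before choosing a fix, it helps to be able to measure the serving latency you are trying to improve. Below is a minimal Python sketch, assuming the pods expose a TensorFlow Serving-style REST predict endpoint behind the load balancer; the endpoint URL, model name, and request payload are hypothetical placeholders, since the question does not specify them. It records per-request latency so the p50/p99 impact of any serving-side change can be compared before and after.

# Minimal latency probe for the serving endpoint behind the load balancer.
# ENDPOINT, the model name, and PAYLOAD are hypothetical placeholders.
import statistics
import time

import requests

ENDPOINT = "http://LOAD_BALANCER_IP:8501/v1/models/my_model:predict"  # hypothetical
PAYLOAD = {"instances": [[0.0] * 10]}  # hypothetical feature vector

latencies_ms = []
for _ in range(200):
    start = time.perf_counter()
    requests.post(ENDPOINT, json=PAYLOAD, timeout=5)
    latencies_ms.append((time.perf_counter() - start) * 1000.0)

latencies_ms.sort()
p50 = statistics.median(latencies_ms)
p99 = latencies_ms[int(0.99 * (len(latencies_ms) - 1))]
print(f"p50 = {p50:.1f} ms, p99 = {p99:.1f} ms")

Running this probe before and after a tuning change gives a like-for-like comparison without touching the underlying GKE infrastructure.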