A development team at your company has created a dockerized HTTPS web application. You need to deploy the application on Google Kubernetes Engine (GKE) and make sure that the application scales automatically. How should you deploy to GKE?
A.
Use the Horizontal Pod Autoscaler and enable cluster autoscaling. Use an Ingress resource to load-balance the HTTPS traffic.
B.
Use the Horizontal Pod Autoscaler and enable cluster autoscaling on the Kubernetes cluster. Use a Service resource of type LoadBalancer to load-balance the HTTPS traffic.
C.
Enable autoscaling on the Compute Engine instance group. Use an Ingress resource to load-balance the HTTPS traffic.
D.
Enable autoscaling on the Compute Engine instance group. Use a Service resource of type LoadBalancer to load-balance the HTTPS traffic.
"Ingress is a Kubernetes resource that encapsulates a collection of rules and configuration for routing external HTTP(S) traffic to internal services.
On GKE, Ingress is implemented using Cloud Load Balancing. When you create an Ingress in your cluster, GKE creates an HTTP(S) load balancer and configures it to route traffic to your application."
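To make the quoted description concrete, a minimal Ingress manifest might look like the following sketch (the resource name and the backend Service name `web-service` are illustrative, not from the question):

```yaml
# Hypothetical sketch: an Ingress that sends external HTTP(S) traffic
# to a backend Service named "web-service" (names are illustrative).
# On GKE, applying this manifest causes the built-in Ingress controller
# to provision an external HTTP(S) load balancer.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  defaultBackend:
    service:
      name: web-service
      port:
        number: 443
```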
Are you exposing multiple services through a single IP address? If so, do you need to route your traffic?
Correct answer is B.
service loadBalancer: https://cloud.google.com/kubernetes-engine/docs/concepts/service-load-balancer
This page provides a general overview of how Google Kubernetes Engine (GKE) creates and manages Google Cloud load balancers when you apply a Kubernetes LoadBalancer Services manifest. It describes the different types of load balancers and how settings like the externalTrafficPolicy and GKE subsetting for L4 internal load balancers determine how the load balancers are configured. -> L4 TCP/UDP, not HTTPS
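For contrast with the Ingress approach, a Service of type LoadBalancer (what option B proposes) can be sketched like this; on GKE it provisions an L4 passthrough load balancer rather than an HTTP(S) one (the names, selector, and ports are illustrative assumptions):

```yaml
# Hypothetical sketch: a Service of type LoadBalancer.
# On GKE this creates an L4 (TCP/UDP) passthrough load balancer,
# not an HTTP(S) load balancer. Names and ports are illustrative.
apiVersion: v1
kind: Service
metadata:
  name: web-service
spec:
  type: LoadBalancer
  selector:
    app: web
  ports:
  - port: 443
    targetPort: 8443
```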
Ingress: https://cloud.google.com/kubernetes-engine/docs/concepts/ingress This page provides a general overview of what Ingress for external Application Load Balancers is and how it works. Google Kubernetes Engine (GKE) provides a built-in and managed Ingress controller called GKE Ingress. This controller implements Ingress resources as Google Cloud load balancers for HTTP(S) workloads in GKE. -> HTTP(S)
I'm assuming B is the suggested answer because the question doesn't state that the application should be available externally. Services allow exposing resources internally and to load balancers.
However, it should be A, as the assumption would be an external web application.
https://cloud.google.com/kubernetes-engine/docs/concepts/service
https://cloud.google.com/kubernetes-engine/docs/concepts/ingress
"This page provides a general overview of what Ingress for external Application Load Balancers is and how it works. Google Kubernetes Engine (GKE) provides a built-in and managed Ingress controller called GKE Ingress. This controller implements Ingress resources as Google Cloud load balancers for HTTP(S) workloads in GKE."
https://cloud.google.com/kubernetes-engine/docs/concepts/ingress
As there is no mention of the type of traffic, internal or external, going with A (Ingress).
Options C and D are clearly wrong.
Between A and B: B is the correct answer, because it load-balances the ingress traffic in the Kubernetes-native style. That is also why cluster scaling is done.
This is how it should work:
External load-balancing ingress --> K8S Service of type LoadBalancer --> pods that can autoscale
Routing external load-balanced ingress traffic directly to autoscaled pods defeats the point of using GKE.
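The pod-autoscaling piece of that chain can be sketched with a HorizontalPodAutoscaler manifest (the Deployment name `web`, replica bounds, and CPU threshold are illustrative assumptions); cluster autoscaling itself is enabled separately on the node pool, e.g. via `gcloud container clusters update` with `--enable-autoscaling`:

```yaml
# Hypothetical sketch: an HPA targeting a Deployment named "web"
# (name and thresholds are illustrative). The HPA scales pods;
# cluster autoscaling adds/removes nodes when pods can't be scheduled.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```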
Ingress is HTTP(S), while Service is TCP/UDP.
https://cloud.google.com/load-balancing/docs/choosing-load-balancer
https://cloud.google.com/kubernetes-engine/docs/concepts/service-networking
Both options A and B can satisfy the requirements; they are both based on a load balancer. Option A is more suitable and more flexible: later on, you can set up routing rules to expose more than just one service using the same load balancer, which can help reduce cost. You don't really need that flexibility for this case, but since it's going to cost the same thing for now (the cost of one load balancer), it's better to go with the Ingress option.
I'm going to go with B just because horizontal pod scaling has a ceiling, and you do need to enable cluster scaling if you actually need a new node. However, I have no idea if this is the right answer. I also favor using a load balancer over an Ingress resource, as the GKE quickstart talks about the former and does not mention the Ingress resource at all.
Option A is incorrect because although it mentions using the Horizontal Pod Autoscaler and enabling cluster autoscaling, it doesn't specify how to expose the application to the internet using a LoadBalancer.
Correct Answer is B
Man, this question... many people are saying A. Ingress is not a load balancer; in Kubernetes, Ingress only routes the traffic to a Service: https://kubernetes.io/docs/concepts/services-networking/ingress/ . A is incorrect because it says to use Ingress to load-balance, which is not correct. B is correct.