Exam: Professional Cloud DevOps Engineer, Topic 1, Question #44 discussion

Actual exam question from Google's Professional Cloud DevOps Engineer
Question #: 44
Topic #: 1

Your team has recently deployed an NGINX-based application into Google Kubernetes Engine (GKE) and has exposed it to the public via an HTTP Google Cloud Load Balancer (GCLB) ingress. You want to scale the deployment of the application's frontend using an appropriate Service Level Indicator (SLI). What should you do?

  • A. Configure the horizontal pod autoscaler to use the average response time from the Liveness and Readiness probes.
  • B. Configure the vertical pod autoscaler in GKE and enable the cluster autoscaler to scale the cluster as pods expand.
  • C. Install the Stackdriver custom metrics adapter and configure a horizontal pod autoscaler to use the number of requests provided by the GCLB.
  • D. Expose the NGINX stats endpoint and configure the horizontal pod autoscaler to use the request metrics exposed by the NGINX deployment.
Suggested Answer: B

Comments

Charun
Highly Voted 2 years, 10 months ago
Option C
upvoted 13 times
kubosuke
Highly Voted 2 years, 9 months ago
C is correct.
A (average response time from the liveness and readiness probes): using health checks as a scaling trigger is odd. If the health check's response time degrades, the cause may be resource issues such as CPU or memory, so you should use those values as SLIs instead.
B (vertical pod autoscaler plus cluster autoscaler): this doesn't address pod autoscaling for the frontend at all.
D (NGINX stats endpoint): if you want to use request metrics as SLIs, you should feed them in as custom metrics; exposing the NGINX stats endpoint directly is a bit redundant.
upvoted 12 times
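For reference, here is a minimal sketch of the native resource-metric path the comment above alludes to: the HPA can target CPU utilization out of the box with no adapter, although CPU is a capacity signal rather than the request-based SLI the question asks for. All resource names below are hypothetical.

```yaml
# Minimal sketch (assumed names): a native CPU-utilization HPA.
# Needs no metrics adapter, but CPU is a resource signal, not the
# request-count SLI that answer C is built around.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-frontend-cpu
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-frontend   # hypothetical Deployment name
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # scale out above ~70% average CPU
```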
jomonkp
Most Recent 5 months, 2 weeks ago
Selected Answer: C
Option C
upvoted 1 time
Wael216
1 year, 2 months ago
Selected Answer: C
C. Install the Stackdriver custom metrics adapter and configure a horizontal pod autoscaler to use the number of requests provided by the GCLB. To scale the deployment of the application's frontend using an appropriate Service Level Indicator (SLI), we need to monitor the traffic coming to the application. One way to do this is to install the Stackdriver custom metrics adapter, which provides visibility into GCLB metrics such as request counts, bytes sent and received, and active connections. We can then configure a horizontal pod autoscaler (HPA) to scale the number of pods based on the request count coming through the GCLB, which will help to ensure that our application is always available to handle the incoming traffic.
upvoted 3 times
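As a rough illustration of the setup described above (not an official manifest; the Deployment name, forwarding-rule label, and target value are assumptions), an HPA consuming the GCLB request count as an external metric via the Custom Metrics Stackdriver Adapter might look like this:

```yaml
# Hedged sketch: HPA scaling on the load balancer's request count,
# assuming the Custom Metrics Stackdriver Adapter is already deployed
# in the cluster. All resource names below are hypothetical.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: nginx-frontend
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-frontend
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: External
    external:
      metric:
        # Stackdriver metric name, with "/" escaped as "|" for the HPA
        name: loadbalancing.googleapis.com|https|request_count
        selector:
          matchLabels:
            # Hypothetical label narrowing the metric to this LB's forwarding rule
            resource.labels.forwarding_rule_name: nginx-frontend-rule
      target:
        type: AverageValue
        averageValue: "100"   # aim for ~100 requests per replica
```

With an AverageValue target, the HPA divides the total request count across replicas, so the fleet grows as GCLB traffic grows, which matches the request-based SLI the question describes.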
zellck
1 year, 6 months ago
Selected Answer: C
C is the answer.
upvoted 1 time
pradoUA
1 year, 6 months ago
Selected Answer: C
I will go with C
upvoted 1 time
ramzez4815
1 year, 7 months ago
Selected Answer: C
C is the correct answer as per Google documentation
upvoted 2 times
khushboo93s
1 year, 11 months ago
C is correct
upvoted 1 time
Murty549
2 years, 4 months ago
Selected Answer: C
Option B is incorrect because there is no benefit in vertically scaling the frontend when the request volume is very high: in that case the network is the bottleneck, not the instance resources. In my opinion the answer is C.
upvoted 2 times
cloudbee
2 years, 4 months ago
C looks more feasible to me, but B is also in small favour. The horizontal pod autoscaler will scale based on custom metrics such as requests per second (i.e. number of requests). Vertical autoscaling is also a useful feature for the frontend application, since it needs more resources when traffic is high. However, vertical autoscaling first deletes the pod and recreates it with adjusted CPU and memory, which can cause downtime for that duration and is not recommended. Hence I am more inclined towards answer C. Still, please reply to my answer with your explanation. https://cloud.google.com/kubernetes-engine/docs/concepts/horizontalpodautoscaler#overview
upvoted 1 time
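For contrast with option B, a minimal VerticalPodAutoscaler sketch (names assumed; requires the VPA components to be enabled on the cluster). In "Auto" mode the updater evicts and recreates pods to apply new resource requests, which is the disruption the comment above describes:

```yaml
# Hedged VPA sketch for contrast with the HPA approach. In "Auto" mode,
# pods are evicted and recreated with resized CPU/memory requests, so
# scaling vertically can briefly disrupt serving capacity.
apiVersion: autoscaling.k8s.io/v1
kind: VerticalPodAutoscaler
metadata:
  name: nginx-frontend
spec:
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: nginx-frontend   # hypothetical Deployment name
  updatePolicy:
    updateMode: "Auto"     # "Off" would only emit recommendations
```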
Shasha1
2 years, 5 months ago
B is correct. Front-end web applications scale based on the number of incoming requests, so they need vertical scaling; back-end batch processing scales horizontally. Reference: https://docs.rightscale.com/faq/What_is_auto-scaling.html
upvoted 3 times
TNT87
2 years, 8 months ago
C https://cloud.google.com/kubernetes-engine/docs/tutorials/autoscaling-metrics
upvoted 2 times
IamSuren
2 years, 9 months ago
According to Google (https://cloud.google.com/kubernetes-engine/docs/tutorials/autoscaling-metrics#cpu_1), Horizontal Pod Autoscalers can scale based on CPU utilization natively, so the Custom Metrics Adapter is not needed; therefore C doesn't fit.
upvoted 2 times
[Removed]
2 years, 8 months ago
That is why answer C says number of requests, not CPU utilization.
upvoted 5 times
ralf_cc
2 years, 11 months ago
C - https://cloud.google.com/kubernetes-engine/docs/tutorials/autoscaling-metrics. You want to scale horizontally.
upvoted 4 times
rinkeshgala1
2 years, 11 months ago
Option B
upvoted 1 time
Community vote distribution: A (35%), C (25%), B (20%), other