Exam Professional Cloud Network Engineer topic 1 question 25 discussion

Actual exam question from Google's Professional Cloud Network Engineer
Question #: 25
Topic #: 1

You have a web application that is currently hosted in the us-central1 region. Users experience high latency when traveling in Asia. You've configured a network load balancer, but users have not experienced a performance improvement. You want to decrease the latency.
What should you do?

  • A. Configure a policy-based route rule to prioritize the traffic.
  • B. Configure an HTTP load balancer, and direct the traffic to it.
  • C. Configure Dynamic Routing for the subnet hosting the application.
  • D. Configure the TTL for the DNS zone to decrease the time between updates.
Suggested Answer: B

Comments

Barry123456
Highly Voted 2 years, 5 months ago
All of these answers stink. How would a load balancer decrease latency to your application? Latency and distance are related, and none of these options decreases either one. B is the best of the worst.
upvoted 8 times
badrik
2 years, 2 months ago
You have to look at it from the angle that a Network load balancer is regional while an HTTP load balancer is global, which ultimately reduces latency for end users coming in from other regions.
upvoted 7 times
...
...
saraali
Most Recent 2 months, 2 weeks ago
Selected Answer: B
The correct option is B. An HTTP(S) load balancer provides global load balancing, which can direct user traffic to the closest backend based on the user's geographic location. Since users in Asia experience high latency when the application is hosted only in us-central1, configuring an HTTP(S) load balancer lets the system route traffic to the nearest available backend (such as one in an Asian region), significantly reducing latency; see the sketch below. In contrast, a Network Load Balancer operates at the TCP/UDP level and does not optimize for geographic location, leaving high latency for users far from the us-central1 region.
upvoted 1 times
...
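As a rough illustration of what option B implies in practice, here is a minimal sketch that adds an Asian backend to the backend service behind a global external HTTP(S) load balancer. It assumes the google-cloud-compute Python client library, an already-existing global backend service, and a managed instance group in an Asian region; the project ID, backend service name, and instance group name are all hypothetical, not taken from the question.

```python
# Hedged sketch: attach an Asian instance group to the backend service behind
# a global external HTTP(S) load balancer so the LB can send Asian users to a
# nearby backend. Names below are hypothetical placeholders.
from google.cloud import compute_v1

PROJECT = "my-project"                   # hypothetical project ID
BACKEND_SERVICE = "web-backend-service"  # assumed existing global backend service
ASIA_MIG = (
    "projects/my-project/zones/asia-southeast1-b/"
    "instanceGroups/web-mig-asia"        # hypothetical managed instance group
)

client = compute_v1.BackendServicesClient()

# Fetch the current global backend service configuration.
backend_service = client.get(project=PROJECT, backend_service=BACKEND_SERVICE)

# Add the Asian instance group as an additional backend.
backend_service.backends.append(
    compute_v1.Backend(
        group=ASIA_MIG,
        balancing_mode="UTILIZATION",
        max_utilization=0.8,
        capacity_scaler=1.0,
    )
)

# Push the updated configuration and wait for the operation to complete.
operation = client.update(
    project=PROJECT,
    backend_service=BACKEND_SERVICE,
    backend_service_resource=backend_service,
)
operation.result()
print(f"{BACKEND_SERVICE} now has {len(backend_service.backends)} backends")
```

With a global external HTTP(S) load balancer, the single anycast IP and the URL map stay unchanged; once a backend exists in an Asian region, requests from Asian users are routed to it automatically.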
xhilmi
10 months, 3 weeks ago
Selected Answer: B
To decrease latency for users in Asia accessing a web application hosted in the us-central1 region, one effective strategy is to leverage content delivery networks (CDNs) that have edge locations in Asia. This helps serve content from a location closer to the users, reducing latency. Therefore, the most suitable option is: B. Configure an HTTP load balancer, and direct the traffic to it.
upvoted 1 times
...
i_0_i
1 year, 1 month ago
Selected Answer: B
Cloud CDN works with the global external Application Load Balancer or the classic Application Load Balancer to deliver content to your users: https://cloud.google.com/cdn/docs/overview
upvoted 2 times
...
pk349
1 year, 9 months ago
B: Dynamic Routing does not work because the application is ONLY in the one us-central1 region. With a network load balancer, user requests still enter the Google network at the closest edge PoP (in Premium Tier), but in the region where the project's VMs are located, traffic flows first through the network load balancer.
upvoted 1 times
...
conip
1 year, 10 months ago
Selected Answer: B
NLB is just pass-through - the same SYN, SYN-ACK, ACK travels end to end. A GLB (HTTP) is a proxy and keeps connections open to the backend. Assuming Premium Tier, both enter at the closest PoP, so there is no regional aspect to consider on ingress. https://cloud.google.com/load-balancing/docs/tutorials/optimize-app-latency#network-load-balancing
upvoted 1 times
...
DA_007
1 year, 11 months ago
The question is asking how to route packets from Asia to the US over Google's private network instead of the internet, and hence reduce latency. B is correct: with an HTTP LB, backends can be in any region and any VPC network (Premium Tier). With a Network LB, on the other hand, the backend service must be in the same region and VPC network as the forwarding rule.
upvoted 1 times
...
GCP72
2 years, 2 months ago
Selected Answer: B
The correct answer is "B"
upvoted 1 times
...
kumarp6
2 years, 10 months ago
Answer is : B
upvoted 2 times
...
Vidyasagar
3 years, 7 months ago
B is the one
upvoted 4 times
...
eeghai7thioyaiR4
3 years, 8 months ago
An HTTP load balancer may help a bit. The speed of light is unchanged (US <-> Asia is a long trip), but users connect to the HTTP load balancer at a nearby edge. A TCP connection uses a 3-way handshake, so additional round trips are required; HTTP load balancers use keepalive, though, so connections to the origin are kept open across requests. So instead of cust <-> US (2 long RTTs), you get cust <-> Asia (2 small RTTs) + Asia <-> US (1 long RTT); see the sketch below.
upvoted 2 times
...
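To make the round-trip arithmetic above concrete, here is a small back-of-envelope calculation. The RTT values are assumptions chosen purely for illustration, not measurements of any real path.

```python
# Back-of-envelope latency comparison based on the RTT reasoning above.
# The RTT constants are illustrative assumptions, not measured values.
RTT_ASIA_TO_US_CENTRAL1_MS = 150  # "long" round trip across the Pacific
RTT_ASIA_TO_LOCAL_EDGE_MS = 20    # "small" round trip to a nearby edge PoP

# Pass-through network LB: the TCP handshake and the HTTP request/response
# both traverse the long path end to end (2 long RTTs).
direct_ms = 2 * RTT_ASIA_TO_US_CENTRAL1_MS

# Global HTTP(S) LB: the client handshakes with the nearby edge proxy
# (2 small RTTs); the proxy reuses a warm keepalive connection to the
# us-central1 origin, so only the request/response crosses the ocean
# (1 long RTT).
proxied_ms = 2 * RTT_ASIA_TO_LOCAL_EDGE_MS + RTT_ASIA_TO_US_CENTRAL1_MS

print(f"Network LB (pass-through): ~{direct_ms} ms to first response")
print(f"Global HTTP LB (edge proxy): ~{proxied_ms} ms to first response")
```

With these assumed numbers the proxied path comes to roughly 190 ms versus 300 ms direct, which is the improvement the comment is describing even before any backend is moved closer to the users.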
densnoigaskogen
3 years, 9 months ago
Answer is B. Network LB is a regional service. This scenario requires a global-scale LB, thus HTTP LB is the correct choice.
upvoted 3 times
...
Gharet
3 years, 10 months ago
B is correct
upvoted 1 times
...
[Removed]
3 years, 11 months ago
Ans - B
upvoted 1 times
...
saurabh1805
4 years, 2 months ago
B is correct answer here.
upvoted 4 times
...
Community vote distribution: A (35%), C (25%), B (20%), Other