Exam 350-901 topic 1 question 14 discussion

Actual exam question from Cisco's 350-901
Question #: 14
Topic #: 1

Where should distributed load balancing occur in a horizontally scalable architecture?

  • A. firewall-side/policy load balancing
  • B. network-side/central load balancing
  • C. service-side/remote load balancing
  • D. client-side/local load balancing
Suggested Answer: D
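For context on the suggested answer: client-side (local) load balancing means each client instance chooses a backend itself, with no central balancer in the path. A minimal round-robin sketch in Python (the backend names are hypothetical; in practice the list would come from DNS or service discovery):

```python
import itertools

class ClientSideBalancer:
    """Round-robin balancer embedded in the client: each client
    instance picks the next backend itself, with no central device."""

    def __init__(self, backends):
        self._backends = list(backends)
        self._cycle = itertools.cycle(self._backends)

    def next_backend(self):
        # Each call returns the next server in rotation.
        return next(self._cycle)

# Hypothetical backend pool for illustration only.
lb = ClientSideBalancer(["app-1:8080", "app-2:8080", "app-3:8080"])
picks = [lb.next_backend() for _ in range(4)]
print(picks)  # round-robin wraps back to app-1 on the fourth pick
```

Because every client holds its own rotation state, the balancing decision is fully distributed: no single device sees all traffic.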

Comments

Chosen Answer:
fechao
Highly Voted 3 years, 3 months ago
I choose D
upvoted 7 times
...
examtopicstroilevw
Most Recent 1 month, 3 weeks ago
Selected Answer: D
Client-side appears more distributed than the other choices, as each client can make the decision independently. Questions like this feel more like a 'dictionary' test.
upvoted 1 times
...
tartarus23
5 months, 1 week ago
Selected Answer: D
The correct answer is D, client-side/local load balancing. Explanation: in a horizontally scalable architecture, distributed load balancing should ideally occur on the client side, because it lets each client route traffic directly to the various servers, ensuring a more even distribution of load.
upvoted 2 times
...
designated
1 year, 3 months ago
Selected Answer: D
D is correct. The front-end/back-end model is widely used in web development. The front end (also called the "client side") is everything a user sees and interacts with in a browser; the back end (also called the "server side") processes and stores data and ensures everything on the client side works correctly. To further reduce latency, modern web architectures use client-side processes and move away from doing everything on the server side: the server provides raw code that implements some application logic, and the browser renders the web page into its final form locally. This allows dynamic web pages, where the view changes based on user input and events (for example, hovering a mouse over a thumbnail brings up a full-sized image) without any interaction with a server, resulting in a much better user experience. The load balancing itself can be done locally using ADCs (Application Delivery Controllers) or geographically using GSLB (Global Server Load Balancing) with DNS queries or CDN caches.
upvoted 3 times
...
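The GSLB/DNS pattern mentioned above ends with a purely local decision: the client receives several addresses for one name and picks one itself. A toy sketch in Python, with hard-coded addresses standing in for a real DNS answer (the names and addresses are illustrative assumptions, not from any real deployment):

```python
import random

def pick_address(dns_answers, rng=random):
    """Client-side choice among the addresses a DNS query returned.
    GSLB/CDN systems shape the answer list; the final pick is local."""
    if not dns_answers:
        raise ValueError("empty DNS answer")
    return rng.choice(dns_answers)

# Hypothetical answer set for one service name (TEST-NET addresses).
answers = ["203.0.113.10", "203.0.113.11", "198.51.100.7"]
rng = random.Random(42)  # seeded only to make the demo repeatable
print(pick_address(answers, rng))
```

A real resolver would populate `answers` from the DNS response; the point is that the selection happens in the client, not in a central balancer.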
dhie
2 years, 2 months ago
Distributed Load Balancing: in distributed load balancing there are no central load balancers; each client that requires a service uses it via a locally installed reverse proxy. The reverse proxy takes care of the load balancing, so it is client-side load balancing. Every time a client makes a request, the reverse proxy distributes it to the attached resources based on the load-balancing strategy.
upvoted 1 times
...
Npirate
2 years, 3 months ago
I think all these questions need much more context. Are the systems virtualized? Are they using overlay technologies? All of that impacts the design decisions. But from the information provided here, if these are bare-metal systems I would say either B or D, with D the most likely: as stated, distributed load balancing is easiest achieved with applications like NGINX. F5 can also do this with its GTM and LTM solutions, where a GTM distributes load across regions and data centers and the LTM across multiple compute nodes, but that is harder to achieve with NGINX nodes.
upvoted 1 times
...
blezzzo
3 years, 2 months ago
I'll go with D too. Explanation: the term horizontally scalable refers to systems whose capacity and throughput are increased by adding additional nodes, in distinction to vertically scaled systems, where adding capacity and throughput generally involves replacing smaller nodes with larger, more powerful ones. Nevertheless, horizontal scalability brings a new problem: if you have 10 services doing the same job, which one do you connect to? Put simply, how do you distribute the traffic? The solution, of course, is to distribute incoming traffic across the pool of resources or servers by load balancing. In distributed load balancing there are no central load balancers; each client that requires a service uses it via a locally installed reverse proxy. The reverse proxy is always kept up to date with existing services, meaning that when a new service is provisioned, the reverse proxy's configuration is updated. The reverse proxy takes care of the load balancing, so it is client-side load balancing: every time a client makes a request, the reverse proxy distributes it to the attached resources based on the load-balancing strategy.
upvoted 4 times
anonymousch
3 years ago
Well, with horizontal scaling you can have multiple solutions:
  - a client-side load balancer, as you wrote above
  - a service-side load balancer, such as Kubernetes MetalLB
  - a firewall-side or network-side load balancer, such as F5
upvoted 3 times
Nizrim
2 years, 8 months ago
All the provided types are examples of horizontally scaling a system, but the question asks specifically about "distributed" load balancing, so D is the right call here.
upvoted 3 times
...
...
...
FR99
3 years, 2 months ago
I go with 'D. client-side/local load balancing'. See https://enginyoyen.com/distributed-load-balancing/, section 'Distributed Load Balancing'.
upvoted 3 times
...
rollercoaster785
3 years, 3 months ago
Isn't C the correct answer?
upvoted 1 times
rollercoaster785
3 years, 3 months ago
Sorry, let me withdraw my comment.
upvoted 1 times
...
...
Community vote distribution: A (35%), C (25%), B (20%), Other
