
Exam 350-901 topic 1 question 315 discussion

Actual exam question from Cisco's 350-901
Question #: 315
Topic #: 1

A developer is designing a modern, distributed microservice enterprise application. The application will integrate with other systems and is intended for a large deployment, so control of API calls is necessary. What is the best practice to reduce application response latency and protect the application from excessive use?

  • A. Implement rate limiting on the client side.
  • B. Do not enforce any rate limiting.
  • C. Implement rate limiting on the client and server sides.
  • D. Implement rate limiting on the server side.
Suggested Answer: D

Comments

razvan999
1 year, 1 month ago
I would say C, since the developer only has access to the app, which is distributed. Even though he cannot control how external users use it, he can control rate limiting from a client/server perspective between app components.
upvoted 1 times
...
doble_h
1 year, 3 months ago
Selected Answer: C
By implementing rate limiting on both sides, you strike a balance between controlling the application's behavior and protecting the application from external issues, ultimately reducing response latency and enhancing overall system reliability.
upvoted 1 times
...
kirrim
1 year, 3 months ago
We know there will need to be rate limiting on the server side. The main question here (and why so many of us are trying to pick between C and D) is whether rate limiting needs to be done on the client side as well.

In the type of app being described, with various microservices interacting, it would be advantageous for the client microservice not to encounter unexpected behavior or errors from the server microservice it is attempting to access. If the server microservice has implemented rate limiting (or, even worse, has NOT implemented it), the client app might have to sleep for a bit, sit with a spinner, or handle errors. Or, in the case of a massively scaled malicious DDoS on a publicly facing service, even handling all of the incoming API requests and responding with a 429 to every one can consume resources (and run up charges if it's a cloud-based API service).

It's easier for everybody if both sides (server and client) have rate limiting built into their interactions with each other. (A minimal client-side sketch follows this comment.)
upvoted 3 times
...
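To make kirrim's point concrete, here is a minimal sketch of a client that backs off when the server answers 429 Too Many Requests. It assumes Python with the requests library; the endpoint URL, retry count, and delays are illustrative, and only the seconds form of Retry-After is handled:

```python
import time

import requests

API_URL = "https://api.example.com/items"  # hypothetical endpoint

def get_with_backoff(url, max_retries=5):
    """Call an API, backing off whenever the server answers 429."""
    delay = 1.0
    for _ in range(max_retries):
        resp = requests.get(url)
        if resp.status_code != 429:
            resp.raise_for_status()  # surface other errors immediately
            return resp
        # Honor the server's Retry-After header (seconds form) if present,
        # otherwise fall back to exponential backoff.
        retry_after = resp.headers.get("Retry-After")
        time.sleep(float(retry_after) if retry_after else delay)
        delay *= 2
    raise RuntimeError("gave up after repeated 429 responses")
```

Pairing a well-behaved client like this with server-side enforcement gives the "both sides" behavior that the C voters are describing.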
kati2k22cz
1 year, 3 months ago
Selected Answer: C
I'll go with C: rate limiting on both sides (client and server).
upvoted 1 times
...
Ietsmeteennaam
1 year, 7 months ago
Seeing this on page 11 of the DEVCOR study guide: "The front end (also called the 'client side') is everything a user sees and interacts with in a browser. The most common front-end languages are HTML, CSS, and JavaScript, as well as various frameworks using these languages (React, Angular, and so on). The back end (also called the 'server side') of a website processes and stores data and ensures everything on the client side works correctly. It is the part of the website that does not come in direct contact with users. Back-end development languages are Java, C++, Python, PHP, and so on."

So if the client side is the front end, we would want to have some rate limiting there. Then the question becomes: should we rate limit API calls between the front end and back end? While researching this, I stumbled upon this URL: https://medium.com/@jonathansychan/rate-limiting-and-throttling-for-a-more-efficient-backend-7feb1a76acc8

Based on all this information, I would go with answer C.
upvoted 1 times
...
whipmuffin
1 year, 8 months ago
Selected Answer: D
I agree that the best answer is **D. Implement rate limiting on the server side**. Rate limiting is a technique to control the number of requests that a client can make to a server in a given time period. It can help reduce application response latency and protect the application from excessive use by preventing **overload**, **abuse**, or **denial-of-service attacks**.

Rate limiting can be implemented on the client side, the server side, or both. However, rate limiting on the client side alone is not very effective or reliable, because it depends on the client's implementation and honesty. A malicious or faulty client can bypass or ignore the rate limit and send too many requests to the server. Rate limiting on the server side is more secure and consistent, because it enforces the limit at the service itself and rejects any requests that exceed it. (A minimal server-side sketch follows this comment.)
upvoted 2 times
...
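To illustrate whipmuffin's server-side argument, here is a minimal sketch of per-client enforcement with a token bucket. Flask, the route, the limits, and the use of the client IP as an identifier are illustrative assumptions, not anything specified by the exam or a particular product:

```python
import time
from collections import defaultdict

from flask import Flask, jsonify, request

app = Flask(__name__)

RATE = 5    # tokens refilled per second, per client (illustrative)
BURST = 10  # maximum bucket size, i.e. allowed burst (illustrative)

# One bucket per client identifier; each bucket starts full.
buckets = defaultdict(lambda: {"tokens": BURST, "ts": time.monotonic()})

def allow(client_id):
    """Token bucket: refill based on elapsed time, spend one token per request."""
    bucket = buckets[client_id]
    now = time.monotonic()
    bucket["tokens"] = min(BURST, bucket["tokens"] + (now - bucket["ts"]) * RATE)
    bucket["ts"] = now
    if bucket["tokens"] >= 1:
        bucket["tokens"] -= 1
        return True
    return False

@app.route("/api/items")
def items():
    # Reject over-limit clients up front, before doing any real work.
    if not allow(request.remote_addr):
        return jsonify(error="rate limit exceeded"), 429
    return jsonify(items=[])
```

This in-memory version is a single-process demo; a real large deployment would keep the buckets in a shared store (for example Redis) or push the limit into an API gateway, so that every instance enforces the same policy.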
Fedesarucho
1 year, 9 months ago
I think it is C.
upvoted 3 times
...
Community vote distribution: A (35%), C (25%), B (20%), Other