Exam AWS Certified Solutions Architect - Professional SAP-C02 topic 1 question 33 discussion

A company is hosting a monolithic REST-based API for a mobile app on five Amazon EC2 instances in public subnets of a VPC. Mobile clients connect to the API by using a domain name that is hosted on Amazon Route 53. The company has created a Route 53 multivalue answer routing policy with the IP addresses of all the EC2 instances. Recently, the app has been overwhelmed by large and sudden increases to traffic. The app has not been able to keep up with the traffic.
A solutions architect needs to implement a solution so that the app can handle the new and varying load.
Which solution will meet these requirements with the LEAST operational overhead?

  • A. Separate the API into individual AWS Lambda functions. Configure an Amazon API Gateway REST API with Lambda integration for the backend. Update the Route 53 record to point to the API Gateway API.
  • B. Containerize the API logic. Create an Amazon Elastic Kubernetes Service (Amazon EKS) cluster. Run the containers in the cluster by using Amazon EC2. Create a Kubernetes ingress. Update the Route 53 record to point to the Kubernetes ingress.
  • C. Create an Auto Scaling group. Place all the EC2 instances in the Auto Scaling group. Configure the Auto Scaling group to perform scaling actions that are based on CPU utilization. Create an AWS Lambda function that reacts to Auto Scaling group changes and updates the Route 53 record.
  • D. Create an Application Load Balancer (ALB) in front of the API. Move the EC2 instances to private subnets in the VPC. Add the EC2 instances as targets for the ALB. Update the Route 53 record to point to the ALB.
Suggested Answer: D
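For context, a minimal boto3 sketch of what option D could look like (all resource IDs, subnet and security group names, the domain, and the health-check path are hypothetical placeholders, not values from the question): create a target group for the existing instances, put an internet-facing ALB in front of them, and repoint the Route 53 record at the ALB with an alias record.

    import boto3

    elbv2 = boto3.client("elbv2")
    route53 = boto3.client("route53")

    # Target group for the existing API instances (instance IDs are placeholders).
    tg = elbv2.create_target_group(
        Name="api-targets",
        Protocol="HTTP",
        Port=80,
        VpcId="vpc-0123456789abcdef0",
        TargetType="instance",
        HealthCheckPath="/health",
    )["TargetGroups"][0]

    elbv2.register_targets(
        TargetGroupArn=tg["TargetGroupArn"],
        Targets=[{"Id": i} for i in ["i-aaaa1111", "i-bbbb2222", "i-cccc3333",
                                     "i-dddd4444", "i-eeee5555"]],
    )

    # Internet-facing ALB in the public subnets; the instances themselves can then
    # be moved to private subnets, since only the ALB needs to be reachable.
    alb = elbv2.create_load_balancer(
        Name="api-alb",
        Subnets=["subnet-public-a", "subnet-public-b"],
        SecurityGroups=["sg-alb-placeholder"],
        Scheme="internet-facing",
        Type="application",
    )["LoadBalancers"][0]

    elbv2.create_listener(
        LoadBalancerArn=alb["LoadBalancerArn"],
        Protocol="HTTP",
        Port=80,
        DefaultActions=[{"Type": "forward", "TargetGroupArn": tg["TargetGroupArn"]}],
    )

    # Point the domain at the ALB with an alias record (the old per-instance
    # multivalue A records would then be deleted).
    route53.change_resource_record_sets(
        HostedZoneId="Z_EXAMPLE_ZONE",
        ChangeBatch={"Changes": [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": "api.example.com",
                "Type": "A",
                "AliasTarget": {
                    "HostedZoneId": alb["CanonicalHostedZoneId"],
                    "DNSName": alb["DNSName"],
                    "EvaluateTargetHealth": True,
                },
            },
        }]},
    )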

Comments

EricZhang
Highly Voted 2 years, 6 months ago
Selected Answer: A
Serverless requires the least operational effort.
upvoted 37 times
dqwsmwwvtgxwkvgcvc
1 year, 10 months ago
I guess multivalue answer routing in Route 53 is not proper load balancing, so replacing it with an ALB would properly balance the load (with minimal effort).
upvoted 4 times
lkyixoayffasdrlaqd
2 years, 4 months ago
How can this be the answer? It says: "Separate the API into individual AWS Lambda functions." Can you calculate the operational overhead to do that?
upvoted 21 times
scuzzy2010
2 years, 2 months ago
Separating would be development overhead, but once done, the operational overhead (operational = ongoing, day-to-day) will be the least.
upvoted 13 times
24Gel
1 year, 3 months ago
Disagree. The ASG in Option D, once set up, is not much operational overhead either.
upvoted 1 times
24Gel
1 year, 3 months ago
I mean Option C, not D.
upvoted 1 times
24Gel
1 year, 3 months ago
never mind, A is simpler than C
upvoted 2 times
Jay_2pt0_1
2 years, 1 month ago
From any type of real-world perspective, this just can't be the answer IMHO. Surely AWS takes "real world" into account.
upvoted 1 times
jooncco
Highly Voted 2 years, 5 months ago
Selected Answer: C
Suppose there are 100 REST endpoints (since this application is monolithic, that's quite common). Are you still going to copy and paste all of that API code into Lambda functions? What if the business logic changes? This is not MINIMAL. I would go with C.
upvoted 33 times
altonh
5 months, 3 weeks ago
Option C means your Route 53 records are playing catch-up with your ASG. What happens when you scale down? Your clients will still have the terminated EC2 instance in their DNS cache until the TTL expires.
upvoted 1 times
chathur
2 years, 1 month ago
"Create an AWS Lambda function that reacts to Auto Scaling group changes and updates the Route 53 record. " This does not make any sense, why do you need to change R53 records using a Lambda?
upvoted 1 times
Vesla
1 year, 10 months ago
Because if you have 4 EC2 instances in your ASG, you need 4 records for the domain name. If the ASG scales up to 6, for example, you need to add 2 more records.
upvoted 4 times
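For illustration, a hypothetical sketch of that option C Lambda (all names and IDs are placeholders; it assumes something like an EventBridge rule invoking it on EC2 Auto Scaling launch/terminate events): it rebuilds one multivalue A record per in-service instance.

    import boto3

    # Hypothetical option C helper: rebuild the Route 53 multivalue A records to
    # match the ASG's current in-service instances whenever the group changes.
    ASG_NAME = "api-asg"                # placeholder
    HOSTED_ZONE_ID = "Z_EXAMPLE_ZONE"   # placeholder
    RECORD_NAME = "api.example.com"     # placeholder

    asg = boto3.client("autoscaling")
    ec2 = boto3.client("ec2")
    route53 = boto3.client("route53")

    def handler(event, context):
        group = asg.describe_auto_scaling_groups(
            AutoScalingGroupNames=[ASG_NAME]
        )["AutoScalingGroups"][0]
        instance_ids = [i["InstanceId"] for i in group["Instances"]
                        if i["LifecycleState"] == "InService"]

        # The instances sit in public subnets, so each has a public IP to publish.
        reservations = ec2.describe_instances(InstanceIds=instance_ids)["Reservations"]
        ips = [inst["PublicIpAddress"] for r in reservations for inst in r["Instances"]]

        # One multivalue answer record per instance, keyed by SetIdentifier.
        # This sketch only upserts; records for terminated instances would still
        # have to be deleted separately, which is part of what makes option C messy.
        changes = [{
            "Action": "UPSERT",
            "ResourceRecordSet": {
                "Name": RECORD_NAME,
                "Type": "A",
                "TTL": 60,
                "SetIdentifier": ip,
                "MultiValueAnswer": True,
                "ResourceRecords": [{"Value": ip}],
            },
        } for ip in ips]
        route53.change_resource_record_sets(
            HostedZoneId=HOSTED_ZONE_ID,
            ChangeBatch={"Changes": changes},
        )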
liquen14
1 year, 4 months ago
Too contrived in my opinion, and what about DNS caches on the clients? You could get stuck for a while with the previous list of servers. I think it has to be A (though it would involve considerable development effort) or D, which is extremely easy to implement but at the same time sounds a little fishy because they don't mention anything about an ASG or scaling. I hate this kind of question, and I don't understand what useful insight it provides unless they want us to become masters of the art of dealing with ambiguity.
upvoted 3 times
cnethers
1 year ago
Agree that D does not scale to meet demand; it's just a better way to load balance, which Route 53 was already doing, so the scaling issue has not been resolved. Also agree that A requires more dev effort and less ops effort, so I would have to lean toward A... The answer selection is poor IMO.
upvoted 1 times
scuzzy2010
2 years, 4 months ago
It says "a monolithic REST-based API " - hence only 1 API. Initially I thought C, but I'll go with A as it says least operation overhead (not least implementation effort). Lambda has virtually no operation overhead compared to EC2.
upvoted 8 times
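For context, a minimal hypothetical sketch of what one of the option A functions could look like behind an API Gateway REST API with Lambda proxy integration (the path parameter and payload are made up); the day-to-day operational point is that there are no instances to patch or scale.

    import json

    # Hypothetical handler for one endpoint of the split-up API, invoked by
    # API Gateway via Lambda proxy integration. Scaling is handled by the
    # Lambda service, so there are no servers to manage.
    def handler(event, context):
        order_id = (event.get("pathParameters") or {}).get("orderId")
        # ... the business logic for this one endpoint would live here ...
        return {
            "statusCode": 200,
            "headers": {"Content-Type": "application/json"},
            "body": json.dumps({"orderId": order_id, "status": "ok"}),
        }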
aviathor
1 year, 12 months ago
Answer A says "Separate the API into individual AWS Lambda functions." Makes me think there may be many APIs. However, we are looking to minimize operational effort, not development effort...
upvoted 1 times
Jay_2pt0_1
2 years, 2 months ago
A monolithic REST API likely has a gazillion individual endpoints. This refactor would not be a small one.
upvoted 5 times
jainparag1
1 year, 7 months ago
Dealing with business logic changes applies to the existing solution or any other, depending on complexity; if anything, it's easier when the pieces are microservices. You shouldn't hesitate to refactor your application with a one-time effort (dev overhead) to save significant operational overhead day to day. AWS pushes serverless for exactly this reason.
upvoted 1 times
12db8b7
Most Recent 1 week ago
Selected Answer: D
I believe the issues the company is experiencing are directly related to the use of the Route 53 multivalue answer routing policy without a proper load balancer. This approach relies on DNS to distribute traffic, which can result in uneven load distribution across EC2 instances. Since clients may randomly select the first IP returned by DNS, some instances might get overwhelmed while others remain underutilized. In contrast, an Application Load Balancer (ALB), ideally combined with an Auto Scaling group, provides real-time traffic distribution, automatic health checks, and the ability to scale out based on demand, all with minimal operational overhead. For this use case, where the application is monolithic and the main concern is handling unpredictable traffic surges, option D (ALB) is the most reliable and scalable solution.
upvoted 1 times
sergza888
2 weeks, 2 days ago
Selected Answer: D
Lambda refactoring (especially Lambdas that update DNS) or EKS ingress ("Envoy") provisioning adds operational complexity. D is quite simple and easy.
upvoted 1 times
12db8b7
1 week ago
But you still don't improve the ability to scale with only an ALB, so I would go with C.
upvoted 1 times
Monsterpuss
3 weeks, 3 days ago
Selected Answer: C
My preference would be to go for option D as an ALB is a more elegant solution, but without an ASG behind it, there would still be problems. C is less elegant, but has the advantage of an ASG, even if the R53 update mechanism is messy.
upvoted 1 times
Kaps443
3 weeks, 3 days ago
Selected Answer: D
ALB + private EC2 + Route 53 pointing to the ALB: perfect for immediate scaling needs, security, and minimal disruption. A is incorrect because it requires completely re-architecting the monolithic API into functions (high development effort) and is not the least operational overhead initially (it only pays off long term).
upvoted 1 times
senlogan
2 months, 1 week ago
Selected Answer: A
The right answer would be ALB + Auto Scaling. Go with A because the question asks for the least operational overhead, not the least effort.
upvoted 2 times
abdullahelwalid
3 months, 1 week ago
Selected Answer: D
Option A is not the answer because migrating the application to Lambda requires code refactoring, which adds overhead. With option D the architecture remains the same, but we evenly distribute the traffic by adding an ALB and assigning the EC2 instances to a target group, so the load is evenly balanced. Route 53 is updated to point to the ALB.
upvoted 1 times
ParamD
3 months, 2 weeks ago
Selected Answer: C
D: doesn't have auto scaling. B: EKS will add operational overhead. A: adds lots of Lambda functions whose maintenance and management will add operational overhead compared to the current monolithic setup. C is the best fit of the available options: it enables auto scaling and will allow up to 8 nodes from the current 5, and one Lambda function to update Route 53 adds minimal operational overhead. D with Auto Scaling would have allowed minimal operational overhead and more flexibility to scale, though.
upvoted 1 times
soulation
4 months ago
Selected Answer: C
Less operational overhead. Much less development effort.
upvoted 1 times
SaqibTaqi
4 months, 3 weeks ago
Selected Answer: A
Well... I have to say none of the options here complies with least operational overhead; each and every option involves changing the application logic... but for the sake of it, A is the best answer. It cannot be B, as containerizing would not be suitable to use with the IP addresses of the instances. ASG and ELB would not fit here, as the Route 53 records point to the static IP addresses of the instances... so the best answer is A. But again, there's a lot of overhead involved if someone goes on to implement it...
upvoted 1 times
sintesi_suffisso0
5 months, 1 week ago
Selected Answer: D
It can’t be A since we don’t know how much time the API needs to complete
upvoted 2 times
Shanmahi
5 months, 3 weeks ago
Selected Answer: D
While all 4 options can work and the general inclination is to go for "serverless", the least operational effort is certainly adding an ALB to distribute the incoming traffic across the EC2 instances. In a real-world scenario, I would ideally place Route 53 -> ALB -> EC2 instances in an ASG. However, among the given choices, D with an ALB meets the requirement well from an operational-complexity point of view.
upvoted 3 times
jerry00218
6 months ago
Selected Answer: A
Serverless is the least operational effort
upvoted 1 times
thanhpolimi
6 months ago
Selected Answer: D
D provides a balanced solution to handle increased and varying traffic loads while minimizing the complexity and maintenance overhead.
upvoted 2 times
grumpysloth
6 months, 2 weeks ago
Selected Answer: C
The operational overhead to fix the scalability issue is minimal if we keep the EC2 instances as they are and use an ASG. We know nothing about the code complexity or response time; it might be hours, so Lambda is not an option IMHO. D is not an option because it doesn't include auto scaling, so it won't solve the issue.
upvoted 3 times
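For illustration, a hypothetical target-tracking scaling policy on the Auto Scaling group (the group name and target value are placeholders, and target tracking is just one way to implement the CPU-based scaling that option C describes): it keeps average CPU near the target and adds or removes instances as the load varies.

    import boto3

    # Hypothetical CPU-based target-tracking policy for the option C ASG.
    autoscaling = boto3.client("autoscaling")

    autoscaling.put_scaling_policy(
        AutoScalingGroupName="api-asg",          # placeholder name
        PolicyName="cpu-target-tracking",
        PolicyType="TargetTrackingScaling",
        TargetTrackingConfiguration={
            "PredefinedMetricSpecification": {
                "PredefinedMetricType": "ASGAverageCPUUtilization",
            },
            "TargetValue": 50.0,                 # placeholder target
        },
    )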
JOJO9
6 months, 3 weeks ago
Selected Answer: D
This approach leverages AWS managed services like the Application Load Balancer (ALB) and Auto Scaling groups, minimizing the operational overhead required to handle varying traffic loads. The ALB automatically distributes incoming traffic across the EC2 instances, while the instances can be placed in private subnets for better security. Additionally, the Auto Scaling group can be configured to automatically scale the EC2 instances based on metrics like CPU utilization, eliminating the need for manual scaling. By using these managed services, you can offload tasks like load balancing, health checks, and auto-scaling to AWS, reducing the operational burden on your team. Updating the Route 53 record to point to the ALB's DNS name ensures that traffic is seamlessly routed to the backend instances without the need for manual DNS updates or additional components like Lambda functions.
upvoted 2 times
Community vote distribution: A (35%), C (25%), B (20%), Other