Exam AWS DevOps Engineer Professional topic 1 question 73 discussion


An Engineering team manages a Node.js e-commerce application. The current environment consists of the following components:
✑ Amazon S3 buckets for storing content
✑ Amazon EC2 for the front-end web servers
✑ AWS Lambda for image processing
✑ Amazon DynamoDB for storing session-related data
The team expects a significant increase in traffic to the site. The application should handle the additional load without interruption. The team ran initial tests by adding new servers to the EC2 front-end to handle the larger load, but the instances took up to 20 minutes to become fully configured. The team wants to reduce this configuration time.
What changes will the Engineering team need to implement to make the solution the MOST resilient and highly available while meeting the expected increase in demand?

  • A. Use AWS OpsWorks to automatically configure each new EC2 instance as it is launched. Configure the EC2 instances by using an Auto Scaling group behind an Application Load Balancer across multiple Availability Zones. Implement Amazon DynamoDB Auto Scaling. Use Amazon Route 53 to point the application DNS record to the Application Load Balancer.
  • B. Deploy a fleet of EC2 instances, doubling the current capacity, and place them behind an Application Load Balancer. Increase the Amazon DynamoDB read and write capacity units. Add an alias record that contains the Application Load Balancer endpoint to the existing Amazon Route 53 DNS record that points to the application.
  • C. Configure Amazon CloudFront and have its origin point to Amazon S3 to host the web application. Implement Amazon DynamoDB Auto Scaling. Use Amazon Route 53 to point the application DNS record to the CloudFront DNS name.
  • D. Use AWS Elastic Beanstalk with a custom AMI including all web components. Deploy the platform by using an Auto Scaling group behind an Application Load Balancer across multiple Availability Zones. Implement Amazon DynamoDB Auto Scaling. Use Amazon Route 53 to point the application DNS record to the Elastic Beanstalk load balancer.
Suggested Answer: D
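The mechanics behind option D can be sketched with the AWS CLI. This is a hedged illustration, not the exam's required steps: the environment name and AMI ID are placeholders, and it assumes an Elastic Beanstalk environment already exists. Elastic Beanstalk does expose the launch AMI through the `aws:autoscaling:launchconfiguration` namespace's `ImageId` option, and it manages the Auto Scaling group and Application Load Balancer across Availability Zones itself:

```shell
# Point an existing Elastic Beanstalk environment at a pre-baked custom AMI
# (environment name and AMI ID are placeholders). New instances launched by
# the Beanstalk-managed Auto Scaling group will boot from this image instead
# of running a long bootstrap.
aws elasticbeanstalk update-environment \
    --environment-name ecommerce-prod \
    --option-settings \
    Namespace=aws:autoscaling:launchconfiguration,OptionName=ImageId,Value=ami-0123456789abcdef0
```

The same option can be set declaratively in a saved configuration or `.ebextensions` file instead of via the CLI.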

Comments

Chosen Answer:
This is a voting comment (?). It is better to Upvote an existing comment if you don't have anything to add.
Switch to a voting comment New
agomes
Highly Voted 3 years, 7 months ago
In this case I would choose D, because the application runs on Node.js. I don't see how S3 would help in this case.
upvoted 17 times
...
YashBindlish
Highly Voted 3 years, 7 months ago
Node.js is the key. The correct answer is D.
upvoted 7 times
a112883
3 years, 6 months ago
Node.js is key because we want to bake those Node.js configurations into the AMI.
upvoted 2 times
...
...
easytoo
Most Recent 1 year, 11 months ago
In terms of reducing configuration time, option A has the potential to be faster. This is because AWS OpsWorks can automatically configure each new EC2 instance as it is launched. OpsWorks provides a framework for defining and managing applications using Chef or Puppet, allowing for automated configuration and deployment. On the other hand, option D involves using AWS Elastic Beanstalk with a custom AMI. While Elastic Beanstalk simplifies the deployment and management of applications, it may still require some initial configuration and setup time for the custom AMI. This could potentially result in longer configuration times compared to using OpsWorks for automated configuration.
upvoted 1 times
...
bihani
2 years, 3 months ago
Selected Answer: D
D is correct
upvoted 1 times
...
Bulti
2 years, 3 months ago
Pre-baked EC2 images along with Node.js point to Elastic Beanstalk.
upvoted 1 times
Bulti
2 years, 3 months ago
Answer is D
upvoted 1 times
...
...
oopsy
3 years, 6 months ago
Go D -1
upvoted 1 times
...
Dantehilary
3 years, 6 months ago
I think the answer is A. D is wrong because there is no such thing as an "Elastic Beanstalk load balancer".
upvoted 1 times
siejas
3 years, 6 months ago
They refer to the load balancer created by Beanstalk.
upvoted 4 times
...
...
poylan
3 years, 6 months ago
i'll go with D
upvoted 1 times
...
WhyIronMan
3 years, 6 months ago
I'll go with D. Creating an AMI is always a good practice when instances take up to 20 minutes to become fully configured...
upvoted 6 times
...
glam
3 years, 6 months ago
D. Use AWS Elastic Beanstalk with a custom AMI including all web components. Deploy the platform by using an Auto Scaling group behind an Application Load Balancer across multiple Availability Zones. Implement Amazon DynamoDB Auto Scaling. Use Amazon Route 53 to point the application DNS record to the Elastic Beanstalk load balancer.
upvoted 1 times
...
fogunfunminiyi
3 years, 6 months ago
D is the answer.
upvoted 1 times
...
fogunfunminiyi
3 years, 6 months ago
When your instance takes a long time to boot, it usually means configuration work is happening at launch, probably through user data: fetching application artifacts from the internet, installing them, and so on. That works, but it is not the best way to configure an instance. The better way is to create a custom image (AMI) with the applications preconfigured. When you launch from the custom image, it already contains the applications, which reduces boot time. For instance, some AWS-provided AMIs come preconfigured with the CloudWatch Logs agent, so you don't need to install it after launch; otherwise you would have to install it manually (using sudo yum install after SSHing into the launched EC2 instance, or through user data) while the instance boots.
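The tradeoff above can be made concrete. In the slow path, a user-data script reinstalls everything on every launch; in the fast path, that work is done once on a reference instance and the result is snapshotted into an AMI. This is a hypothetical sketch: the instance ID, bucket, and names are placeholders:

```shell
# Slow path: a user-data script like this runs on EVERY new instance at boot,
# downloading and installing everything from scratch (minutes per launch):
#   #!/bin/bash
#   yum install -y nodejs
#   aws s3 cp s3://example-artifacts/app.tar.gz /opt/app/

# Fast path: perform that setup once on a reference instance, then bake it.
# Instances launched from the resulting AMI boot with everything in place.
aws ec2 create-image \
    --instance-id i-0123456789abcdef0 \
    --name "ecommerce-node-web-v1" \
    --description "Node.js web tier, fully configured"
```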
upvoted 1 times
...
[Removed]
3 years, 6 months ago
Yup, a fully baked EC2 image is the only option here (D). A would not speed things up; if anything, it would slow things down, as Chef would have to push that configuration onto the stock AMI.
upvoted 2 times
...
Coffeinerd
3 years, 6 months ago
Right answer: D. The custom AMI is key here; it will dramatically reduce provisioning time (the main issue), and multi-AZ plus the ALB are included for resiliency and high availability. Wrong: A - OpsWorks could do it, but "automatically configure each new EC2 instance as it is launched" would keep the slow-start issue. B - could work, but it does not mention multi-AZ and relies on manual capacity changes instead of auto scaling. C - you still need EC2, because Node.js runs server-side.
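The "DynamoDB Auto Scaling" piece mentioned in options A, C, and D is implemented through Application Auto Scaling. A hedged CLI sketch, with the table name, capacity limits, and target value as placeholder assumptions:

```shell
# Register the session table's write capacity as a scalable target
# (table name and min/max capacities are placeholders).
aws application-autoscaling register-scalable-target \
    --service-namespace dynamodb \
    --resource-id "table/Sessions" \
    --scalable-dimension "dynamodb:table:WriteCapacityUnits" \
    --min-capacity 5 \
    --max-capacity 500

# Attach a target-tracking policy that keeps consumed write capacity
# near 70% utilization, scaling provisioned capacity up and down as needed.
aws application-autoscaling put-scaling-policy \
    --service-namespace dynamodb \
    --resource-id "table/Sessions" \
    --scalable-dimension "dynamodb:table:WriteCapacityUnits" \
    --policy-name "sessions-write-scaling" \
    --policy-type TargetTrackingScaling \
    --target-tracking-scaling-policy-configuration \
    '{"TargetValue":70.0,"PredefinedMetricSpecification":{"PredefinedMetricType":"DynamoDBWriteCapacityUtilization"}}'
```

A matching pair of calls for `dynamodb:table:ReadCapacityUnits` would cover the read side.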
upvoted 4 times
...
jackdryan
3 years, 6 months ago
I'll go with D
upvoted 3 times
...
Dr_Wells
3 years, 6 months ago
It's option D.
upvoted 1 times
...
ChauPhan
3 years, 6 months ago
The answer should be D. Only with a custom AMI can we reduce the configuration time, because a custom AMI means all applications and configuration were installed and built ahead of time into the image, so nothing needs to be configured after launch.
upvoted 1 times
...
Community vote distribution: A (35%), C (25%), B (20%), Other
