Exam AWS Certified Solutions Architect - Professional topic 1 question 511 discussion

A company has an application that generates a weather forecast that is updated every 15 minutes with an output resolution of 1 billion unique positions, each approximately 20 bytes in size (20 Gigabytes per forecast). Every hour, the forecast data is globally accessed approximately 5 million times (1,400 requests per second), and up to 10 times more during weather events. The forecast data is overwritten every update. Users of the current weather forecast application expect responses to queries to be returned in less than two seconds for each request.
Which design meets the required request rate and response time?

  • A. Store forecast locations in an Amazon ES cluster. Use an Amazon CloudFront distribution targeting an Amazon API Gateway endpoint with AWS Lambda functions responding to queries as the origin. Enable API caching on the API Gateway stage with a cache-control timeout set for 15 minutes.
  • B. Store forecast locations in an Amazon EFS volume. Create an Amazon CloudFront distribution that targets an Elastic Load Balancing group of an Auto Scaling fleet of Amazon EC2 instances that have mounted the Amazon EFS volume. Set the cache-control timeout for 15 minutes in the CloudFront distribution.
  • C. Store forecast locations in an Amazon ES cluster. Use an Amazon CloudFront distribution targeting an API Gateway endpoint with AWS Lambda functions responding to queries as the origin. Create an Amazon Lambda@Edge function that caches the data locally at edge locations for 15 minutes.
  • D. Store forecast locations in Amazon S3 as individual objects. Create an Amazon CloudFront distribution targeting an Elastic Load Balancing group of an Auto Scaling fleet of EC2 instances, querying the origin of the S3 object. Set the cache-control timeout for 15 minutes in the CloudFront distribution.
Suggested Answer: D
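Before weighing the options, the load figures stated in the question can be sanity-checked with quick arithmetic (a sketch; the 900-second update window and 10x surge multiplier come straight from the scenario):

```python
# Sanity-check the load figures stated in the question.
positions = 1_000_000_000          # unique forecast positions per update
bytes_per_position = 20            # approximate record size

forecast_gb = positions * bytes_per_position / 1_000_000_000
print(f"Forecast size per update: {forecast_gb:.0f} GB")      # 20 GB

requests_per_hour = 5_000_000
baseline_rps = requests_per_hour / 3600
print(f"Baseline request rate: {baseline_rps:.0f} req/s")     # ~1,389 req/s

surge_rps = baseline_rps * 10      # up to 10x during weather events
print(f"Surge request rate: {surge_rps:.0f} req/s")           # ~13,889 req/s
```

These numbers match the ~1,400 req/s baseline given in the question, and show the design must absorb roughly 14,000 req/s during weather events.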

Comments

donathon
Highly Voted 3 years, 8 months ago
I have new insight after doing this question for the 2nd time: B.
A: Cache control should be done at CloudFront, not the API stage.
B: EFS has better performance than S3. The data size is only 20 GB, so this seems suitable.
C: Lambda@Edge does not cache data. Lambda@Edge is a feature of Amazon CloudFront that lets you run code closer to users of your application, which improves performance and reduces latency. With Lambda@Edge, you don't have to provision or manage infrastructure in multiple locations around the world.
D: Why have the EC2 instances in the middle when CloudFront can set S3 as the origin?
upvoted 18 times
Frank1
3 years, 8 months ago
Cache control should be done at API gateway level. https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-caching.html
upvoted 3 times
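For reference, the API Gateway caching described in the linked doc is enabled per stage. A minimal AWS CLI sketch (the API ID `abc123` and stage name `prod` are placeholder values):

```shell
# Enable the stage cache (0.5 GB cluster) and set a 900-second (15-minute) TTL
# on all methods. abc123 and prod are hypothetical values.
aws apigateway update-stage \
  --rest-api-id abc123 \
  --stage-name prod \
  --patch-operations \
    op=replace,path=/cacheClusterEnabled,value=true \
    op=replace,path=/cacheClusterSize,value=0.5 \
    op=replace,path='/*/*/caching/ttlInSeconds',value=900
```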
...
JAWS1600
3 years, 8 months ago
A is a good option. API caching helps reduce calls made to the origin. https://medium.com/@bhargavshah2011/api-gateway-caching-3f86034ca491. I just don't support B because it has EC2. Furthermore, with the number of locations we have (1 billion), we need ES. Straight S3 won't do a good job with a 15-minute TTL.
upvoted 7 times
...
SD13
3 years, 7 months ago
The correct option is B. A is wrong since Lambda can scale up to 3,000 per second depending on the region, and C is wrong since Lambda@Edge can scale up to 10,000/sec; both are less than the 15,000 requests/sec expected spike.
upvoted 3 times
...
aws_arn_name
3 years, 7 months ago
I don't understand B. If EFS is mounted on the EC2 instances, then what is the origin for CloudFront?
upvoted 1 times
...
...
donathon
Highly Voted 3 years, 8 months ago
D. Amazon EC2, Elastic Load Balancing, Amazon S3 buckets configured as website endpoints, or your own web server (HTTP): these are the only origins that you can define for CloudFront. EFS also has lower limits than S3, which makes it less suitable for this case, which may see 14k requests per second.

You can control how long your files stay in a CloudFront cache before CloudFront forwards another request to your origin. Reducing the duration allows you to serve dynamic content. Increasing the duration means your users get better performance because your files are more likely to be served directly from the edge cache. A longer duration also reduces the load on your origin. To change the cache duration for an individual file, you can configure your origin to add a Cache-Control max-age or Cache-Control s-maxage directive, or an Expires header field to the file.
upvoted 14 times
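As the comment above notes, the origin can attach a Cache-Control header to each object; with S3 as the origin, this is set at upload time. A sketch (the bucket and object names are hypothetical):

```shell
# Upload a forecast object with a 15-minute (900 s) cache lifetime.
# CloudFront honors this header when deciding how long to cache the object.
aws s3 cp forecast.bin s3://example-forecast-bucket/forecast.bin \
  --cache-control "max-age=900"
```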
sarah1
3 years, 8 months ago
CloudFront can target API Gateway (and most other DNS origins): https://aws.amazon.com/premiumsupport/knowledge-center/api-gateway-cloudfront-distribution/
upvoted 1 times
...
tiana528
3 years, 6 months ago
Not D. D says `Store forecast locations in Amazon S3 as individual objects`, while the question describes a 15-minute forecast with a resolution of 1 billion distinct locations. Uploading so many small objects to S3 every 15 minutes seems very inefficient. EFS is much more efficient.
upvoted 1 times
...
...
TravelKo
Most Recent 1 year, 9 months ago
You need to provide search capability. It should be C.
upvoted 1 times
...
Veres
1 year, 9 months ago
Selected Answer: D
The answer should be D, according to the AWS case study: the platform ingests information from more than 100 different sources and generates close to one-half terabyte (TB) of data each time it updates. The information is mapped and processed into forecast points that can be retrieved in real time, based on queries coming into the system. All data is stored in Amazon Simple Storage Service (Amazon S3), leveraging the efficiency of cloud storage as opposed to an on-premises storage solution and eliminating the hassle of managing a storage platform. https://aws.amazon.com/solutions/case-studies/the-weather-company/
upvoted 1 times
...
SkyZeroZx
1 year, 11 months ago
Selected Answer: B
B: EFS has better performance than S3. The data size is only 20GB so this seems suitable.
upvoted 1 times
...
Heer
2 years, 3 months ago
ChatGPT output (Option B): Amazon EFS (Elastic File System) is a scalable, distributed file system that provides high-performance access to data over the network. By storing the forecast data in an Amazon EFS volume, you can provide access to the same data from multiple EC2 instances in a fleet. This makes it easier to scale the infrastructure horizontally and provide better performance as the load increases.

Additionally, Amazon EFS provides better data persistence compared to Amazon S3, which means that the data will be stored on disk and will persist even if the EC2 instances are terminated. This is important as the forecast data is updated every 15 minutes and needs to be accessible to the users at all times.

Overall, the combination of Amazon EFS and Amazon CloudFront provides better scalability and performance compared to storing the forecast data in Amazon S3 and using CloudFront for distribution.
upvoted 1 times
...
RRRichard
2 years, 5 months ago
It should be A. API Gateway supports endpoint cache enablement. https://docs.aws.amazon.com/apigateway/latest/developerguide/api-gateway-caching.html
upvoted 1 times
...
Vizz5585
2 years, 7 months ago
Selected Answer: B
The answer is B. Lambdas have concurrency limits; S3 has minimum storage limits.
upvoted 1 times
...
tomosabc1
2 years, 8 months ago
Selected Answer: D
I think the answer should be D. The following is what I consolidated after reading the analysis from all the other comments. The question seems to be inspired by the actual case study of The Weather Company; all their data is stored in S3. https://aws.amazon.com/solutions/case-studies/the-weather-company/
A (wrong): "Cache-control" is not the setting available for API Gateway; there it is a cache TTL.
upvoted 3 times
tomosabc1
2 years, 8 months ago
B (wrong): EFS limits: 1 read = 1 operation, 1 write = 5 operations. EFS supports a 35,000 read-operations/s limit only if you are just reading and not writing anything, and a 7,000 write-operations/s limit only if you are just writing and not reading anything. So EFS cannot handle 1 billion file (each 20 bytes) write requests in 15 minutes.
C (wrong): The maximum RPS for API Gateway is 10,000 requests/s; for Lambda it is 1,000 requests/s. They can't meet the requirement of 14,000+ requests/s during weather events. In addition, Lambda@Edge is not used to cache data at edge locations for a specific time. https://aws.amazon.com/blogs/networking-and-content-delivery/lambdaedge-design-best-practices/
upvoted 2 times
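Using the per-operation figures quoted in this comment, the write load works out as follows (a sketch; the 5-operations-per-write and 7,000 write-ops/s figures are the commenter's numbers, not necessarily current official quotas):

```python
# Write load if each of the 1 billion positions were a separate file,
# rewritten every 15-minute (900 s) update cycle.
files = 1_000_000_000
update_window_s = 15 * 60

writes_per_s = files / update_window_s
print(f"{writes_per_s:,.0f} file writes/s")   # ~1,111,111 writes/s

# Per the comment: 1 EFS write = 5 operations, ~7,000 write-ops/s limit.
ops_per_write = 5
required_ops_per_s = writes_per_s * ops_per_write
efs_write_ops_limit = 7_000
print(f"Required: {required_ops_per_s:,.0f} ops/s "
      f"vs limit {efs_write_ops_limit:,} ops/s")
print(f"Over the limit by {required_ops_per_s / efs_write_ops_limit:,.0f}x")
```

Whatever the exact quota, rewriting a billion individual files per cycle exceeds it by orders of magnitude, which is the point the comment is making.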
...
...
linuxmaster007
2 years, 8 months ago
The answer is B. Lambda can only handle 10,000 requests. Also, B is the answer per the dojo tutorials.
upvoted 2 times
...
Sumit_Kumar
2 years, 10 months ago
Amazon S3 automatically scales to high request rates. For example, your application can achieve at least 3,500 PUT/COPY/POST/DELETE or 5,500 GET/HEAD requests per second per partitioned prefix.
upvoted 1 times
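Taking those published per-prefix rates, a quick calculation shows how key prefixing could absorb the surge read load (assuming requests spread evenly across prefixes, and ignoring that CloudFront caching would shield the origin from most of it):

```python
import math

# S3 supports ~5,500 GET/HEAD requests per second per partitioned prefix.
get_limit_per_prefix = 5_500

surge_rps = (5_000_000 / 3600) * 10   # ~13,889 req/s during weather events
prefixes_needed = math.ceil(surge_rps / get_limit_per_prefix)
print(f"Prefixes needed for surge reads: {prefixes_needed}")  # 3
```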
...
cldy
3 years, 6 months ago
B. Store forecast locations in an Amazon EFS volume. Create an Amazon CloudFront distribution that targets an Elastic Load Balancing group of an Auto Scaling fleet of Amazon EC2 instances that have mounted the Amazon EFS volume. Set the cache-control timeout for 15 minutes in the CloudFront distribution.
upvoted 1 times
...
AzureDP900
3 years, 6 months ago
D is right
upvoted 3 times
...
Kopa
3 years, 7 months ago
I'm going for B.
upvoted 1 times
...
StelSen
3 years, 7 months ago
The "cache-control timeout" is possible in CloudFront only; API Gateway uses a time-to-live setting, and Lambda@Edge doesn't have a cache-control timeout option. That leaves only B or D as right. Now, both B and D use EC2/ASG, but from EC2, accessing EFS is faster than accessing S3. https://dzone.com/articles/confused-by-aws-storage-options-s3-ebs-amp-efs-explained. So I chose B.
upvoted 2 times
...
student22
3 years, 7 months ago
B. B vs. D: EFS is better than S3 for querying many small files frequently. A & D: API Gateway will throttle at 10k rpm by default.
upvoted 2 times
...
blackgamer
3 years, 7 months ago
Yes, this is B. Lambda is out because of its concurrency limit and response time; S3 is out because of the update frequency.
upvoted 3 times
...
Community vote distribution: A (35%), C (25%), B (20%), Other