
Exam AWS Certified Solutions Architect - Professional topic 1 question 564 discussion

A mobile gaming application publishes data continuously to Amazon Kinesis Data Streams. An AWS Lambda function processes records from the data stream and writes to an Amazon DynamoDB table. The DynamoDB table has an auto scaling policy enabled with the target utilization set to 70%.
For several minutes at the start and end of each day, there is a spike in traffic that often exceeds five times the normal load. The company notices that the GetRecords.IteratorAgeMilliseconds metric of the Kinesis data stream temporarily spikes to over a minute for several minutes. During these times, the AWS Lambda function writes ProvisionedThroughputExceededException messages to Amazon CloudWatch Logs, and some records are redirected to the dead-letter queue. No exceptions are thrown by the Kinesis producer on the gaming application.
What change should the company make to resolve this issue?

  • A. Use Application Auto Scaling to set a scaling schedule to scale out write capacity on the DynamoDB table during predictable load spikes.
  • B. Use Amazon CloudWatch Events to monitor the dead letter queue and invoke a Lambda function to automatically retry failed records.
  • C. Reduce the DynamoDB table auto scaling policy's target utilization to 20% to more quickly respond to load spikes.
  • D. Increase the number of shards in the Kinesis data stream to increase throughput capacity.
Suggested Answer: A
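
For readers who want to see what the suggested answer looks like in practice: scheduled scaling is configured through the Application Auto Scaling API, separately from the existing target-tracking policy. A minimal boto3 sketch, with a hypothetical table name and spike windows (the real values would come from the observed traffic pattern):

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

RESOURCE_ID = "table/GameScores"  # hypothetical table name
DIMENSION = "dynamodb:table:WriteCapacityUnits"

# Raise the floor of the scaling range shortly before the known
# morning spike, so capacity is provisioned ahead of the load.
autoscaling.put_scheduled_action(
    ServiceNamespace="dynamodb",
    ScheduledActionName="scale-out-before-morning-spike",
    ResourceId=RESOURCE_ID,
    ScalableDimension=DIMENSION,
    Schedule="cron(45 7 * * ? *)",  # hypothetical: spike starts ~08:00 UTC
    ScalableTargetAction={"MinCapacity": 5000, "MaxCapacity": 40000},
)

# Lower the floor again once the spike has passed; target tracking
# then scales the table back down on its own.
autoscaling.put_scheduled_action(
    ServiceNamespace="dynamodb",
    ScheduledActionName="scale-in-after-morning-spike",
    ResourceId=RESOURCE_ID,
    ScalableDimension=DIMENSION,
    Schedule="cron(30 8 * * ? *)",  # hypothetical end of the spike window
    ScalableTargetAction={"MinCapacity": 25, "MaxCapacity": 40000},
)
```

The same pair of actions would be repeated for the end-of-day spike, and for ReadCapacityUnits if reads spike as well.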

Comments

hailiang
Highly Voted 3 years, 8 months ago
It's A. The alerts clearly indicate the problem was caused by a sudden spike in traffic. Auto scaling on DynamoDB didn't work because of the suddenness of the spike, which is why you need to scale out DynamoDB before the traffic spike comes in rather than wait for the actual spike to trigger the scaling.
upvoted 19 times
sam422
3 years, 8 months ago
It makes sense to auto scale DynamoDB when utilisation actually spikes, rather than predicting the spike time.
upvoted 2 times
sarah_t
3 years, 6 months ago
This https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/AutoScaling.html points to C, not A
upvoted 1 times
sarah_t
3 years, 6 months ago
However, after reading this https://aws.amazon.com/about-aws/whats-new/2017/11/scheduled-scaling-now-available-for-application-auto-scaling/ I am probably going with A...
upvoted 2 times
b3llman
Highly Voted 3 years, 8 months ago
Ans: C. Although auto scaling was enabled on DynamoDB, it did not scale quickly enough. DynamoDB's auto scaling relies on CloudWatch alarms, and it takes at least a minute to trigger each scaling action based on the 70% utilisation target. This shows up in the GetRecords.IteratorAgeMilliseconds metric from Kinesis: Lambda was not getting records from Kinesis quickly enough. https://docs.aws.amazon.com/streams/latest/dev/monitoring-with-cloudwatch.html Since the spikes were huge, they hit the provisioned WCU before auto scaling could kick in, resulting in ProvisionedThroughputExceededException from DynamoDB. As a result, it took a few rounds (a few minutes) to scale to the desired utilisation target. https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/AutoScaling.html So, the solution is to lower the utilisation target and let it scale ASAP.
upvoted 9 times
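
For comparison, option C is a one-call change to the existing target-tracking policy. A hedged sketch with a hypothetical table name; note that, as argued in the comments here, this still leaves the alarm-driven scaling delay in place:

```python
import boto3

autoscaling = boto3.client("application-autoscaling")

# Rewrite the existing target-tracking policy with a 20% target
# (hypothetical table name; putting a policy with the same name
# replaces the old one).
autoscaling.put_scaling_policy(
    PolicyName="DynamoDBWriteCapacityUtilization:table/GameScores",
    ServiceNamespace="dynamodb",
    ResourceId="table/GameScores",
    ScalableDimension="dynamodb:table:WriteCapacityUnits",
    PolicyType="TargetTrackingScaling",
    TargetTrackingScalingPolicyConfiguration={
        "TargetValue": 20.0,  # down from the current 70%
        "PredefinedMetricSpecification": {
            "PredefinedMetricType": "DynamoDBWriteCapacityUtilization"
        },
    },
)
```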
sumaju
Most Recent 1 year, 5 months ago
Selected Answer: D
Based on this article, I would go with D. If you have only one consumer application, it is always possible to read at least two times faster than the put rate. That's because each shard supports up to 1,000 records per second for writes, up to a maximum total data write rate of 1 MB per second (including partition keys), while each open shard can support up to 5 transactions per second for reads, up to a maximum total data read rate of 2 MB per second. Note that each read (GetRecords call) gets a batch of records. The size of the data returned by GetRecords varies depending on the utilization of the shard. The maximum size of data that GetRecords can return is 10 MB. If a call returns that limit, subsequent calls made within the next 5 seconds throw ProvisionedThroughputExceededException. https://docs.aws.amazon.com/streams/latest/dev/troubleshooting-consumers.html
upvoted 1 times
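
For completeness, option D (resharding the stream) is also a single API call; a sketch with hypothetical names, shown only to contrast with the DynamoDB-side fixes discussed above:

```python
import boto3

kinesis = boto3.client("kinesis")

# Double the shard count of a hypothetical stream. UNIFORM_SCALING
# splits every shard evenly, raising both ingest and read throughput;
# it does nothing for a throttled DynamoDB table downstream.
kinesis.update_shard_count(
    StreamName="game-events",   # hypothetical stream name
    TargetShardCount=10,        # hypothetical: current count is 5
    ScalingType="UNIFORM_SCALING",
)
```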
Jesuisleon
2 years ago
Selected Answer: A
A is right and D is wrong. At first I chose D, but after reading this link: https://docs.aws.amazon.com/streams/latest/dev/troubleshooting-consumers.html — a high value of GetRecords.IteratorAgeMilliseconds means "a consumer is not keeping up with the stream because it is not processing records fast enough". It clearly indicates a consumer-side problem, which here is the DynamoDB write path, so increasing shards would just make it worse.
upvoted 1 times
evargasbrz
2 years, 4 months ago
Selected Answer: D
Regarding this: https://aws.amazon.com/pt/premiumsupport/knowledge-center/kinesis-data-streams-iteratorage-metric/ — to address the "RecordsProcessed" root cause, you can also check the overall throughput of the Kinesis data stream by monitoring the CloudWatch metrics IncomingBytes and IncomingRecords. For more information about KCL and custom CloudWatch metrics, see Monitoring the Kinesis Client Library with Amazon CloudWatch. However, if the processing time cannot be reduced, then consider upscaling the Kinesis stream by increasing the number of shards. So D looks good.
upvoted 1 times
JohnPi
2 years, 7 months ago
DynamoDB auto scaling modifies provisioned throughput settings only when the actual workload stays elevated (or depressed) for a sustained period of several minutes. The Application Auto Scaling target tracking algorithm seeks to keep the target utilization at or near your chosen value over the long term. Sudden, short-duration spikes of activity are accommodated by the table's built-in burst capacity.
upvoted 2 times
AwsBRFan
2 years, 8 months ago
Selected Answer: A
Since the issue can be related to the consumers, I'm changing to A.
upvoted 2 times
AwsBRFan
2 years, 8 months ago
Selected Answer: D
https://aws.amazon.com/pt/premiumsupport/knowledge-center/kinesis-data-streams-iteratorage-metric/ "However, if the processing time cannot be reduced, then consider upscaling the Kinesis stream by increasing the number of shards."
upvoted 2 times
jj22222
3 years, 1 month ago
Selected Answer: A
A. Use Application Auto Scaling to set a scaling schedule to scale out write capacity on the DynamoDB table during predictable load spikes.
upvoted 2 times
limeboi18
3 years, 4 months ago
Selected Answer: A
I think it's A
upvoted 1 times
tkanmani76
3 years, 5 months ago
Option A. This is a case of records piling up for processing. Kinesis GetRecords.IteratorAgeMilliseconds increasing indicates that records are being processed slowly, and this highlights the risk of records expiring. ProvisionedThroughputExceededException indicates the request rate is too high; the AWS API docs say to reduce the frequency of requests and use exponential backoff so they can be processed (sketched below this thread). To ensure records are processed quickly during surge times, which are known ahead of time, write capacity should be increased in advance.
upvoted 2 times
tkanmani76
3 years, 5 months ago
Related information: when the Kinesis producer is writing to KDS, the capacity is determined by the number of shards (provisioned mode, where the load is known). AWS also supports on-demand mode, where the shards are scaled up/down automatically. Each shard can handle 1 MB/sec for writes, so if we need to increase write throughput we need to increase the shards. This is not relevant in our case, as the data is getting written and Lambda is able to read from the shards.
upvoted 2 times
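
As an aside on the exponential-backoff advice mentioned in the thread above: the AWS SDKs already retry throttled calls with backoff internally, but the pattern itself is simple. A hypothetical helper for the Lambda consumer, assuming a GameScores table:

```python
import random
import time

import boto3
from botocore.exceptions import ClientError

dynamodb = boto3.client("dynamodb")

def put_with_backoff(item, table="GameScores", max_attempts=5):
    """Write one item, backing off exponentially when throttled."""
    for attempt in range(max_attempts):
        try:
            dynamodb.put_item(TableName=table, Item=item)
            return
        except ClientError as err:
            if (err.response["Error"]["Code"]
                    != "ProvisionedThroughputExceededException"):
                raise
            # Sleep 2^attempt * 100 ms plus jitter before retrying.
            time.sleep((2 ** attempt) * 0.1 + random.random() * 0.1)
    raise RuntimeError("record still throttled after all retries")
```

Backoff only smooths short throttling bursts, though; it does not add capacity, which is why the scheduled scale-out remains the actual fix.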
AzureDP900
3 years, 5 months ago
A is the right answer, based on the traffic surge that often surpasses five times the average load.
upvoted 1 times
kirrim
3 years, 6 months ago
You can tell the issue is with DynamoDB because Lambda is reporting a ProvisionedThroughputExceededException, which is part of the DynamoDB SDK that the Lambda code is using, indicating DynamoDB cannot keep up. So you know you're dealing with A or C.

The root of the problem is that even though DynamoDB is set up for auto scaling, it takes a few minutes for it to happen. Merely adjusting the auto scaling policy thresholds can't change that fact; it's still going to take a while to scale up. If the traffic were a slow ramp-up, you might be able to get away with C, but this is a sudden flood that happens twice per day. Since this is very predictable and on a schedule, the easiest method is to schedule the scale-up to happen in advance of the flood hitting. (A)

https://aws.amazon.com/premiumsupport/knowledge-center/kinesis-data-streams-iteratorage-metric/
https://docs.aws.amazon.com/AWSJavaSDK/latest/javadoc/com/amazonaws/services/dynamodbv2/model/ProvisionedThroughputExceededException.html
https://docs.aws.amazon.com/autoscaling/application/userguide/application-auto-scaling-scheduled-scaling.html
upvoted 5 times
tgv
3 years, 6 months ago
AAA ---
upvoted 2 times
WhyIronMan
3 years, 6 months ago
I'll go with A
upvoted 1 times
Kopa
3 years, 6 months ago
I'm for A; it happens at a scheduled time, so why not choose scheduled automatic scaling...
upvoted 2 times
Waiweng
3 years, 6 months ago
it's A
upvoted 3 times
Community vote distribution: A (35%), C (25%), B (20%), Other