Exam AWS Certified Solutions Architect - Associate SAA-C02 topic 1 question 228 discussion

A solutions architect is designing a solution that involves orchestrating a series of Amazon Elastic Container Service (Amazon ECS) task types running on
Amazon EC2 instances that are part of an ECS cluster. The output and state data for all tasks needs to be stored. The amount of data output by each task is approximately 10 MB, and there could be hundreds of tasks running at a time. The system should be optimized for high-frequency reading and writing. As old outputs are archived and deleted, the storage size is not expected to exceed 1 TB.
Which storage solution should the solutions architect recommend?

  • A. An Amazon DynamoDB table accessible by all ECS cluster instances.
  • B. An Amazon Elastic File System (Amazon EFS) with Provisioned Throughput mode.
  • C. An Amazon Elastic File System (Amazon EFS) file system with Bursting Throughput mode.
  • D. An Amazon Elastic Block Store (Amazon EBS) volume mounted to the ECS cluster instances.
Suggested Answer: B

Comments

sk4tto
Highly Voted 3 years, 8 months ago
Agree with B. With Bursting Throughput mode, which is the default, throughput scales as your file system grows: the more you store, the more throughput is available to you. Bursting Throughput incurs no additional charges, and a baseline rate of 50 KB/s per GB of storage comes included in the price you pay for EFS Standard storage.

With Provisioned Throughput, you specify your file system's throughput independent of its size. So if your file system is relatively small but your use case requires a high throughput rate, the default Bursting Throughput mode may not be able to process your requests quickly enough; in that case you need Provisioned Throughput. This option does incur additional charges: you pay for any throughput provisioned above what standard bursting would allow by default.
upvoted 136 times
fwfw
3 years, 7 months ago
Thanks for the explanation, easy to understand.
upvoted 7 times
Teekay1009
3 years, 6 months ago
Thanks, well explained!!!
upvoted 1 times
mahdeo01
3 years, 6 months ago
Beautiful explanation!!! (One point to add: with AWS, always remember that "Provisioned" means GUARANTEED!)
upvoted 20 times
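As a back-of-the-envelope check of the figures quoted in this thread, here is a short Python sketch. The ~50 MiB/s-per-TiB baseline and 100 MiB/s burst ceiling come from the EFS documentation cited in this discussion; the 200-task count is an assumption standing in for "hundreds of tasks running at a time".

```python
# Rough check: EFS Bursting-mode baseline vs. the workload in the question.
storage_tib = 1.0                        # file system stays under 1 TB
baseline_mib_s = 50.0 * storage_tib      # Bursting baseline: ~50 MiB/s per TiB
burst_mib_s = 100.0                      # burst ceiling while credits last

task_output_mib = 10.0                   # per-task output, from the question
concurrent_tasks = 200                   # assumption standing in for "hundreds"

wave_mib = task_output_mib * concurrent_tasks   # ~2,000 MiB per wave of output

print(f"Baseline throughput: {baseline_mib_s:.0f} MiB/s")
print(f"Burst ceiling:       {burst_mib_s:.0f} MiB/s (drains credits)")
print(f"One wave of output:  {wave_mib:.0f} MiB "
      f"-> ~{wave_mib / baseline_mib_s:.0f} s to drain at baseline")
```

At the baseline rate a single wave takes roughly 40 seconds to drain, so sustained high-frequency writes would steadily consume burst credits; that gap is what Provisioned Throughput closes.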
_Drj_
Highly Voted 3 years, 8 months ago
Keywords:
• "The output and state data for all tasks needs to be stored" - meaning a shared file system, i.e. EFS.
• "by each task is approximately 10 MB" - meaning the stored data could stay quite small once old outputs are archived and deleted.
• "optimized for high-frequency reading and writing" plus "not expected to exceed 1 TB" - this one is begging you not to choose Bursting mode.

"There are two throughput modes to choose from for your file system, Bursting Throughput and Provisioned Throughput. With Bursting Throughput mode, throughput on Amazon EFS scales as the size of your file system in the standard storage class grows. For more information about EFS storage classes, see EFS storage classes. With Provisioned Throughput mode, you can instantly provision the throughput of your file system (in MiB/s) independent of the amount of data stored."

High throughput regardless of storage size can be provided only by B.
Reference: https://docs.aws.amazon.com/efs/latest/ug/performance.html
upvoted 28 times
dave0808
3 years, 7 months ago
yes B is the way
upvoted 4 times
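For anyone who wants to see what answer B looks like in practice, here is a minimal boto3 sketch of creating an EFS file system in Provisioned Throughput mode. The region, creation token, tag, and the 256 MiB/s figure are illustrative assumptions, not values given in the question.

```python
import boto3

efs = boto3.client("efs", region_name="us-east-1")  # region is an assumption

# Create a file system whose throughput is decoupled from the amount stored.
response = efs.create_file_system(
    CreationToken="ecs-task-output-fs",   # any unique idempotency string
    PerformanceMode="generalPurpose",     # low-latency default, suits small files
    ThroughputMode="provisioned",         # the point of answer B
    ProvisionedThroughputInMibps=256.0,   # illustrative figure, not from the question
    Tags=[{"Key": "Name", "Value": "ecs-task-output"}],
)
print(response["FileSystemId"], response["ThroughputMode"])
```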
Deepankan
Most Recent 1 year, 6 months ago
Selected Answer: B
BBBBBBBBBBBBBBB
upvoted 1 times
fro13
1 year, 10 months ago
Selected Answer: B
Provisioned to support the use case
upvoted 1 times
BECAUSE
1 year, 11 months ago
Selected Answer: B
B is the answer
upvoted 1 times
zek
2 years, 4 months ago
New answer is D!
upvoted 1 times
hollie
2 years, 6 months ago
Why doesn't EBS work? Because the question says the tasks run on multiple EC2 instances, and an EBS volume can't be shared across EC2 instances.
upvoted 1 times
ahaz
2 years, 9 months ago
Selected Answer: B
https://docs.aws.amazon.com/efs/latest/ug/performance.html: "When burst credits are available, a file system can drive up to 100 MBps per terabyte (TB) of storage, with a minimum of 100 MBps. If no burst credits are available, a file system can drive up to 50 MBps per TB of storage with a minimum of 1 MBps." The question says the file system will not exceed 1 TB, each task outputs about 10 MB of data, and there will be hundreds of tasks at a time. With bursting we get at most 100 MBps, enough for only about 10 task outputs per second. So we need Provisioned Throughput, which is B.
upvoted 1 times
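The burst credits discussed above can also be watched directly: EFS publishes a BurstCreditBalance metric to CloudWatch, so you can verify whether a Bursting-mode file system is outrunning its baseline. A minimal boto3 sketch; the file system ID is a placeholder.

```python
import boto3
from datetime import datetime, timedelta, timezone

cloudwatch = boto3.client("cloudwatch")

# BurstCreditBalance is reported in bytes; a steadily falling balance means
# the workload is writing faster than the Bursting-mode baseline refills it.
now = datetime.now(timezone.utc)
stats = cloudwatch.get_metric_statistics(
    Namespace="AWS/EFS",
    MetricName="BurstCreditBalance",
    Dimensions=[{"Name": "FileSystemId", "Value": "fs-0123456789abcdef0"}],  # placeholder
    StartTime=now - timedelta(hours=6),
    EndTime=now,
    Period=300,                 # 5-minute datapoints
    Statistics=["Minimum"],
)
for point in sorted(stats["Datapoints"], key=lambda p: p["Timestamp"]):
    print(point["Timestamp"], point["Minimum"])
```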
FF11
3 years, 4 months ago
Selected Answer: B
B is correct.
upvoted 1 times
sunhyeok
3 years, 5 months ago
Selected Answer: B
B IS RIGHT
upvoted 1 times
tinyshare
3 years, 6 months ago
Answer B. Provisioned Throughput mode is for when Bursting Throughput mode is NOT enough. Is it enough here? Bursting on a 1 TB file system can do 50 MBps continuously and 100 MBps for up to 12 hours a day. What we need: each task outputs 10 MB and there are hundreds of tasks at a time, i.e. several GB of output per wave, far more than bursting can absorb. Not enough, so we have to escalate to Provisioned. https://docs.aws.amazon.com/efs/latest/ug/performance.html Answer B.
upvoted 1 times
lalia
3 years, 6 months ago
I think B. Amazon EFS Provisioned Throughput is for applications with a high throughput-to-storage (MB/s per TB) ratio. For example, customers using Amazon EFS for development tools, web serving, or content management applications, where the amount of data in the file system is low relative to throughput demands, can instantly get the high levels of throughput their applications require. https://www.amazonaws.cn/en/efs/faq/
upvoted 2 times
borisrabin03
3 years, 6 months ago
The answer is C. Throughput for file operations scales with your file system usage. Depending on the size of your data you get a certain number of burst credits, which allow you to reach higher throughput for a limited time. For example, a 1-TiB file system runs continuously at a throughput of 50 MiB/s and is allowed to burst to 100 MiB/s for 12 hours each day.
upvoted 2 times
NSF2
3 years, 6 months ago
The answer is C as per below. Q. What throughput can I drive against files stored in the EFS Standard-IA or EFS One Zone-IA storage class? The throughput you can drive against an Amazon EFS file system scales linearly with the amount of data stored on the EFS Standard or EFS One Zone storage classes. All Amazon EFS file systems, regardless of size, can burst to 100 MiB/s of throughput. File systems with more than 1 TiB of data stored on EFS Standard or EFS One Zone storage classes can burst to 100 MiB/s per TiB of data stored on EFS Standard or EFS One Zone storage classes. If you require higher amounts of throughput to EFS Standard-IA or EFS One Zone-IA storage classes than your file system allows, use Amazon EFS Provisioned Throughput. https://aws.amazon.com/efs/faq/
upvoted 4 times
Iamrandom
3 years, 6 months ago
The size (1 TB = 1,000 GB) is not enough to cover the throughput requirement: at 50 KB/s per GB in burst mode, 50 KB/s × 1,000 GB = 50,000 KB/s = 50 MB/s. Meanwhile you have to handle an "amount of data output by each task [of] approximately 10 MB, and there could be hundreds of tasks running at a time": 10 MB × 200 ("hundreds") = 2,000 MB, which is another order of magnitude. OK, you can burst, but that doesn't look stable. No cost restrictions are mentioned, so B, provisioned throughput, is the way to go. (BTW, just the fact that you can estimate the required throughput should suggest going provisioned.)
upvoted 3 times
Abdullah777
3 years, 7 months ago
"The system should be optimized for high-frequency reading and writing" we cant make the throughput based on the size here where we have small size essentially. Ans is B.
upvoted 2 times
syu31svc
3 years, 7 months ago
I would take B. D is out, as EBS offers lower performance than EFS here (https://docs.aws.amazon.com/efs/latest/ug/performance.html: throughput scale for EFS is higher than for EBS). A is eliminated, since DynamoDB is not suited to such a scenario. "Optimized for high-frequency reading and writing" -> take advantage of AWS capabilities. https://aws.amazon.com/efs/faq/: "Provisioned Throughput enables Amazon EFS customers to provision their file system's throughput independent of the amount of data stored, optimizing their file system throughput performance to match their application's needs."
upvoted 4 times
gargaditya
3 years, 6 months ago
Agree. 'Provisioned' Throughput allows scaling throughput independent of storage size, which is what the question's "should be optimized" is asking for. Plus, DynamoDB is ruled out because each item (row) is at most 400 KB, while each task output here is 10 MB.
upvoted 1 times
gargaditya
3 years, 6 months ago
https://docs.aws.amazon.com/AmazonECS/latest/bestpracticesguide/storage.html By default, containers don't persist the data they produce. When a container is terminated, the data that it wrote to its writable layer gets destroyed with the container. This makes containers suitable for stateless applications that don't need to store data locally. Containerized applications that require data persistence need a storage backend that isn't destroyed when the application’s container terminates. With Amazon ECS, you can run stateful containers using volumes. Amazon ECS is integrated with Amazon EFS natively, and uses volumes that are integrated with Amazon EBS. For Windows containers, Amazon ECS integrates with FSx for Windows File Server to provide persistent storage. Confused with EBS now!
upvoted 1 times
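To make the ECS-EFS integration quoted above concrete, here is a hedged boto3 sketch of registering an EC2 launch-type task definition that mounts an EFS volume, so task output persists beyond the container's writable layer. The family name, image URI, mount path, and file system ID are all illustrative placeholders.

```python
import boto3

ecs = boto3.client("ecs")

# Register a task definition whose container writes its output to a shared
# EFS volume instead of the container's ephemeral writable layer.
ecs.register_task_definition(
    family="task-output-writer",            # placeholder family name
    requiresCompatibilities=["EC2"],
    containerDefinitions=[
        {
            "name": "worker",
            "image": "example.com/worker:latest",   # placeholder image URI
            "memory": 512,
            "essential": True,
            "mountPoints": [
                {"sourceVolume": "task-output", "containerPath": "/var/task-output"}
            ],
        }
    ],
    volumes=[
        {
            "name": "task-output",
            "efsVolumeConfiguration": {
                "fileSystemId": "fs-0123456789abcdef0",  # placeholder ID
                "rootDirectory": "/",
                "transitEncryption": "ENABLED",
            },
        }
    ],
)
```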
KK_uniq
3 years, 7 months ago
Provisioned throughput B for sure
upvoted 2 times
AjitS
3 years, 7 months ago
Ans is A, as Amazon DynamoDB auto scaling enables a table or a global secondary index to increase its provisioned read and write capacity to handle sudden increases in traffic without throttling.
upvoted 1 times
gargaditya
3 years, 6 months ago
DynamoDB's item (row) size limit is only 400 KB; we need 10 MB here (each task's output size).
upvoted 1 times
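The item-size point above is easy to check in code: DynamoDB rejects any item larger than 400 KB, so a 10 MB task output cannot be stored as a single item, which rules out answer A. A minimal sketch; the table and attribute names are placeholders.

```python
import boto3

DYNAMODB_MAX_ITEM_BYTES = 400 * 1024           # DynamoDB's hard 400 KB item limit

dynamodb = boto3.client("dynamodb")
task_output = b"\x00" * (10 * 1024 * 1024)     # ~10 MB payload, as in the question

if len(task_output) > DYNAMODB_MAX_ITEM_BYTES:
    print("Too large for a single DynamoDB item; answer A is out.")
else:
    # A real write would need to match the table's key schema; "TaskId" and
    # the table name are placeholders.
    dynamodb.put_item(
        TableName="task-outputs",
        Item={"TaskId": {"S": "task-1"}, "Output": {"B": task_output}},
    )
```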
Community vote distribution: A (35%, most voted), C (25%), B (20%), Other