Exam AWS Certified Solutions Architect - Professional SAP-C02 topic 1 question 448 discussion

A company is deploying a new web-based application and needs a storage solution for the Linux application servers. The company wants to create a single location for updates to application data for all instances. The active dataset will be up to 100 GB in size. A solutions architect has determined that peak operations will occur for 3 hours daily and will require a total of 225 MiBps of read throughput.

The solutions architect must design a Multi-AZ solution that makes a copy of the data available in another AWS Region for disaster recovery (DR). The DR copy has an RPO of less than 1 hour.

Which solution will meet these requirements?

  • A. Deploy a new Amazon Elastic File System (Amazon EFS) Multi-AZ file system. Configure the file system for 75 MiBps of provisioned throughput. Implement replication to a file system in the DR Region.
  • B. Deploy a new Amazon FSx for Lustre file system. Configure Bursting Throughput mode for the file system. Use AWS Backup to back up the file system to the DR Region.
  • C. Deploy a General Purpose SSD (gp3) Amazon Elastic Block Store (Amazon EBS) volume with 225 MiBps of throughput. Enable Multi-Attach for the EBS volume. Use AWS Elastic Disaster Recovery to replicate the EBS volume to the DR Region.
  • D. Deploy an Amazon FSx for OpenZFS file system in both the production Region and the DR Region. Create an AWS DataSync scheduled task to replicate the data from the production file system to the DR file system every 10 minutes.
Suggested Answer: A 🗳️

Comments

e4bc18e
Highly Voted 11 months, 4 weeks ago
So practically everyone here is wrong, because it is A. Here is why. B is wrong because there is no such thing as a Bursting Throughput mode for Lustre (that is an EFS thing), and AWS Backup will not meet the RPO either. C is obviously wrong because a gp3 volume can't be shared; Multi-Attach isn't supported for gp3. D is wrong because DataSync tasks cannot be scheduled more frequently than hourly, so the RPO is not met. All of those are easily ruled out because they contain bad information. The trick with A is that the question only says the active working set is 100 GB, not the size of the entire file system. EFS accumulates burst credits, so for every 100 GB of file system size you can burst up to 300 MiBps for up to 72 minutes. You provision 75 MiBps because that averages out over time, so you aren't being overcharged for the provisioned throughput. (A boto3 sketch of this setup follows this thread.)
upvoted 21 times
AzureDP900
5 months, 3 weeks ago
I agree with your explanation; I will go with A.
upvoted 1 times
...
...
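To make e4bc18e's option A concrete, here is a minimal boto3 sketch of an EFS file system with 75 MiBps of provisioned throughput and cross-Region replication. The Regions, creation token, and tag values are placeholder assumptions, not anything stated in the question.

```python
import boto3

# Production Region is assumed; substitute your own.
efs = boto3.client("efs", region_name="us-east-1")

# Regional (Multi-AZ) EFS file system with 75 MiBps of provisioned throughput.
fs = efs.create_file_system(
    CreationToken="webapp-shared-data",            # hypothetical token
    PerformanceMode="generalPurpose",
    ThroughputMode="provisioned",
    ProvisionedThroughputInMibps=75,
    Encrypted=True,
    Tags=[{"Key": "Name", "Value": "webapp-shared-data"}],
)
fs_id = fs["FileSystemId"]

# EFS replication maintains a read-only copy of the file system in another
# Region; AWS documents a replication lag of minutes for most file systems,
# which is what satisfies the sub-1-hour RPO.
efs.create_replication_configuration(
    SourceFileSystemId=fs_id,
    Destinations=[{"Region": "us-west-2"}],        # DR Region (assumed)
)
```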
vip2
Most Recent 9 months, 4 weeks ago
Selected Answer: A
"A scheduled task runs at a frequency that you specify, with a minimum interval of 1 hour." https://docs.aws.amazon.com/datasync/latest/userguide/task-scheduling.html (See the DataSync sketch below.)
upvoted 3 times
...
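As an illustration of the scheduling limit vip2 quotes, here is a minimal boto3 sketch of a DataSync task with the tightest schedule the service accepts. The location ARNs and task name are placeholders.

```python
import boto3

datasync = boto3.client("datasync", region_name="us-east-1")  # assumed Region

task = datasync.create_task(
    # Placeholder location ARNs; real ones come from create_location_* calls.
    SourceLocationArn="arn:aws:datasync:us-east-1:111122223333:location/loc-source",
    DestinationLocationArn="arn:aws:datasync:us-east-1:111122223333:location/loc-dest",
    Name="openzfs-dr-sync",                        # hypothetical name
    # rate(1 hour) is the most frequent schedule DataSync accepts; an
    # expression such as rate(10 minutes) is rejected, which is why option D
    # cannot meet a sub-1-hour RPO.
    Schedule={"ScheduleExpression": "rate(1 hour)"},
)
print(task["TaskArn"])
```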
Helpnosense
10 months, 1 week ago
Selected Answer: A
A. EFS supports cross-Region replication. e4bc18e has already pointed out why D is wrong.
upvoted 4 times
...
trungtd
10 months, 3 weeks ago
Selected Answer: A
Big thanks to e4bc18e.
upvoted 4 times
...
Zas1
11 months, 2 weeks ago
Selected Answer: A
A. Solution written up by e4bc18e.
upvoted 3 times
...
titi_r
1 year ago
Selected Answer: D
D is correct. "You can use DataSync to transfer files between two FSx for OpenZFS file systems, and also move data to a file system in a different AWS Region or AWS account. You can also use DataSync with FSx for OpenZFS file systems for other tasks. For example, you can perform one-time data migrations, periodically ingest data for distributed workloads, and schedule replication for data protection and recovery." https://docs.aws.amazon.com/fsx/latest/OpenZFSGuide/migrate-files-to-fsx-datasync.html
upvoted 2 times
e4bc18e
11 months, 4 weeks ago
This is wrong: a DataSync task cannot be scheduled more frequently than once an hour, so the under-1-hour RPO is not met.
upvoted 2 times
titi_r
11 months, 2 weeks ago
@e4bc18e, it seems you are right. Indeed, DataSync can go as granular as 1 hour. Found this: "If the file system’s baseline throughput exceeds the Provisioned throughput amount, then it automatically uses the Bursting throughput..." For 1 TiB of metered data in Standard storage, it can burst to 300 MiBps read-only for 12 hours per day. https://docs.aws.amazon.com/efs/latest/ug/performance.html#throughput-modes
upvoted 1 times
...
...
...
ovladan
1 year ago
Selected Answer: B
https://docs.aws.amazon.com/fsx/latest/LustreGuide/performance.html#fsx-aggregate-perf
upvoted 1 times
titi_r
1 year ago
“B” is wrong because with AWS Backup you can back up as frequently as every hour, but the RPO must be less than 1 hour. https://docs.aws.amazon.com/aws-backup/latest/devguide/creating-a-backup-plan.html#create-backup-plan-console
upvoted 1 times
...
...
adelynllllllllll
1 year, 1 month ago
D: EFS throughput is related to the size of the file system, but the question says the active dataset will be only up to 100 GB; at that size, the throughput will be lower than required. So D.
upvoted 1 times
...
VerRi
1 year, 1 month ago
Selected Answer: A
D involves managing separate file systems that do not natively offer a "single location" experience across regions without additional configuration and replication mechanisms.
upvoted 3 times
...
pangchn
1 year, 1 month ago
Selected Answer: D
D. A sneaky question, since my first impression was to go for A, but A is wrong because of the 75 MiBps provisioned throughput. What's the calculation supposed to be? One Region has 3 AZs, so 75 x 3 = 225? EFS is not provisioned that way. Even then, 225 MiBps would be the total throughput, while the question asks for 225 MiBps of read throughput, implying the total would be higher. Anyway, A is wrong. https://docs.aws.amazon.com/efs/latest/ug/performance.html
C is wrong since EBS Multi-Attach doesn't support gp3. https://docs.aws.amazon.com/ebs/latest/userguide/ebs-volumes-multi.html
upvoted 4 times
pangchn
1 year, 1 month ago
B is wrong because an hourly AWS Backup job won't meet the RPO requirement of less than 1 hour. "The backup frequency determines how often AWS Backup creates a snapshot backup. Using the console, you can choose a frequency of every hour, 12 hours, daily, weekly, or monthly. You can also create a cron expression that creates snapshot backups as frequently as hourly. Using the AWS Backup CLI, you can schedule snapshot backups as frequently as hourly." https://docs.aws.amazon.com/aws-backup/latest/devguide/creating-a-backup-plan.html (A backup-plan sketch follows this thread.)
upvoted 3 times
...
...
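To illustrate the limit pangchn quotes, here is a minimal boto3 sketch of an AWS Backup plan with an hourly schedule and a cross-Region copy, the tightest snapshot cadence the linked docs describe. The plan name, vault names, and destination vault ARN are placeholders.

```python
import boto3

backup = boto3.client("backup", region_name="us-east-1")  # assumed Region

plan = backup.create_backup_plan(
    BackupPlan={
        "BackupPlanName": "fsx-lustre-hourly",        # hypothetical name
        "Rules": [
            {
                "RuleName": "hourly-with-dr-copy",
                "TargetBackupVaultName": "Default",
                # cron(0 * ? * * *) fires at the top of every hour; per the
                # linked docs, hourly is the most frequent snapshot schedule,
                # so the DR copy can lag by up to an hour or more.
                "ScheduleExpression": "cron(0 * ? * * *)",
                "CopyActions": [
                    {
                        # Cross-Region copy to a DR vault (placeholder ARN).
                        "DestinationBackupVaultArn": (
                            "arn:aws:backup:us-west-2:111122223333:backup-vault:Default"
                        )
                    }
                ],
            }
        ],
    }
)
print(plan["BackupPlanId"])
```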
Dgix
1 year, 1 month ago
Selected Answer: D
D is the answer. A would also have worked.
upvoted 1 times
...
CMMC
1 year, 1 month ago
Selected Answer: D
Amazon FSx for OpenZFS is a fully managed file system service that supports native replication between regions, making it well-suited for DR scenarios with a low RPO requirement. Using AWS DataSync for replication every 10 minutes ensures that the DR copy stays up to date with minimal data loss. This solution provides the required read throughput, data replication, and DR capabilities with less operational overhead.
upvoted 1 times
e4bc18e
11 months, 4 weeks ago
Wrong. DataSync tasks cannot be scheduled more frequently than hourly, so you cannot schedule a DataSync task to run every 10 minutes. Apparently everyone is forgetting about burst credits for EFS. The question only says the "active working set" is 100 GB, not the size of the entire file system. For every 100 GB of provisioned EFS space you can burst to 300 MiBps for 72 minutes. (A short throughput calculation follows this thread.)
upvoted 1 times
...
...
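As a rough check of e4bc18e's averaging argument, the arithmetic below compares what 75 MiBps of provisioned throughput delivers over a day with what the 3-hour, 225 MiBps peak consumes. This is a simplified model; actual EFS burst-credit accounting differs in detail.

```python
# Simplified model: throughput capacity accrues at the provisioned rate all
# day and is spent at the actual read rate during the peak window.
PROVISIONED_MIBPS = 75   # option A's provisioned throughput
PEAK_MIBPS = 225         # required read throughput at peak
PEAK_HOURS = 3           # daily peak window

daily_budget_mib = PROVISIONED_MIBPS * 24 * 3600   # ~6.5 million MiB per day
peak_demand_mib = PEAK_MIBPS * PEAK_HOURS * 3600   # ~2.4 million MiB per peak

print(f"daily budget: {daily_budget_mib / 1024:,.0f} GiB")  # ~6,328 GiB
print(f"peak demand:  {peak_demand_mib / 1024:,.0f} GiB")   # ~2,373 GiB
print(f"peak uses {peak_demand_mib / daily_budget_mib:.0%} of the daily budget")

# The peak consumes well under half of what 75 MiBps provides over 24 hours,
# which is the intuition behind provisioning 75 MiBps and bursting to 225.
```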
Community vote distribution: A (35%), C (25%), B (20%), Other