Exam AWS Certified Database - Specialty topic 1 question 135 discussion

Question #: 135
Topic #: 1

To meet new data compliance requirements, a company needs to keep critical data durably stored and readily accessible for 7 years. Data that is more than 1 year old is considered archival data and must automatically be moved out of the Amazon Aurora MySQL DB cluster every week. On average, around 10 GB of new data is added to the database every month. A database specialist must choose the most operationally efficient solution to migrate the archival data to Amazon S3.
Which solution meets these requirements?

  • A. Create a custom script that exports archival data from the DB cluster to Amazon S3 using a SQL view, then deletes the archival data from the DB cluster. Launch an Amazon EC2 instance with a weekly cron job to execute the custom script.
  • B. Configure an AWS Lambda function that exports archival data from the DB cluster to Amazon S3 using a SELECT INTO OUTFILE S3 statement, then deletes the archival data from the DB cluster. Schedule the Lambda function to run weekly using Amazon EventBridge (Amazon CloudWatch Events).
  • C. Configure two AWS Lambda functions: one that exports archival data from the DB cluster to Amazon S3 using the mysqldump utility, and another that deletes the archival data from the DB cluster. Schedule both Lambda functions to run weekly using Amazon EventBridge (Amazon CloudWatch Events).
  • D. Use AWS Database Migration Service (AWS DMS) to continually export the archival data from the DB cluster to Amazon S3. Configure an AWS Data Pipeline process to run weekly that executes a custom SQL script to delete the archival data from the DB cluster.
Suggested Answer: B
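For illustration, here is a minimal sketch of what option B's weekly archival function might look like, assuming a Python Lambda that can reach the cluster and has the pymysql driver packaged with it. The table and column names (orders, created_at), bucket, region, and environment variables are hypothetical, not from the question.

```python
# Hypothetical sketch of option B's Lambda handler. Schema, bucket, region, and
# environment variables are illustrative assumptions. The Aurora cluster must be
# associated with an IAM role that allows s3:PutObject and have the cluster
# parameter for SELECT INTO OUTFILE S3 configured.
import datetime
import os

import pymysql  # assumed to be bundled with the function or provided via a layer


def lambda_handler(event, context):
    cutoff = (datetime.date.today() - datetime.timedelta(days=365)).isoformat()
    s3_uri = f"s3-us-east-1://example-archive-bucket/orders/{datetime.date.today()}"

    conn = pymysql.connect(
        host=os.environ["DB_HOST"],      # Aurora cluster endpoint (assumed env vars)
        user=os.environ["DB_USER"],
        password=os.environ["DB_PASSWORD"],
        database=os.environ["DB_NAME"],
    )
    try:
        with conn.cursor() as cur:
            # Export rows older than one year straight from Aurora to S3.
            # FORMAT CSV HEADER assumes a reasonably recent Aurora MySQL version.
            cur.execute(
                "SELECT * FROM orders WHERE created_at < %s "
                "INTO OUTFILE S3 %s FORMAT CSV HEADER",
                (cutoff, s3_uri),
            )
            # Delete the archived rows only after the export statement succeeded.
            cur.execute("DELETE FROM orders WHERE created_at < %s", (cutoff,))
        conn.commit()
    finally:
        conn.close()
    return {"archived_before": cutoff, "s3_uri": s3_uri}
```

A weekly Amazon EventBridge (CloudWatch Events) rule, for example one with the schedule expression rate(7 days), would then invoke the function; as the discussion below notes, roughly 2.5 GB per week should fit comfortably within Lambda's 15-minute limit.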

Comments

Jaypdv
Highly Voted 3 years, 7 months ago
Going for B, since SELECT INTO OUTFILE S3 is available on Aurora. Option C uses mysqldump, which does not dump directly to S3.
upvoted 14 times
...
MultiAZ
Most Recent 1 year, 4 months ago
Selected Answer: B
Answer is B. The data should be readily accessible (e.g., via Athena), so mysqldump is not useful.
upvoted 1 times
...
Pranava_GCP
1 year, 9 months ago
Selected Answer: B
B. Configure an AWS Lambda function that exports archival data from the DB cluster to Amazon S3 using a SELECT INTO OUTFILE S3 statement, then deletes the archival data from the DB cluster. Schedule the Lambda function to run weekly using Amazon EventBridge (Amazon CloudWatch Events).
upvoted 1 times
...
rags1482
2 years, 7 months ago
If the amount of data to be selected is large (more than 25 GB), AWS recommends using multiple SELECT INTO OUTFILE S3 statements to save the data to Amazon S3. Answer: B
upvoted 1 times
...
sachin
2 years, 10 months ago
B is the correct approach. For C, mysqldump cannot dump directly into S3. https://aws.amazon.com/blogs/database/best-practices-for-exporting-and-importing-data-from-amazon-aurora-mysql-to-amazon-s3/
upvoted 1 times
...
novice_expert
3 years, 1 month ago
Selected Answer: B
B because: 1. The Lambda maximum run time is 15 minutes (https://aws.amazon.com/about-aws/whats-new/2018/10/aws-lambda-supports-functions-that-can-run-up-to-15-minutes/). 2. SELECT INTO OUTFILE S3 is available, and with roughly 10 GB of new data per month (about 2.5 GB per week) the copy should finish within 15 minutes (https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Integrating.SaveIntoS3.html). AWS DMS can copy to S3, but option D exports continually while we need a weekly export (https://aws.amazon.com/blogs/database/archiving-data-from-relational-databases-to-amazon-glacier-via-aws-dms/).
upvoted 1 times
...
pcpcpc888
3 years, 3 months ago
Running a continuous DMS job would NOT be operationally efficient; when that is the concern, serverless options combining Lambda and EventBridge are a much better choice. Considering the volume of the weekly archive, the duration would not hit the Lambda timeout. Option C would also need more development, because SELECT INTO OUTFILE S3 integrates directly with S3. So B.
upvoted 3 times
...
Raj12131
3 years, 4 months ago
Option A requires more effort and hence can be ruled out. Option B uses the same Lambda function for data migration and the deletion thereafter; it might not work because Lambda could time out. Option C uses mysqldump, which is OK but not as efficient as DMS. Option D is the correct solution in my view.
upvoted 1 times
...
Shunpin
3 years, 5 months ago
Selected Answer: B
For option D, consider what the DMS export to S3 looks like and how DMS handles "delete" CDC statements. With the DMS option, you need additional tasks to filter the data, and it is not easy to maintain.
upvoted 1 times
...
SMAZ
3 years, 5 months ago
I think it's D: https://aws.amazon.com/blogs/database/archiving-data-from-relational-databases-to-amazon-glacier-via-aws-dms/
upvoted 1 times
...
jove
3 years, 5 months ago
Lambda functions have a 15-minute maximum execution time. If the extract and delete take longer than 15 minutes, using a Lambda function won't work. This limitation might rule out options B and C. Option D will work, but "continually export the archival data" is not a requirement. Thoughts?
upvoted 1 times
VPup
3 years, 3 months ago
Good catch on the 15-minute limit for Lambda! But in the context of the question ("around 10 GB of new data is added to the database every month"), I would assume about 2.5 GB of data per week, so it seems reasonable that the export and delete will finish within 15 minutes. So B is still an option here.
upvoted 1 times
...
...
Aesthet
3 years, 7 months ago
B final answer
upvoted 2 times
...
manan728
3 years, 7 months ago
B is correct. https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/AuroraMySQL.Integrating.SaveIntoS3.html
upvoted 4 times
...
Chhotu_DBA
3 years, 7 months ago
Option B correct
upvoted 2 times
...
novak18
3 years, 8 months ago
Answer should be D? https://docs.aws.amazon.com/dms/latest/userguide/CHAP_Target.S3.html https://docs.aws.amazon.com/datapipeline/latest/DeveloperGuide/dp-object-sqlactivity.html
upvoted 1 times
faramawi
3 years, 7 months ago
I think it should be D too; it provides the "most operationally efficient solution to migrate the archival data to Amazon S3." https://aws.amazon.com/blogs/database/archiving-data-from-relational-databases-to-amazon-glacier-via-aws-dms/ https://aws.amazon.com/blogs/database/replicate-data-from-amazon-aurora-to-amazon-s3-with-aws-database-migration-service/
upvoted 1 times
Justu
3 years, 5 months ago
Can you use an AWS Data Pipeline custom SQL query to delete data from RDS?
upvoted 1 times
jove
3 years, 5 months ago
Yes, you can, but I'm not sure whether using DMS is the right option.
upvoted 1 times
Jiang_aws1
2 years, 7 months ago
DMS is a database migration tool and is expensive, so it is used occasionally rather than left running continuously as a job tool. Lambda is the right tool for this.
upvoted 1 times
...
...
...
...
...
Community vote distribution: A (35%), C (25%), B (20%), Other