
Exam AWS Certified Solutions Architect - Professional topic 1 question 681 discussion

A financial services company receives a regular data feed from its credit card servicing partner. Approximately 5,000 records are sent every 15 minutes in plaintext, delivered over HTTPS directly into an Amazon S3 bucket with server-side encryption. This feed contains sensitive credit card primary account number
(PAN) data. The company needs to automatically mask the PAN before sending the data to another S3 bucket for additional internal processing. The company also needs to remove and merge specific fields, and then transform the record into JSON format. Additionally, extra feeds are likely to be added in the future, so any design needs to be easily expandable.
Which solution will meet these requirements?

  • A. Trigger an AWS Lambda function on file delivery that extracts each record and writes it to an Amazon SQS queue. Trigger another Lambda function when new messages arrive in the SQS queue to process the records, writing the results to a temporary location in Amazon S3. Trigger a final Lambda function once the SQS queue is empty to transform the records into JSON format and send the results to another S3 bucket for internal processing.
  • B. Trigger an AWS Lambda function on file delivery that extracts each record and writes it to an Amazon SQS queue. Configure an AWS Fargate container application to automatically scale to a single instance when the SQS queue contains messages. Have the application process each record, and transform the record into JSON format. When the queue is empty, send the results to another S3 bucket for internal processing and scale down the AWS Fargate instance.
  • C. Create an AWS Glue crawler and custom classifier based on the data feed formats and build a table definition to match. Trigger an AWS Lambda function on file delivery to start an AWS Glue ETL job to transform the entire record according to the processing and transformation requirements. Define the output format as JSON. Once complete, have the ETL job send the results to another S3 bucket for internal processing.
  • D. Create an AWS Glue crawler and custom classifier based upon the data feed formats and build a table definition to match. Perform an Amazon Athena query on file delivery to start an Amazon EMR ETL job to transform the entire record according to the processing and transformation requirements. Define the output format as JSON. Once complete, send the results to another S3 bucket for internal processing and scale down the EMR cluster.
Suggested Answer: C
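The per-record processing that option C delegates to the Glue ETL job (masking the PAN, merging fields, emitting JSON) can be sketched in plain Python. This is a simplified stand-in for the job's transformation logic, not the actual implementation; the field names `pan`, `first_name`, `last_name`, and `amount` are illustrative assumptions, since the feed format is not given:

```python
import json

def mask_pan(pan: str) -> str:
    """Mask all but the last four digits of a primary account number."""
    return "*" * (len(pan) - 4) + pan[-4:]

def transform_record(record: dict) -> str:
    """Mask the PAN, merge the name fields, drop raw inputs, emit JSON."""
    out = {
        "pan": mask_pan(record["pan"]),
        # merge two source fields into one output field
        "cardholder": f'{record["first_name"]} {record["last_name"]}',
        "amount": record["amount"],
    }
    return json.dumps(out)
```

In a real Glue job this logic would run inside the job script (for example as a PySpark map or a DynamicFrame transform), which is what makes the design easy to extend to additional feeds.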

Comments

liono
Highly Voted 3 years, 7 months ago
C seems to be correct https://docs.aws.amazon.com/glue/latest/dg/trigger-job.html
upvoted 23 times
fabianjanu
3 years, 7 months ago
I agree. Option A can bring cost problems and Lambda concurrency limits. Furthermore, Glue already solves these issues with much less development.
upvoted 4 times
blackgamer
Highly Voted 3 years, 6 months ago
C is the correct answer. https://d1.awsstatic.com/Products/product-name/diagrams/product-page-diagram_Glue_Event-driven-ETL-Pipelines.e24d59bb79a9e24cdba7f43ffd234ec0482a60e2.png
upvoted 7 times
kirrim
3 years, 6 months ago
Beautiful diagram! Just in case the URL for that image gets modified, scroll down to "Use Cases" on the home page for Glue: https://aws.amazon.com/glue/
upvoted 1 times
milofficial
Most Recent 2 years, 1 month ago
Selected Answer: C
C is correct
upvoted 1 times
CloudHell
2 years, 10 months ago
I'm going with C.
upvoted 1 times
cldy
3 years, 5 months ago
C. Create an AWS Glue crawler and custom classifier based on the data feed formats and build a table definition to match. Trigger an AWS Lambda function on file delivery to start an AWS Glue ETL job to transform the entire record according to the processing and transformation requirements. Define the output format as JSON. Once complete, have the ETL job send the results to another S3 bucket for internal processing.
upvoted 1 times
AzureDP900
3 years, 5 months ago
C is correct. You can use a Glue crawler to populate the AWS Glue Data Catalog with tables. The Lambda function can be triggered by S3 event notifications when object-create events occur. The Lambda function then starts the Glue ETL job, which transforms the records, masks the sensitive data, and converts the output format to JSON. This solution meets all requirements.
upvoted 1 times
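The S3-event-to-Glue wiring described in the comment above can be sketched as a Lambda handler using boto3's `start_job_run`. This is a minimal illustration, not a production function: the job name `mask-pan-etl-job` and the job argument names are assumptions, and the Glue client is injectable so the handler can be exercised without AWS credentials:

```python
import urllib.parse

# Hypothetical job name; in practice this is the name of the Glue ETL job
# created for the feed.
GLUE_JOB_NAME = "mask-pan-etl-job"

def handler(event, context=None, glue_client=None):
    """Handle an S3 ObjectCreated event notification by starting the Glue job.

    glue_client is injectable for local testing; by default a real boto3
    Glue client is created.
    """
    if glue_client is None:
        import boto3  # only needed when running in AWS
        glue_client = boto3.client("glue")

    run_ids = []
    for record in event.get("Records", []):
        bucket = record["s3"]["bucket"]["name"]
        # S3 event keys are URL-encoded
        key = urllib.parse.unquote_plus(record["s3"]["object"]["key"])
        response = glue_client.start_job_run(
            JobName=GLUE_JOB_NAME,
            Arguments={"--source_bucket": bucket, "--source_key": key},
        )
        run_ids.append(response["JobRunId"])
    return {"started": run_ids}
```

Because each feed file simply starts another job run, adding future feeds means adding an event notification and, if the format differs, a new crawler/classifier, which is why this design expands easily.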
AzureDP900
3 years, 5 months ago
C is correct
upvoted 1 times
acloudguru
3 years, 5 months ago
Selected Answer: C
https://aws.amazon.com/glue/
upvoted 2 times
andylogan
3 years, 6 months ago
It's C
upvoted 1 times
tgv
3 years, 6 months ago
CCC ---
upvoted 1 times
WhyIronMan
3 years, 6 months ago
I'll go with C
upvoted 2 times
mustpassla
3 years, 6 months ago
D, a use case of Glue crawler.
upvoted 2 times
Waiweng
3 years, 6 months ago
it's C
upvoted 2 times
KnightVictor
3 years, 6 months ago
going with C
upvoted 2 times
eji
3 years, 6 months ago
I think D
upvoted 1 times
wasabidev
3 years, 6 months ago
C for me
upvoted 1 times
Kian1
3 years, 6 months ago
I will go with C
upvoted 2 times
Community vote distribution: A (35%), C (25%), B (20%), Other