
Exam AWS Certified Solutions Architect - Professional topic 1 question 433 discussion

A company runs a video processing platform. Users upload files by connecting to a web server, which stores them on an Amazon EFS share. This web server runs on a single Amazon EC2 instance. A separate group of instances, running in an Auto Scaling group, scans the EFS share's directory structure for new files and generates new videos (thumbnails, different resolutions, compressed versions, etc.) according to an instructions file that is uploaded along with each video. Another application, running on a group of instances managed by an Auto Scaling group, processes the video files and then deletes them from the EFS share. The results are stored in an S3 bucket, and links to the processed video files are emailed to the customer.
The company has recently discovered that adding more instances to the Auto Scaling group causes many files to be processed twice, so processing speed does not improve. The maximum size of these video files is 2 GB.
What should the Solutions Architect do to improve reliability and reduce the redundant processing of video files?
What should the Solutions Architect do to improve reliability and reduce the redundant processing of video files?

  • A. Modify the web application to upload the video files directly to Amazon S3. Use Amazon CloudWatch Events to trigger an AWS Lambda function every time a file is uploaded, and have this Lambda function put a message into an Amazon SQS queue. Modify the video processing application to read from SQS queue for new files and use the queue depth metric to scale instances in the video processing Auto Scaling group.
  • B. Set up a cron job on the web server instance to synchronize the contents of the EFS share into Amazon S3. Trigger an AWS Lambda function every time a file is uploaded to process the video file and store the results in Amazon S3. Using Amazon CloudWatch Events, trigger an Amazon SES job to send an email to the customer containing the link to the processed file.
  • C. Rewrite the web application to run directly from Amazon S3 and use Amazon API Gateway to upload the video files to an S3 bucket. Use an S3 trigger to run an AWS Lambda function each time a file is uploaded to process and store new video files in a different bucket. Using CloudWatch Events, trigger an SES job to send an email to the customer containing the link to the processed file.
  • D. Rewrite the web application to run from Amazon S3 and upload the video files to an S3 bucket. Each time a new file is uploaded, trigger an AWS Lambda function to put a message in an SQS queue containing the link and the instructions. Modify the video processing application to read from the SQS queue and the S3 bucket. Use the queue depth metric to adjust the size of the Auto Scaling group for video processing instances.
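For concreteness, here is a minimal sketch of the Lambda described in option D: it only enqueues a job pointing at the uploaded object, it does not process the video itself. The queue URL and the instructions-file naming convention are assumptions for illustration, not part of the question.

```python
import json

# Placeholder queue URL; in option D this would be the video-jobs queue.
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/video-jobs"

def build_message(event):
    """Turn an S3 put event into the job message the processors will read."""
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = record["s3"]["object"]["key"]
    return {
        "video": f"s3://{bucket}/{key}",
        # Assumed convention: instructions uploaded alongside as <name>.json.
        "instructions": f"s3://{bucket}/{key.rsplit('.', 1)[0]}.json",
    }

def handler(event, context):
    import boto3  # imported lazily; only needed when running inside Lambda
    sqs = boto3.client("sqs")
    msg = build_message(event)
    # The heavy processing happens on the EC2 workers, not here.
    sqs.send_message(QueueUrl=QUEUE_URL, MessageBody=json.dumps(msg))
    return msg
```

Because the Lambda finishes in milliseconds, its 15-minute timeout is irrelevant to how long the videos take to process.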
Suggested Answer: B
Reference:
https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/FIFO-queues.html

Comments

chaudh
Highly Voted 3 years, 7 months ago
D is my choice. A & B are incorrect: the web app sits on a single EC2 instance, which is not HA; hosting it on S3 helps improve reliability. C: a Lambda function should not be used to process the video, since Lambda is suited to short executions. D is the best choice, and an SQS message carrying the S3 link plus the instructions is a pattern applied a lot in the real world.
upvoted 29 times
AWS2020
3 years, 7 months ago
I think Lambda can be used to process the video; it won't take more than 15 mins to process one. D is not correct: it may still duplicate work, and the requirement clearly says we need to remove the redundancy.
upvoted 1 times
newme
3 years, 6 months ago
Not remove, but reduce.
upvoted 1 times
9Ow30
3 years, 7 months ago
I also vote for D. It has all the standard-practice elements: keep the data in S3 and a pointer in SQS, process the data on trigger via Lambda, and use the SQS queue length as the Auto Scaling driver.
upvoted 3 times
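The "queue length as the Auto Scaling driver" idea above can be sketched as a small helper that turns the SQS backlog into a desired instance count. The jobs-per-instance rate and the group size limits are illustrative assumptions.

```python
import math

def desired_capacity(backlog, jobs_per_instance, min_size=1, max_size=20):
    """Instances needed to drain `backlog` queued jobs, clamped to group limits."""
    wanted = math.ceil(backlog / jobs_per_instance)
    return max(min_size, min(max_size, wanted))

def poll_backlog(queue_url):
    import boto3  # imported lazily; executed only when actually polling AWS
    sqs = boto3.client("sqs")
    attrs = sqs.get_queue_attributes(
        QueueUrl=queue_url,
        AttributeNames=["ApproximateNumberOfMessages"],
    )
    return int(attrs["Attributes"]["ApproximateNumberOfMessages"])
```

In practice the same effect is usually achieved with a target tracking or step scaling policy on the ApproximateNumberOfMessages metric rather than custom polling code.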
Sunflyhome
3 years, 6 months ago
Lambda has a 5-minute timeout. Video processing is a time-consuming process; there is no way to use Lambda to decode or encode a video file (unless it is very small, a couple of hundred MB). D is better than A in terms of HA: let AWS handle S3's stability rather than relying on the single instance in A.
upvoted 1 times
Jupi
3 years, 5 months ago
The Lambda function is not for video processing; it just triggers the video processing, so the timeout shouldn't be an issue.
upvoted 2 times
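To illustrate the point above, a rough sketch of the processor-side change in option D: the long-running video work stays on the EC2 workers, which poll SQS and delete a message only after the job succeeds. The message shape and the process_video callback are placeholders, not from the question.

```python
import json

def parse_job(message_body):
    """Extract the video location and instructions pointer from a job message."""
    job = json.loads(message_body)
    return job["video"], job.get("instructions")

def worker_loop(queue_url, process_video):
    import boto3  # imported lazily; only needed on a real worker
    sqs = boto3.client("sqs")
    while True:
        resp = sqs.receive_message(
            QueueUrl=queue_url, MaxNumberOfMessages=1, WaitTimeSeconds=20
        )
        for msg in resp.get("Messages", []):
            video, instructions = parse_job(msg["Body"])
            process_video(video, instructions)  # long-running EC2 work
            # Delete only after success, so a crashed worker's message
            # becomes visible again and another instance can retry it.
            sqs.delete_message(
                QueueUrl=queue_url, ReceiptHandle=msg["ReceiptHandle"]
            )
```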
SD13
3 years, 6 months ago
D is missing the part about how the Lambda is triggered. And why do we rewrite the app? Just modifying the app would be fine. Going with A.
upvoted 3 times
sarah_t
3 years, 6 months ago
Lambda can be triggered directly from S3 when a file is uploaded. A keeps the web app on a single EC2 instance, which is hardly a good architecture.
upvoted 3 times
Frank1
Highly Voted 3 years, 7 months ago
C is incorrect. API Gateway cannot be used to upload the 2 GB video file to S3, as "API Gateway supports a reasonable payload size limit of 10MB." https://sookocheff.com/post/api/uploading-large-payloads-through-api-gateway/ I support D.
upvoted 12 times
superuser784
2 years, 6 months ago
Good point. I also ruled out API Gateway because its timeout is only 29 seconds, and not every internet connection is fast enough to upload 2 GB within that timeframe.
upvoted 1 times
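The usual workaround for the 10 MB payload limit discussed above is to keep the file body out of API Gateway entirely and hand the browser a presigned S3 URL to upload against directly. A minimal sketch, with the size threshold from the comments and the bucket name and expiry as assumptions:

```python
API_GATEWAY_LIMIT = 10 * 1024 * 1024  # 10 MB payload limit cited above

def needs_direct_upload(size_bytes):
    """True when the file is too large to proxy through API Gateway."""
    return size_bytes > API_GATEWAY_LIMIT

def presigned_put(bucket, key, expires=900):
    import boto3  # imported lazily; only needed when actually signing
    s3 = boto3.client("s3")
    # The client PUTs the file to this URL; no Gateway payload limit applies.
    return s3.generate_presigned_url(
        "put_object",
        Params={"Bucket": bucket, "Key": key},
        ExpiresIn=expires,
    )
```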
mnsait
Most Recent 4 months, 3 weeks ago
Selected Answer: C
The question calls out that "many files are processed twice". SQS does not guarantee that duplicates are avoided. The issue with A, B, and D is that all of them use SQS. While C avoids SQS, it uses Lambda to process and store new videos, so it is not the right solution either unless the videos are quite small. Given the trade-off, I choose C, since the size of the videos is not mentioned.
upvoted 1 times
maxh8086
2 years, 3 months ago
https://youtu.be/rCTBXrV3EY8 D
upvoted 1 times
maxh8086
2 years, 3 months ago
My bad, it's C.
upvoted 1 times
hobokabobo
2 years, 4 months ago
None of the answers work. Try it: either there is no email (A, D), there is still duplicate processing (B), limits are breached (C), the CloudWatch trigger is not possible (A), or Lambda is called magically without any trigger at all (D). So it might be A, C, or D.
upvoted 1 times
LrdKanien
2 years, 5 months ago
B. Solutions Architects don't rewrite applications.
upvoted 3 times
resnef
2 years, 5 months ago
A difficult question, but I will choose A. This link helped with the choice: https://aws.amazon.com/blogs/media/processing-user-generated-content-using-aws-lambda-and-ffmpeg/
upvoted 1 times
Netaji
2 years, 5 months ago
SQS FIFO queues avoid duplicate messages, so if the web site is static, then "D". https://docs.aws.amazon.com/AWSSimpleQueueService/latest/SQSDeveloperGuide/FIFO-queues-exactly-once-processing.html
upvoted 1 times
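The exactly-once behaviour linked above applies to FIFO queues, where a MessageDeduplicationId derived from the S3 object key would stop a re-delivered upload event from enqueuing the same job twice within the 5-minute deduplication window. A sketch, with the queue URL and message shape assumed for illustration:

```python
import hashlib
import json

def dedup_id(object_key):
    """Stable deduplication ID: same object key always yields the same ID."""
    return hashlib.sha256(object_key.encode()).hexdigest()

def enqueue_job(queue_url, object_key):
    import boto3  # imported lazily; only needed when actually sending
    sqs = boto3.client("sqs")
    sqs.send_message(
        QueueUrl=queue_url,  # must be a .fifo queue for dedup to apply
        MessageBody=json.dumps({"key": object_key}),
        MessageGroupId="video-jobs",
        MessageDeduplicationId=dedup_id(object_key),
    )
```

Note this only deduplicates at enqueue time; consumers still need to delete messages only after successful processing.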
mrgreatness
2 years, 6 months ago
It's D for me
upvoted 1 times
AzureDP900
3 years, 4 months ago
I'll go with D
upvoted 1 times
cldy
3 years, 4 months ago
D. Rewrite the web application to run from Amazon S3 and upload the video files to an S3 bucket. Each time a new file is uploaded, trigger an AWS Lambda function to put a message in an SQS queue containing the link and the instructions. Modify the video processing application to read from the SQS queue and the S3 bucket. Use the queue depth metric to adjust the size of the Auto Scaling group for video processing instances.
upvoted 1 times
acloudguru
3 years, 5 months ago
Selected Answer: D
Use SQS for the decoupling and S3 for the storage.
upvoted 2 times
andylogan
3 years, 5 months ago
It's D
upvoted 1 times
chand0401
3 years, 5 months ago
B is correct. A & D use SQS, with which redundant processing is still possible, and the main issue here is avoiding redundant processing. C: using API Gateway to upload to S3 is unnecessary. B is correct because it avoids rewriting the application by simply syncing the EFS files to S3 with a cron job. There is no requirement to process the files immediately, so using cron is not a bad idea.
upvoted 4 times
tgv
3 years, 5 months ago
DDD ---
upvoted 1 times
Akhil254
3 years, 5 months ago
D Correct
upvoted 1 times
zolthar_z
3 years, 5 months ago
It's a really hard question. A & D both look good, but I will go with D for one reason: triggering a Lambda from S3 events is easier than configuring a CloudWatch Event.
upvoted 3 times
student22
3 years, 5 months ago
Also, A is not HA: it uses a single EC2 instance to host the web site.
upvoted 1 times
student22
3 years, 5 months ago
Agree with D
upvoted 2 times
vinodhg
3 years, 5 months ago
But you don't know whether the website is static, so how can you be sure about hosting it in S3?
upvoted 1 times
Community vote distribution: A (35%), C (25%), B (20%), Other
