
Exam AWS Certified Solutions Architect - Professional SAP-C02 topic 1 question 80 discussion

A company runs an IoT platform on AWS. IoT sensors in various locations send data to the company’s Node.js API servers on Amazon EC2 instances running behind an Application Load Balancer. The data is stored in an Amazon RDS MySQL DB instance that uses a 4 TB General Purpose SSD volume.

The number of sensors the company has deployed in the field has increased over time, and is expected to grow significantly. The API servers are consistently overloaded and RDS metrics show high write latency.

Which of the following steps together will resolve the issues permanently and enable growth as new sensors are provisioned, while keeping this platform cost-efficient? (Choose two.)

  • A. Resize the MySQL General Purpose SSD storage to 6 TB to improve the volume’s IOPS.
  • B. Re-architect the database tier to use Amazon Aurora instead of an RDS MySQL DB instance and add read replicas.
  • C. Leverage Amazon Kinesis Data Streams and AWS Lambda to ingest and process the raw data.
  • D. Use AWS X-Ray to analyze and debug application issues and add more API servers to match the load.
  • E. Re-architect the database tier to use Amazon DynamoDB instead of an RDS MySQL DB instance.
Suggested Answer: CE

Comments

masetromain
Highly Voted 2 years, 5 months ago
Selected Answer: CE
C and E are the correct answers.

Option C: Leveraging Amazon Kinesis Data Streams and AWS Lambda to ingest and process the raw data would help to resolve the issues with the API servers being consistently overloaded. By using Kinesis, the data can be ingested and processed in real time, allowing the API servers to handle the increased load. Using Lambda to process the data can also help to improve the overall performance and scalability of the platform.

Option E: Re-architecting the database tier to use Amazon DynamoDB instead of an RDS MySQL DB instance would help to resolve the issues with high write latency. DynamoDB is a NoSQL database that is designed for high performance and scalability, making it a good fit for this use case. Additionally, DynamoDB supports auto scaling, which can help to ensure that the database can handle the expected growth in the number of sensors.
upvoted 20 times
SuperP43
2 years, 3 months ago
I disagree with option E. Re-architecting the database tier from RDS to DynamoDB is not possible: RDS is a SQL database, and DynamoDB is a NoSQL database. The correct answers should be C and B.
upvoted 9 times
ajeeshb
1 year, 3 months ago
That is why it says to "Re-architect the DB tier".
upvoted 5 times
tromyunpak
2 years ago
If it were read operations, yes, but the issue is write latency. Also, RDS Proxy is used to handle the write operations.
upvoted 2 times
tromyunpak
2 years ago
Correction (sorry, typo): RDS Proxy is not used to handle write operations properly.
upvoted 1 times
kamaro
2 years, 3 months ago
I agree with you. https://docs.aws.amazon.com/AmazonRDS/latest/AuroraUserGuide/CHAP_AuroraOverview.html Aurora can deliver up to five times the throughput of MySQL and up to three times the throughput of PostgreSQL without requiring changes to most of your existing applications. Aurora includes a high-performance storage subsystem. Its MySQL- and PostgreSQL-compatible database engines are customized to take advantage of that fast distributed storage. The underlying storage grows automatically as needed. An Aurora cluster volume can grow to a maximum size of 128 tebibytes (TiB).
upvoted 2 times
zejou1
2 years, 3 months ago
Naw, you can migrate: https://aws.amazon.com/blogs/big-data/near-zero-downtime-migration-from-mysql-to-dynamodb/ Plus, DynamoDB scales without the complexity of adding read replicas, and it supports IoT out of the box: https://docs.aws.amazon.com/amazondynamodb/latest/developerguide/SQLtoNoSQL.WhyDynamoDB.html This is IoT sensor data that I don't need to store forever, so DynamoDB is better and cheaper for this use case and allows the platform to scale.
upvoted 2 times
Sarutobi
2 years, 1 month ago
I think this is the big point in this question: DynamoDB is being positioned very hard by AWS for IoT. Although it is technically possible to migrate from SQL to DynamoDB with DMS, it is hard, but harder still is changing the data model inside the application or service.
upvoted 1 times
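Picking up zejou1's point that sensor readings usually don't need to be kept forever: with DynamoDB the table can expire old items automatically via TTL instead of running purge jobs against MySQL. A minimal sketch in TypeScript with AWS SDK v3; the `SensorReadings` table and the `expiresAt` attribute are assumptions for illustration, not something stated in the question.

```typescript
import { DynamoDBClient, UpdateTimeToLiveCommand } from "@aws-sdk/client-dynamodb";

const client = new DynamoDBClient({});

// Enable TTL on an existing (hypothetical) table: items carrying an
// epoch-seconds "expiresAt" attribute are deleted automatically once
// that time has passed.
export async function enableTtl(): Promise<void> {
  await client.send(
    new UpdateTimeToLiveCommand({
      TableName: "SensorReadings",
      TimeToLiveSpecification: { AttributeName: "expiresAt", Enabled: true },
    })
  );
}

// Writers then stamp each reading with its retention deadline, e.g. keep 90 days:
// expiresAt: Math.floor(Date.now() / 1000) + 90 * 24 * 60 * 60
```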
OCHT
2 years, 2 months ago
While options C and E may also provide some benefits, they may not address the underlying issues with the overloaded API servers and high write latency in the database. Therefore, options B and D are the best combination for resolving the issues and enabling growth as new sensors are provisioned.
upvoted 1 times
masetromain
2 years, 5 months ago
Option A, Resizing the MySQL General Purpose SSD storage to 6 TB to improve the volume’s IOPS will not solve the problem, as the problem is not just related to storage size but also high write latency. Option B, Re-architecting the database tier to use Amazon Aurora instead of an RDS MySQL DB instance and adding read replicas would help to improve the read performance, but it won't help in reducing write latency. Option D, Using AWS X-Ray to analyze and debug application issues and adding more API servers to match the load, would help in identifying the problem and resolving it, but it will not help in reducing the load on the servers.
upvoted 3 times
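To make the C + E pipeline discussed in this thread concrete, here is a minimal sketch of the consumer side: a Lambda function triggered by the Kinesis stream that writes each sensor reading into DynamoDB. It is TypeScript with AWS SDK v3; the `SensorReadings` table, its `sensorId`/`timestamp` key schema, and the JSON payload shape are illustrative assumptions, not part of the question.

```typescript
import { DynamoDBClient } from "@aws-sdk/client-dynamodb";
import { DynamoDBDocumentClient, PutCommand } from "@aws-sdk/lib-dynamodb";
import type { KinesisStreamEvent } from "aws-lambda";

const ddb = DynamoDBDocumentClient.from(new DynamoDBClient({}));

// One invocation receives a batch of stream records, so a single Lambda
// execution absorbs many sensor readings at once.
export const handler = async (event: KinesisStreamEvent): Promise<void> => {
  for (const record of event.Records) {
    // Kinesis delivers each payload base64-encoded.
    const reading = JSON.parse(
      Buffer.from(record.kinesis.data, "base64").toString("utf8")
    );

    await ddb.send(
      new PutCommand({
        TableName: "SensorReadings", // hypothetical table name
        Item: {
          sensorId: reading.sensorId,   // assumed partition key
          timestamp: reading.timestamp, // assumed sort key
          ...reading,
        },
      })
    );
  }
};
```

At higher volumes the writes would be batched (e.g. with BatchWriteCommand) and the event source mapping's batch size and parallelization factor tuned, but the shape of the flow stays the same.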
Kaps443
Most Recent 1 week, 3 days ago
Selected Answer: CE
C + E provide the most scalable, cost-efficient, and future-proof architecture for the company's IoT platform.
upvoted 1 times
eesa
1 month ago
Selected Answer: BD
B. Re-architect to Amazon Aurora + read replicas ➔ ✅ Yes. Aurora is much more scalable and efficient than plain RDS MySQL. You can create read replicas to distribute load without much manual effort. It allows massive growth cost-effectively because Aurora manages replicas and scaling, even serverless if you want (Aurora Serverless v2).
C. Kinesis Data Streams + Lambda for ingestion ➔ ✅ Yes. If the sensors send huge amounts of data in real time, pushing it straight into EC2 + RDS saturates everything. Kinesis can receive millions of events per second, buffer them temporarily, and process them in batches with Lambda, decoupling the pressure on your APIs and the database. It scales automatically and is very cost-effective.
upvoted 1 times
Paul123456789
2 months, 2 weeks ago
Selected Answer: CE
A. Will not fix the problem. B. Read replicas will not fix the high write latency. D. Is for debugging, not a solution. That makes it C and E.
upvoted 1 times
hhiguita
2 months, 4 weeks ago
Selected Answer: BC
Write performance will be improved by switching from RDS to Aurora. RDS to Aurora is a smooth transition without much work on the application side. Answer E would require changes on the application side, not just the backend DB.
upvoted 1 times
29fb203
3 months, 1 week ago
Selected Answer: BC
B. Re-architect the database tier to use Amazon Aurora and add read replicas: Aurora automatically scales storage up to 128 TB without manual resizing, with faster writes and lower read latency than standard RDS MySQL.
C. Use Amazon Kinesis Data Streams and AWS Lambda for ingestion and processing: this decouples IoT data ingestion from database writes. Kinesis Data Streams ingests large volumes of sensor data without overloading the API servers and scales automatically with the number of sensors.
Not E, because DynamoDB is NoSQL and doesn't support MySQL.
upvoted 1 times
bhanus
5 months, 3 weeks ago
Selected Answer: BC
B. Re-architect the database tier to use Amazon Aurora instead of an RDS MySQL DB instance and add read replicas. Amazon Aurora is a managed database service compatible with MySQL, designed for high performance and scalability. Aurora provides better write performance and supports read replicas to handle increased read traffic as the platform grows. This will address the high write latency issue and enable horizontal scaling.

C. Leverage Amazon Kinesis Data Streams and AWS Lambda to ingest and process the raw data. Using Amazon Kinesis Data Streams for data ingestion offloads traffic from the API servers, reducing their load and improving scalability. AWS Lambda can process the raw data in real time and pass it to the database or other systems, providing a cost-effective and scalable solution for data processing.
upvoted 1 times
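A rough sketch of the ingestion offload described above: instead of the Node.js API writing each reading to MySQL synchronously, it (or an API Gateway service proxy in front of the sensors) simply appends the raw record to the stream and returns. TypeScript with AWS SDK v3; the `sensor-ingest` stream name and the reading shape are assumptions for illustration.

```typescript
import { KinesisClient, PutRecordCommand } from "@aws-sdk/client-kinesis";

const kinesis = new KinesisClient({});

// Fire-and-forget ingestion: the API no longer blocks on an RDS write;
// it only appends the raw reading to the stream and returns.
export async function ingestReading(
  reading: { sensorId: string; [key: string]: unknown }
): Promise<void> {
  await kinesis.send(
    new PutRecordCommand({
      StreamName: "sensor-ingest",    // assumed stream name
      PartitionKey: reading.sensorId, // keeps one sensor's records ordered within a shard
      Data: Buffer.from(JSON.stringify(reading)),
    })
  );
}
```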
Heman31in
6 months, 1 week ago
Selected Answer: CE
By combining C (Kinesis + Lambda) with E (DynamoDB), you're preparing the platform to handle exponential growth in sensor data while ensuring high availability, scalability, and low latency for both data processing and storage. This solution directly addresses the need for a robust, future-proof architecture capable of supporting massive data volumes without bottlenecks, making it well-suited for the IoT platform's growth.
upvoted 1 times
wem
6 months, 1 week ago
Selected Answer: BC
E would require a shift from a relational table to a NoSQL table. What if there are multiple tables?
upvoted 1 times
konieczny69
6 months, 3 weeks ago
Selected Answer: CE
C is straightforward. I go for E rather than B because the DB shows heavy write latency, not a hard limit. Replacing it with Aurora will only speed things up until it hits a limit. The goal is to deal with it once and for all.
upvoted 2 times
0b43291
7 months ago
Selected Answer: BC
By combining options B and C, the company can address the current performance and scalability issues while enabling future growth as more sensors are deployed. Amazon Aurora provides a scalable and high-performance relational database, while Kinesis Data Streams and Lambda offer a serverless and cost-effective solution for ingesting and processing the raw data streams. Option A may provide temporary relief by increasing IOPS, but it doesn't address the scalability and performance limitations of RDS MySQL. Option D can help identify application issues but doesn't solve the underlying database problems. Option E is not ideal as DynamoDB is a NoSQL database, and the existing application is likely designed for a relational database like MySQL or Aurora, requiring significant changes to the application code and data modeling.
upvoted 2 times
amministrazione
9 months, 3 weeks ago
C. Leverage Amazon Kinesis Data Streams and AWS Lambda to ingest and process the raw data. E. Re-architect the database tier to use Amazon DynamoDB instead of an RDS MySQL DB instance.
upvoted 1 times
zolthar_z
11 months ago
Selected Answer: CE
What rules out B is "add read replicas": the problem is writing the new data to the DB, and adding read replicas will increase the cost, which is not what the question asks for ("keeping this platform cost-efficient").
upvoted 2 times
Helpnosense
1 year ago
Selected Answer: BC
Write performance will be improved by switching from RDS to Aurora. RDS to Aurora is a smooth transition without much work on the application side. Answer E would require changes on the application side, not just the backend DB.
upvoted 2 times
TonytheTiger
1 year, 2 months ago
Selected Answer: CE
Options CE and BC both have merit. The only reason I choose E over B is because AWS says so: per AWS, DynamoDB is suitable for IoT (sensor data and log ingestion). https://docs.aws.amazon.com/whitepapers/latest/best-practices-for-migrating-from-rdbms-to-dynamodb/suitable-workloads.html
upvoted 3 times
gofavad926
1 year, 3 months ago
Selected Answer: CE
CE: Kinesis + Lambda & DynamoDB.
upvoted 1 times
a54b16f
1 year, 3 months ago
Selected Answer: BC
Switching from RDS MySQL to Aurora will improve performance, by up to 10 times, which could solve the write issue. Switching from a relational database to NoSQL is not practical; it needs re-engineering of the whole application. Plus, the performance improvements of NoSQL are around data reads, not data writes (creating/updating indexes is a huge effort).
upvoted 2 times
Community vote distribution: A (35%), C (25%), B (20%), Other.