
Exam AWS Certified Solutions Architect - Associate SAA-C02 topic 1 question 272 discussion

A company has a build server that is in an Auto Scaling group and often has multiple Linux instances running. The build server requires consistent and mountable shared NFS storage for jobs and configurations.
Which storage option should a solutions architect recommend?

  • A. Amazon S3
  • B. Amazon FSx
  • C. Amazon Elastic Block Store (Amazon EBS)
  • D. Amazon Elastic File System (Amazon EFS)
Suggested Answer: D
Reference:
https://aws.amazon.com/efs/
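
For reference, a minimal boto3 sketch of why D fits: EFS exposes a shared NFS endpoint that every Linux instance in the Auto Scaling group can mount at the same time. The subnet and security group IDs below are placeholders, not values from the question.

    # Provision an EFS file system and one mount target, then print the
    # NFS mount command the build instances would run (e.g. in user data).
    import boto3

    efs = boto3.client("efs", region_name="us-east-1")

    fs = efs.create_file_system(
        CreationToken="build-server-shared",   # idempotency token
        PerformanceMode="generalPurpose",
        Encrypted=True,
    )
    fs_id = fs["FileSystemId"]

    # In practice, wait until the file system is 'available' first.
    # One mount target is needed per AZ the Auto Scaling group spans.
    efs.create_mount_target(
        FileSystemId=fs_id,
        SubnetId="subnet-0123456789abcdef0",      # placeholder
        SecurityGroups=["sg-0123456789abcdef0"],  # must allow TCP 2049 (NFS)
    )

    # Each Linux build instance then mounts the same file system over NFS:
    print(f"sudo mount -t nfs4 -o nfsvers=4.1 "
          f"{fs_id}.efs.us-east-1.amazonaws.com:/ /mnt/build")

S3 (A) is object storage, EBS (C) is not a general-purpose shared file system across many instances, and FSx (B) is the Windows/Lustre option, which is why the discussion below converges on D.
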

Comments

lunamycat
Highly Voted 3 years, 8 months ago
D. EFS -> NFS
upvoted 39 times
nrd777
3 years, 7 months ago
thank you
upvoted 1 times
...
cnmc
3 years, 7 months ago
Hijacking this comment to say something about the "new questions" scattered in the comments below: most of them have been added to ExamTopics' SAA-C02 bank (i.e. the one you are using); the others are for other AWS exams. Hugely appreciate the people who took the time to post them, but I realized I just wasted an hour looking through...
upvoted 15 times
swadeey
3 years, 7 months ago
Thanks Mate
upvoted 1 times
...
JackFrag
3 years, 7 months ago
Thanks mate.
upvoted 1 times
...
...
...
FF11
Most Recent 3 years, 5 months ago
Selected Answer: D
D is correct.
upvoted 1 times
...
Cotter
3 years, 7 months ago
A
upvoted 1 times
peterdabeast
3 years, 6 months ago
Needs mountable NFS. The only one that can do that is EFS.
upvoted 2 times
...
...
Cotter
3 years, 7 months ago
I think it's B: FSx.
upvoted 1 times
swadeey
3 years, 7 months ago
FSx is for Windows; here we have Linux instances.
upvoted 3 times
Edgarrt
3 years, 5 months ago
FSx for Lustre?
upvoted 2 times
pkhdog22
2 years, 9 months ago
FSx for Lustre would apply if the question mentioned something like HPC.
upvoted 1 times
...
...
...
...
Kaps12
3 years, 7 months ago
New Question: A solutions architect is designing a new workload in which an AWS Lambda function will access an Amazon DynamoDB table. What is the most secure means of granting the Lambda function access to the DynamoDB table?
A. Create an IAM role with the necessary permissions to access the DynamoDB table. Assign the role to the Lambda function.
B. Create a DynamoDB username and password and give them to the developer to use in the Lambda function.
C. Create an IAM user and create access and secret keys for the user. Give the user the necessary permissions to access the DynamoDB table.
D. Create an IAM role allowing access from AWS Lambda.
upvoted 2 times
lollo1234
3 years, 7 months ago
A is more complete than D. Exclude B and C.
upvoted 1 times
...
pr
3 years, 7 months ago
A is more appropriate.
upvoted 1 times
...
...
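
For the Lambda/DynamoDB question above, the replies settle on A. As a hedged illustration, this is roughly what an execution role scoped to one table looks like in boto3; the role name, table ARN and actions are made up for the example.

    # Option A in code: an IAM role that Lambda assumes, with an inline
    # policy limited to one DynamoDB table. Names and ARNs are placeholders.
    import json
    import boto3

    iam = boto3.client("iam")

    trust_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "lambda.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }],
    }

    role = iam.create_role(
        RoleName="orders-fn-role",  # placeholder
        AssumeRolePolicyDocument=json.dumps(trust_policy),
    )

    table_policy = {
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Action": ["dynamodb:GetItem", "dynamodb:PutItem", "dynamodb:Query"],
            "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/Orders",
        }],
    }

    iam.put_role_policy(
        RoleName="orders-fn-role",
        PolicyName="orders-table-access",
        PolicyDocument=json.dumps(table_policy),
    )

    # The role ARN is then used as the function's execution role
    # (the Role parameter when the Lambda function is created).
    print(role["Role"]["Arn"])

No long-lived credentials are stored anywhere, which is what makes A more secure than B and C, and more specific than D.
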
syu31svc
3 years, 7 months ago
This is D 101%
upvoted 3 times
...
CountryGent
3 years, 8 months ago
D indeed
upvoted 3 times
...
sa_the_cool
3 years, 8 months ago
NEW QUESTION: A company is deploying an application that processes large quantities of data in batches as needed. The company plans to use Amazon EC2 instances for the workload. The network architecture must support a highly scalable solution and prevent groups of nodes from sharing the same underlying hardware. Which combination of network solutions will meet these requirements? (Select TWO.)
A. Create Capacity Reservations for the EC2 instances to run in a placement group.
B. Run the EC2 instances in a spread placement group.
C. Run the EC2 instances in a cluster placement group.
D. Place the EC2 instances in an EC2 Auto Scaling group.
E. Run the EC2 instances in a partition placement group.
upvoted 2 times
Jehan
3 years, 7 months ago
A and B
upvoted 2 times
francisco_guerra
3 years, 7 months ago
Spread placement strictly places a small number of individual instances across distinct underlying hardware to reduce correlated failures. The question talks about groups of nodes, so partition is the correct one.
upvoted 1 times
naveenagurjara
2 years, 11 months ago
That's Partition placement that you are describing.
upvoted 1 times
...
...
...
mahdeo01
3 years, 7 months ago
The answer to the above question is: spread placement & partition placement, as per the definitions given in this document >> https://docs.aws.amazon.com/AWSEC2/latest/UserGuide/placement-groups.html
upvoted 2 times
...
soundarya_vs
3 years, 7 months ago
I think D & E. With spread placement we can place a max of 7 instances per AZ, but for a highly scalable solution I think partition will be right.
upvoted 1 times
...
Atanu_M
3 years, 7 months ago
Ans: D & E. D addresses the highly scalable part; E (partition placement group) ensures groups of nodes don't share the same underlying rack/hardware.
upvoted 11 times
waqas
3 years, 7 months ago
Why not Spread?
upvoted 1 times
Sallywhite
3 years, 7 months ago
Spread Placement Groups are described as having individual instances all on separate hardware. ... In contrast, Partition Placement Groups are described as large groups of instances where each group is placed on separate hardware. Each partition comprises multiple instances.
upvoted 5 times
pr
3 years, 7 months ago
D for scaling, E for the partition placement group. It cannot be B because the question asks to "prevent groups of nodes from sharing the same underlying hardware"; partition placement groups satisfy this.
upvoted 4 times
lehoang15tuoi
3 years, 7 months ago
Sorry, I mean partition and Auto Scaling. The question says "groups", so it has to be partition.
upvoted 1 times
...
...
...
...
...
...
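
The thread above converges on D and E: the Auto Scaling group covers "highly scalable", and a partition placement group keeps each group of nodes on separate underlying hardware. A small boto3 sketch of the E part; the group name, AMI ID and instance type are placeholders.

    # Create a partition placement group and launch instances into one
    # partition. Partitions do not share underlying hardware with each other.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    ec2.create_placement_group(
        GroupName="batch-partitions",   # placeholder
        Strategy="partition",
        PartitionCount=4,               # up to 7 partitions per AZ
    )

    ec2.run_instances(
        ImageId="ami-0123456789abcdef0",  # placeholder
        InstanceType="c5.large",
        MinCount=2,
        MaxCount=2,
        Placement={"GroupName": "batch-partitions", "PartitionNumber": 0},
    )

    # For the D part, an Auto Scaling group would reference the placement
    # group through its launch template instead of calling run_instances.
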
sa_the_cool
3 years, 8 months ago
NEW QUESTION: A company has several web servers that need to frequently access a common Amazon RDS MySQL Multi-AZ DB instance. The company wants a secure method for the web servers to connect to the database while meeting a security requirement to rotate user credentials frequently. Which solution meets these requirements?
A. Store the database user credentials in AWS Secrets Manager. Grant the necessary IAM permissions to allow the web servers to access AWS Secrets Manager.
B. Store the database user credentials in AWS Systems Manager OpsCenter. Grant the necessary IAM permissions to allow the web servers to access OpsCenter.
C. Store the database user credentials in a secure Amazon S3 bucket. Grant the necessary IAM permissions to allow the web servers to retrieve credentials and access the database.
D. Store the database user credentials in files encrypted with AWS Key Management Service (AWS KMS) on the web server file system. The web server should be able to decrypt the files and access the database.
upvoted 3 times
VincentZhang
3 years, 7 months ago
Answer A. It is about credentials, not an encryption key, so D is out.
upvoted 1 times
...
Balki
3 years, 7 months ago
I got this one in my Professional exam. Answer is A.
upvoted 1 times
...
Lila2A
3 years, 7 months ago
A : You can configure AWS Secrets Manager to automatically rotate the secret for a secured service or database. Secrets Manager natively knows how to rotate secrets for supported Amazon RDS databases.
upvoted 3 times
...
...
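
For the credential-rotation question above (A), the web servers would fetch the current credentials from Secrets Manager at connection time, so rotation never requires a code change. A minimal sketch, assuming a hypothetical secret name that stores the usual RDS JSON structure.

    # Read the rotated DB credentials at connect time. The secret name is
    # a placeholder; RDS-managed secrets are stored as a JSON document.
    import json
    import boto3

    secrets = boto3.client("secretsmanager", region_name="us-east-1")

    resp = secrets.get_secret_value(SecretId="prod/mysql/app-user")
    creds = json.loads(resp["SecretString"])

    db_host = creds["host"]
    db_user = creds["username"]
    db_pass = creds["password"]
    # ...open the MySQL connection with these values. Secrets Manager's
    # built-in rotation for supported RDS engines keeps them current.
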
sa_the_cool
3 years, 8 months ago
NEW QUESTION: A solutions architect is designing an architecture that includes web, application, and database tiers. The web tier must be capable of auto scaling. The solutions architect has decided to separate each tier into its own subnets. The design includes two public subnets and four private subnets. The security team requires that tiers be able to communicate with each other only when there is a business need and that all other network traffic be blocked. What should the solutions architect do to meet these requirements?
A. Create an Amazon GuardDuty source/destination rule set to control communication.
B. Create one security group for all tiers to limit traffic to only the required sources and destinations.
C. Create specific security groups for each tier to limit traffic to only the required sources and destinations.
D. Create network ACLs in all six subnets to limit traffic to the sources and destinations required for the application to function.
upvoted 2 times
VincentZhang
3 years, 7 months ago
I will choose D (network ACLs), as the question mentions blocking network traffic.
upvoted 7 times
lehoang15tuoi
3 years, 7 months ago
Correct. To add to this answer, notice that the question already says: "The solutions architect has decided to separate each tier into its own subnets. The design includes two public subnets and four private subnets." In this case, applying SGs is going to take more time and be more complicated.
upvoted 1 times
...
...
Atanu_M
3 years, 7 months ago
C: NACLs are for restricting specific traffic (an IP or CIDR range), whereas security groups allow specific traffic, particularly traffic from instances in a given security group. Here you want: 1. the web SG to accept traffic on 80/443 from all IPs, 2. the application SG to accept traffic from the web SG only, and 3. the DB SG to accept traffic from the app SG only.
upvoted 6 times
...
...
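
For the three-tier question above, the point behind C is that security groups can reference other security groups, which makes "only when there is a business need" easy to express per tier. A hedged sketch; the VPC ID and ports are illustrative only.

    # Per-tier security groups: the app tier accepts traffic only from the
    # web tier's SG, and the DB tier only from the app tier's SG.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")
    vpc_id = "vpc-0123456789abcdef0"   # placeholder

    web_sg = ec2.create_security_group(
        GroupName="web-tier", Description="web tier", VpcId=vpc_id)["GroupId"]
    app_sg = ec2.create_security_group(
        GroupName="app-tier", Description="app tier", VpcId=vpc_id)["GroupId"]
    db_sg = ec2.create_security_group(
        GroupName="db-tier", Description="db tier", VpcId=vpc_id)["GroupId"]

    # Web tier: HTTPS from anywhere.
    ec2.authorize_security_group_ingress(
        GroupId=web_sg,
        IpPermissions=[{"IpProtocol": "tcp", "FromPort": 443, "ToPort": 443,
                        "IpRanges": [{"CidrIp": "0.0.0.0/0"}]}])

    # App tier: only from the web tier's security group.
    ec2.authorize_security_group_ingress(
        GroupId=app_sg,
        IpPermissions=[{"IpProtocol": "tcp", "FromPort": 8080, "ToPort": 8080,
                        "UserIdGroupPairs": [{"GroupId": web_sg}]}])

    # DB tier: only MySQL from the app tier's security group.
    ec2.authorize_security_group_ingress(
        GroupId=db_sg,
        IpPermissions=[{"IpProtocol": "tcp", "FromPort": 3306, "ToPort": 3306,
                        "UserIdGroupPairs": [{"GroupId": app_sg}]}])
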
elvancedonzy
3 years, 8 months ago
287. A solutions architect is designing a new API using Amazon API Gateway that will receive requests from users. The volume of requests is highly variable; several hours can pass without receiving a single request. The data processing will take place asynchronously, but should be completed within a few seconds after a request is made. Which compute service should the solutions architect have the API invoke to deliver the requirements at the lowest cost?
A. An AWS Glue job
B. An AWS Lambda function
C. A containerized service hosted in Amazon Elastic Kubernetes Service (Amazon EKS)
D. A containerized service hosted in Amazon ECS with Amazon EC2
upvoted 2 times
93madox
3 years, 8 months ago
B is probably the right choice here.
upvoted 12 times
lehoang15tuoi
3 years, 7 months ago
It's definitely the right choice here
upvoted 2 times
...
...
...
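
For the sporadic-traffic API question above (B), the asynchronous part maps to an "Event" invocation of the Lambda function, which API Gateway can be set up to trigger; the equivalent boto3 call looks like this. The function name and payload are placeholders.

    # Asynchronous Lambda invocation: the call returns immediately and the
    # function processes the payload in the background.
    import json
    import boto3

    lam = boto3.client("lambda", region_name="us-east-1")

    resp = lam.invoke(
        FunctionName="process-request",   # placeholder
        InvocationType="Event",           # async; sync would be "RequestResponse"
        Payload=json.dumps({"order_id": "123"}).encode(),
    )
    print(resp["StatusCode"])             # 202 when the async invoke is accepted
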
elvancedonzy
3 years, 8 months ago
A company has created a multi-tier application for its ecommerce website. The website uses an Application Load Balancer that resides in the public subnets, a web tier in the public subnets, and a MySQL cluster hosted on Amazon EC2 instances in the private subnets. The MySQL database needs to retrieve product catalog and pricing information that is hosted on the internet by a third-party provider. A solutions architect must devise a strategy that maximizes security without increasing operational overhead. What should the solutions architect do to meet these requirements?
A. Deploy a NAT instance in the VPC. Route all the internet-based traffic through the NAT instance.
B. Deploy a NAT gateway in the public subnets. Modify the private subnet route table to direct all internet-bound traffic to the NAT gateway.
C. Configure an internet gateway and attach it to the VPC. Modify the private subnet route table to direct internet-bound traffic to the internet gateway.
D. Configure a virtual private gateway and attach it to the VPC. Modify the private subnet route table to direct internet-bound traffic to the virtual private gateway.
upvoted 2 times
93madox
3 years, 8 months ago
B - NAT gateway, as we want the private subnet servers to initiate connections when needed.
upvoted 12 times
...
...
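
For the outbound-access question above (B), the two moving parts are a NAT gateway in a public subnet and a default route in the private subnets' route table. A boto3 sketch with placeholder IDs.

    # NAT gateway in a public subnet plus a 0.0.0.0/0 route from the
    # private route table. All resource IDs are placeholders.
    import boto3

    ec2 = boto3.client("ec2", region_name="us-east-1")

    eip = ec2.allocate_address(Domain="vpc")

    nat = ec2.create_nat_gateway(
        SubnetId="subnet-0aaa1111bbbb22223",      # a public subnet (placeholder)
        AllocationId=eip["AllocationId"],
    )
    nat_id = nat["NatGateway"]["NatGatewayId"]

    ec2.create_route(
        RouteTableId="rtb-0ccc3333dddd44445",     # private route table (placeholder)
        DestinationCidrBlock="0.0.0.0/0",
        NatGatewayId=nat_id,
    )

    # The MySQL instances can now reach the third-party catalog on the
    # internet, but nothing on the internet can initiate a connection to them.
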
elvancedonzy
3 years, 8 months ago
A solutions architect is designing the cloud architecture for a company that needs to host hundreds of machine learning models for its users. During startup, the models need to load up to 10 GB of data from Amazon S3 into memory, but they do not need disk access. Most of the models are used sporadically, but the users expect all of them to be highly available and accessible with low latency. Which solution meets the requirements and is MOST cost-effective?
A. Deploy models as AWS Lambda functions behind an Amazon API Gateway for each model.
B. Deploy models as Amazon Elastic Container Service (Amazon ECS) services behind an Application Load Balancer for each model.
C. Deploy models as AWS Lambda functions behind a single Amazon API Gateway with path-based routing where one path corresponds to each model.
D. Deploy models as Amazon Elastic Container Service (Amazon ECS) services behind a single Application Load Balancer with path-based routing where one path corresponds to each model.
upvoted 2 times
margz
3 years, 8 months ago
I think C
upvoted 5 times
lehoang15tuoi
3 years, 7 months ago
This question was posted in January 2021, and Lambda only increased its memory limit to 10 GB in December 2020, so I figure it wasn't written with Lambda in mind. Running hundreds of models on Lambda that use 10 GB of RAM is going to cost a lot, most likely more than ECS Fargate. In practice, I have rarely (if ever) seen any company run ML models on Lambda. It's doable, but unlikely: the 15-minute run time is too limiting. I'm also quite sure that no company wants to write "hundreds of models" in Lambda; the amount of money you're going to spend on developer effort is going to be much more than any saving you could be looking at.
upvoted 1 times
...
...
naveenagurjara
2 years, 11 months ago
D for sure
upvoted 1 times
...
...
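
The thread above is split between C and D, but both hinge on the same idea: one front door with path-based routing, where one path corresponds to each model, instead of a separate API Gateway or load balancer per model. A hedged sketch of the D-style variant (an ALB listener rule); the ARNs, priority and path are placeholders.

    # One ALB listener rule per model path, each forwarding to that model's
    # target group. ARNs and path are placeholders.
    import boto3

    elbv2 = boto3.client("elbv2", region_name="us-east-1")

    elbv2.create_rule(
        ListenerArn="arn:aws:elasticloadbalancing:us-east-1:123456789012:"
                    "listener/app/models-alb/50dc6c495c0c9188/f2f7dc8efc522ab2",
        Priority=10,
        Conditions=[{
            "Field": "path-pattern",
            "PathPatternConfig": {"Values": ["/models/churn/*"]},
        }],
        Actions=[{
            "Type": "forward",
            "TargetGroupArn": "arn:aws:elasticloadbalancing:us-east-1:123456789012:"
                              "targetgroup/churn-model/6d0ecf831eec9f09",
        }],
    )
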
elvancedonzy
3 years, 8 months ago
A company has an ecommerce application that stores data in an on-premises SQL database. The company has decided to migrate this database to AWS. However, as part of the migration, the company wants to find a way to attain sub-millisecond responses to common read requests. A solutions architect knows that the increase in speed is paramount and that a small percentage of stale data returned in the database reads is acceptable. What should the solutions architect recommend?
A. Build Amazon RDS read replicas.
B. Build the database as a larger instance type.
C. Build a database cache using Amazon ElastiCache.
D. Build a database cache using Amazon Elasticsearch Service (Amazon ES).
upvoted 3 times
margz
3 years, 8 months ago
Agree, it's C
upvoted 2 times
...
qurren
3 years, 8 months ago
C for sure
upvoted 8 times
...
...
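
For the sub-millisecond-reads question above (C), the usual pattern with ElastiCache is cache-aside: serve reads from the cache, fall back to the database on a miss, and write the result back with a TTL, which is exactly the "small percentage of stale data is acceptable" trade-off. A sketch using the redis client; the endpoint and the SQL helper are placeholders.

    # Cache-aside reads against an ElastiCache for Redis endpoint.
    # A short TTL bounds how stale a cached product can get.
    import json
    import redis

    cache = redis.Redis(host="my-cache.abc123.use1.cache.amazonaws.com", port=6379)

    def load_product_from_sql(product_id):
        # Placeholder for the real query against the migrated SQL database.
        return {"id": product_id, "price": 19.99}

    def get_product(product_id, ttl_seconds=60):
        key = f"product:{product_id}"
        cached = cache.get(key)
        if cached is not None:                        # hit: sub-millisecond
            return json.loads(cached)
        product = load_product_from_sql(product_id)   # miss: query the DB
        cache.setex(key, ttl_seconds, json.dumps(product))
        return product
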
elvancedonzy
3 years, 8 months ago
A company is developing an ecommerce application that will consist of a load-balanced front end, a container-based application, and a relational database. A solutions architect needs to create a highly available solution that operates with as little manual intervention as possible. Which solutions meet these requirements? (Choose two.)
A. Create an Amazon RDS DB instance in Multi-AZ mode.
B. Create an Amazon RDS DB instance and one or more replicas in another Availability Zone.
C. Create an Amazon EC2 instance-based Docker cluster to handle the dynamic application load.
D. Create an Amazon Elastic Container Service (Amazon ECS) cluster with a Fargate launch type to handle the dynamic application load.
E. Create an Amazon Elastic Container Service (Amazon ECS) cluster with an Amazon EC2 launch type to handle the dynamic application load.
upvoted 2 times
qurren
3 years, 8 months ago
I will choose A and D
upvoted 15 times
VincentZhang
3 years, 7 months ago
It is not a combination answer; I choose C and D.
upvoted 2 times
...
Kaps12
3 years, 7 months ago
Why not B & D?
upvoted 1 times
lehoang15tuoi
3 years, 7 months ago
Because "as little manual intervention as possible". B takes more management efforts than A
upvoted 1 times
...
ask2
3 years, 7 months ago
Multi-AZ provides high availability and read replicas improve performance.
upvoted 2 times
...
...
...
...
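
For the "as little manual intervention as possible" question above (A and D in the top vote), a hedged boto3 sketch of the two pieces; every identifier, subnet and credential below is a placeholder.

    # Answer A: an RDS MySQL instance in Multi-AZ mode (automatic failover).
    # Answer D: an ECS service on Fargate (no EC2 fleet to patch or scale).
    import boto3

    rds = boto3.client("rds", region_name="us-east-1")
    rds.create_db_instance(
        DBInstanceIdentifier="shop-db",
        Engine="mysql",
        DBInstanceClass="db.m5.large",
        AllocatedStorage=100,
        MasterUsername="admin",
        MasterUserPassword="change-me-please",   # placeholder
        MultiAZ=True,
    )

    ecs = boto3.client("ecs", region_name="us-east-1")
    ecs.create_service(
        cluster="shop-cluster",
        serviceName="shop-app",
        taskDefinition="shop-app:1",
        desiredCount=2,
        launchType="FARGATE",
        networkConfiguration={
            "awsvpcConfiguration": {
                "subnets": ["subnet-0123456789abcdef0"],
                "securityGroups": ["sg-0123456789abcdef0"],
            }
        },
    )
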
sa_the_cool
3 years, 8 months ago
Q-131: A company is using a fleet of Amazon EC2 instances to ingest data from on-premises data sources. The data is in JSON format and ingestion rates can be as high as 1 MB/s. When an EC2 instance is rebooted, the data in-flight is lost. The company's data science team wants to query ingested data in near-real time. Which solution provides near-real-time data querying that is scalable with minimal data loss?
A. Publish data to Amazon Kinesis Data Streams. Use Kinesis Data Analytics to query the data.
B. Publish data to Amazon Kinesis Data Firehose with Amazon Redshift as the destination. Use Amazon Redshift to query the data.
C. Store ingested data in an EC2 instance store. Publish data to Amazon Kinesis Data Firehose with Amazon S3 as the destination. Use Amazon Athena to query the data.
D. Store ingested data in an Amazon Elastic Block Store (Amazon EBS) volume. Publish data to Amazon ElastiCache for Redis. Subscribe to the Redis channel to query the data.
upvoted 3 times
dmscountera
3 years, 7 months ago
With Kinesis Data Analytics, you can process and query real-time, streaming data. You use standard SQL to process your data streams, so you don't have to learn any new programming languages. You just point Kinesis Data Analytics to an incoming data stream, write your SQL queries, and then specify where you want the results loaded. Kinesis Data Analytics uses the KCL to read data from streaming data sources as one part of your underlying application. The service abstracts this from you, as well as many of the more complex concepts associated with using the KCL, such as checkpointing. Answer: A.
upvoted 3 times
occupatissimo
3 years, 7 months ago
The requirement is for near-real time, so Firehose -> B.
upvoted 4 times
kowal_001
3 years, 7 months ago
Amazon Kinesis Data Analytics provides built-in functions to filter, aggregate, and transform streaming data for advanced analytics. It processes streaming data with sub-second latencies, enabling you to analyze and respond to incoming data and events in real time.
upvoted 1 times
...
naveenagurjara
2 years, 11 months ago
Redshift cannot do real-time analysis like KDA can.
upvoted 1 times
...
...
...
...
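
For the ingestion question above, the thread leans A: producers put JSON records onto a Kinesis data stream and Kinesis Data Analytics queries the stream with SQL in near-real time. A producer-side boto3 sketch; the stream name and record are placeholders.

    # Publish a JSON record to a Kinesis data stream from an ingest instance.
    import json
    import boto3

    kinesis = boto3.client("kinesis", region_name="us-east-1")

    record = {"source": "on-prem-feed-42", "value": 17.3}

    kinesis.put_record(
        StreamName="ingest-stream",            # placeholder
        Data=json.dumps(record).encode(),
        PartitionKey=record["source"],         # spreads records across shards
    )
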
sa_the_cool
3 years, 8 months ago
Q-130: A company has an on-premises MySQL database used by the global sales team with infrequent access patterns. The sales team requires the database to have minimal downtime. A database administrator wants to migrate this database to AWS without selecting a particular instance type in anticipation of more users in the future. Which service should a solutions architect recommend?
A. Amazon Aurora MySQL
B. Amazon Aurora Serverless for MySQL
C. Amazon Redshift Spectrum
D. Amazon RDS for MySQL
upvoted 3 times
Miladsh
3 years, 7 months ago
B
upvoted 7 times
...
...
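
For the "without selecting a particular instance type" question above (B), Aurora Serverless is configured with capacity bounds rather than an instance class, and can pause when the infrequently used database is idle. A hedged boto3 sketch in the Serverless v1 style; identifiers and credentials are placeholders.

    # An Aurora Serverless (v1) MySQL-compatible cluster: capacity is set in
    # ACUs instead of a DB instance class. All values are placeholders.
    import boto3

    rds = boto3.client("rds", region_name="us-east-1")

    rds.create_db_cluster(
        DBClusterIdentifier="sales-db",
        Engine="aurora-mysql",
        EngineMode="serverless",
        MasterUsername="admin",
        MasterUserPassword="change-me-please",   # placeholder
        ScalingConfiguration={
            "MinCapacity": 1,
            "MaxCapacity": 16,
            "AutoPause": True,                   # pause during idle periods
            "SecondsUntilAutoPause": 1800,
        },
    )
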