
Exam AWS Certified Solutions Architect - Associate SAA-C02 topic 1 question 439 discussion

A company is using an Amazon S3 bucket to store data uploaded by different departments from multiple locations. During an AWS Well-Architected review, the financial manager notices that 10 TB of S3 Standard storage has been charged each month. However, in the AWS Management Console for Amazon S3, selecting all files and folders shows a total size of 5 TB.
What are the possible causes for this difference? (Choose two.)

  • A. Some files are stored with deduplication.
  • B. The S3 bucket has versioning enabled.
  • C. There are incomplete S3 multipart uploads.
  • D. The S3 bucket has AWS Key Management Service (AWS KMS) enabled.
  • E. The S3 bucket has Intelligent-Tiering enabled.
Suggested Answer: BC

Comments

lovelyone
Highly Voted 3 years, 8 months ago
The answer is B & C. According to the S3 documentation on multipart upload and pricing: "After you initiate a multipart upload, Amazon S3 retains all the parts until you either complete or stop the upload. Throughout its lifetime, you are billed for all storage, bandwidth, and requests for this multipart upload and its associated parts. If you stop the multipart upload, Amazon S3 deletes upload artifacts and any parts that you have uploaded, and you are no longer billed for them." For more information about pricing, see Amazon S3 pricing. https://acloud.guru/forums/aws-certified-solutions-architect-associate/discussion/-Ka9czoG6ryzzuyr6PFp/s3_versioning_costs
upvoted 26 times
francisco_guerra
3 years, 7 months ago
You say that the S3 parts will be deleted, so it is not C.
upvoted 3 times
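To make the thread's point concrete, here is a minimal sketch of the arithmetic behind B and C. All keys and sizes below are made up for illustration: noncurrent object versions and incomplete multipart-upload parts are still billed, while the console's "select all" only counts current objects.

```python
# Hypothetical bucket inventory (all keys and sizes are invented). Sizes in GB.
current_versions = {"reports/q1.csv": 2000, "data/dump.bin": 3000}    # 5 TB visible
noncurrent_versions = {"reports/q1.csv": 2000, "data/dump.bin": 2000}  # billed, hidden (B)
incomplete_mpu_parts = {"video/raw.mov": 1000}                         # billed, hidden (C)

# The console's "select all" sums only the current versions.
console_view_gb = sum(current_versions.values())

# The bill covers current versions, noncurrent versions, and uncommitted
# multipart-upload parts.
billed_gb = (sum(current_versions.values())
             + sum(noncurrent_versions.values())
             + sum(incomplete_mpu_parts.values()))

print(console_view_gb)  # 5000  -> the ~5 TB seen in the console
print(billed_gb)        # 10000 -> the ~10 TB on the monthly bill
```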
cadim
Highly Voted 3 years, 8 months ago
Evidence for C: "While it is possible to manually list and abort incomplete multipart uploads in your S3 buckets, this can quickly become a cumbersome task as the number of uploads, buckets, and accounts within your organization increase. Also note that you aren't able to view the parts of your incomplete multipart upload in the AWS Management Console." https://aws.amazon.com/blogs/aws-cost-management/discovering-and-deleting-incomplete-multipart-uploads-to-lower-amazon-s3-costs/
upvoted 22 times
pr
3 years, 8 months ago
Thanks for sharing the link. Useful Info. Based on how multi-part uploads and versioning are priced, it should be B and C.
upvoted 5 times
pr
3 years, 8 months ago
Versioning: https://acloud.guru/forums/aws-certified-solutions-architect-associate/discussion/-Ka9czoG6ryzzuyr6PFp/s3_versioning_costs Multi-part upload:https://aws.amazon.com/blogs/aws-cost-management/discovering-and-deleting-incomplete-multipart-uploads-to-lower-amazon-s3-costs/
upvoted 4 times
tigerbaer
Most Recent 2 years, 9 months ago
Selected Answer: BC
BC is correct, guys, wake up.
upvoted 3 times
Jh_k
3 years, 2 months ago
Selected Answer: BC
BC is the answer~!
upvoted 1 times
kitkwok
3 years, 3 months ago
BCBCBCBCBCBCBC
upvoted 3 times
LETSGETIT
3 years, 4 months ago
B,C is the answer.
upvoted 3 times
thamalaka
3 years, 6 months ago
Whoever is saying the answer is A: check what deduplication is first. Then you will see why it is wrong.
upvoted 2 times
yottabyte
3 years, 7 months ago
A & B are correct. Guys, 10 TB of storage charged each month means it's been in the account for a while and can't be multipart uploads. It has to be dedup and versioning.
upvoted 1 times
gargaditya
3 years, 6 months ago
What is this deduplication feature? Does it even exist for S3?
upvoted 3 times
Azure1971
3 years, 7 months ago
Answer: B & C. From the S3 FAQ: "Q: How am I charged for using Versioning? Normal Amazon S3 rates apply for every version of an object stored or requested." And: "Q: Why would I use an S3 Lifecycle policy to expire incomplete multipart uploads? The S3 Lifecycle policy that expires incomplete multipart uploads allows you to save on costs by limiting the time non-completed multipart uploads are stored. For example, if your application uploads several multipart object parts but never commits them, you will still be charged for that storage. This policy can lower your S3 storage bill by automatically removing incomplete multipart uploads and the associated storage after a predefined number of days." https://aws.amazon.com/s3/faqs/
upvoted 13 times
eBooKz
3 years, 1 month ago
That's how to make an argument. You submit your proof points with LINKs to validate them and you keep it simple and understandable. I wish I could upvote this more times than once.
upvoted 3 times
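As a sketch of the lifecycle policy the FAQ describes, the rule below follows the shape that S3 lifecycle configurations use. The rule ID and the bucket name are placeholders, and the boto3 call is shown only as a comment since it requires real credentials.

```python
# Sketch of a lifecycle rule that aborts incomplete multipart uploads after
# 7 days, in the shape expected by S3's PutBucketLifecycleConfiguration.
# "abort-incomplete-mpu" and "my-bucket" are placeholder names.
lifecycle_config = {
    "Rules": [
        {
            "ID": "abort-incomplete-mpu",
            "Status": "Enabled",
            "Filter": {"Prefix": ""},  # empty prefix: apply to the whole bucket
            "AbortIncompleteMultipartUpload": {"DaysAfterInitiation": 7},
        }
    ]
}

# With boto3 and valid credentials, this could be applied with:
# import boto3
# boto3.client("s3").put_bucket_lifecycle_configuration(
#     Bucket="my-bucket", LifecycleConfiguration=lifecycle_config)

print(lifecycle_config["Rules"][0]["AbortIncompleteMultipartUpload"])
# → {'DaysAfterInitiation': 7}
```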
yeswanthnarra
3 years, 7 months ago
B and C. When versioning is enabled, the console might not take all the versions into consideration unless you toggle the "List versions" option; it will only evaluate the current versions and ignore previous versions. With multipart uploads, unless they are aborted or completed, users will continue to pay for the incomplete parts, and they can't see the amount of data they are paying for as a result of MPUs unless they enable S3 Storage Lens.
upvoted 8 times
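A sketch of the versioning half of this, assuming a response shaped like the S3 ListObjectVersions API (real code would page through boto3's `list_object_versions`); the keys and sizes are invented. The `IsLatest` field is what separates the current versions the console sums from the noncurrent versions that are still billed:

```python
# Hypothetical, hard-coded response in the shape of S3's ListObjectVersions
# API. A real script would paginate list_object_versions with boto3.
TB = 1024**4

response = {
    "Versions": [
        {"Key": "a.bin", "Size": 3 * TB, "IsLatest": True},
        {"Key": "a.bin", "Size": 3 * TB, "IsLatest": False},  # noncurrent, billed
        {"Key": "b.bin", "Size": 2 * TB, "IsLatest": True},
        {"Key": "b.bin", "Size": 2 * TB, "IsLatest": False},  # noncurrent, billed
    ]
}

# Only current versions: what "select all" in the console reflects.
current_tb = sum(v["Size"] for v in response["Versions"] if v["IsLatest"]) // TB
# Every version: what the monthly bill reflects.
total_tb = sum(v["Size"] for v in response["Versions"]) // TB

print(current_tb, total_tb)  # 5 10
```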
Always_Wanting_Stuff
3 years, 7 months ago
Deduplication refers to a method of eliminating a dataset's redundant data. In a secure data deduplication process, a deduplication assessment tool identifies extra copies of data and deletes them, so a single instance can then be stored. Data deduplication software analyzes data to identify duplicate byte patterns. The answer cannot be A. I am going with B & C.
upvoted 5 times
LeBeano
3 years, 7 months ago
Definitely not A. Dedupe shrinks storage by ensuring repeated data is stored only once. B & C in my book!
upvoted 4 times
jealbave
3 years, 7 months ago
A & C is the correct.
upvoted 1 times
dumdumex
3 years, 7 months ago
B & C: versioning enabled and multipart uploads. That's why it is storing 2x the size.
upvoted 5 times
JasonJeon
3 years, 8 months ago
Answer is B & C.
upvoted 5 times
ExamExpert82
3 years, 8 months ago
I think A and B make more sense. In the case of multipart upload, I don't think C is correct, because the difference would not necessarily remain at a constant 5 TB; if someone double-checked, they would see it change over time.
upvoted 4 times
DahMac
3 years, 7 months ago
From the console you can only see one Region at a time. Perhaps cross-Region replication? Answer A, with B.
upvoted 1 times
swadeey
3 years, 7 months ago
Does S3 do deduplication? S3 does not expose any evidence of internal deduplication. If you were to upload 500 identical files of 1 GB each, you'd be billed for storing 500 GB.
upvoted 1 times
gargaditya
3 years, 6 months ago
Exactly, there is no such thing as S3 deduplication.
upvoted 1 times
syu31svc
3 years, 8 months ago
B and C are the answers. The other options don't make any sense.
upvoted 11 times
Community vote distribution: A (35%), C (25%), B (20%), Other