Exam DP-201 topic 4 question 6 discussion

Actual exam question from Microsoft's DP-201
Question #: 6
Topic #: 4

You have a line-of-business (LOB) app that reads files from and writes files to Azure Blob storage in an Azure Storage account.
You need to recommend changes to the storage account to meet the following requirements:
  • Provide the highest possible availability.
  • Minimize potential data loss.
Which three changes should you recommend? Each correct answer presents part of the solution.
NOTE: Each correct selection is worth one point.

  • A. From the app, query the LastSyncTime of the storage account.
  • B. From the storage account, enable soft deletes.
  • C. From the storage account, enable read-access geo-redundancy storage (RA-GRS).
  • D. From the app, add retry logic to the storage account interactions.
  • E. From the storage account, enable a time-based retention policy.
Suggested Answer: BCE
Soft delete protects blob data from being accidentally or erroneously modified or deleted. When soft delete is enabled for a storage account, blobs, blob versions
(preview), and snapshots in that storage account may be recovered after they are deleted, within a retention period that you specify.
Geo-redundant storage (with GRS or GZRS) replicates your data to another physical location in the secondary region to protect against regional outages.
However, that data is available to be read only if the customer or Microsoft initiates a failover from the primary to secondary region. When you enable read access to the secondary region, your data is available to be read if the primary region becomes unavailable. For read access to the secondary region, enable read-access geo-redundant storage (RA-GRS) or read-access geo-zone-redundant storage (RA-GZRS).
Reference:
https://docs.microsoft.com/en-us/azure/storage/blobs/soft-delete-overview
https://docs.microsoft.com/en-us/azure/storage/common/storage-redundancy#read-access-to-data-in-the-secondary-region
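As a sketch, the three suggested changes could be applied with the Azure CLI. The account, resource group, and container names below are placeholders, and the retention periods are illustrative:

```shell
# Option B: enable blob soft delete with a 14-day retention window
az storage blob service-properties delete-policy update \
    --account-name mystorageacct --enable true --days-retained 14

# Option C: switch the account's replication to read-access geo-redundant storage
az storage account update \
    --name mystorageacct --resource-group my-rg --sku Standard_RAGRS

# Option E: create a time-based retention (immutability) policy on a container
az storage container immutability-policy create \
    --account-name mystorageacct --resource-group my-rg \
    --container-name lob-data --period 30
```

Note that the immutability policy is set per container, not on the account as a whole.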

Comments

m83x
Highly Voted 4 years, 10 months ago
The question says "You need to recommend *changes to the storage account* to meet the following requirements:", not changes to the app, so it is BCE.
upvoted 27 times
AJMorgan591
4 years, 8 months ago
I don't think so. That would be too easy.
upvoted 1 times
saponazureguy
4 years, 5 months ago
BCD should be the correct answer; no disagreement with other comments on B and C. I would go with D because during failover some data transactions can potentially be lost, hence the best practice is to have retry logic. This addresses one of the requirements as well: "Minimize potential data loss." E to me doesn't make sense, since it has more to do with data lifecycle management than with minimizing potential data loss.
upvoted 5 times
tes
3 years, 11 months ago
Yes, for that reason make changes to the storage account (not to the app), because that is what the question asks about.
upvoted 2 times
M0e
Highly Voted 4 years, 7 months ago
Am I the only one who says the correct answer is B, C, D?
upvoted 17 times
saponazureguy
4 years, 5 months ago
I agree, BCD should be the correct answer; no disagreement with other comments on B and C. I would go with D because, as M0e pointed out, during failover some requests can potentially be lost, hence the best practice is to have retry logic. This addresses one of the requirements as well: "Minimize potential data loss." E to me doesn't make sense, since it has more to do with data lifecycle management than with minimizing potential data loss.
upvoted 2 times
lgtiza
Most Recent 3 years, 9 months ago
BCE is correct, not BCD. At first I also thought option E didn't make any sense, but this is not the "Lifecycle Management" feature (which is intended to move blobs to cool or archive tiers and then delete them). This time-based retention policy is a policy set at the container level to do exactly the opposite: prevent deletes for a specified retention period (and you can even lock those policies to comply with legal regulations). Definitely BCE.
upvoted 1 times
H_S
4 years, 2 months ago
Answer BCE is correct; I agree.
upvoted 2 times
syu31svc
4 years, 6 months ago
Answer is correct: RA-GRS for high availability; soft delete and time-based retention for data-loss protection.
upvoted 8 times
brcdbrcd
4 years, 6 months ago
Time-based retention policy support: users can set policies to store data for a specified interval. When a time-based retention policy is set, blobs can be created and read, but not modified or deleted. After the retention period has expired, blobs can be deleted but not overwritten.
https://docs.microsoft.com/en-us/azure/storage/blobs/storage-blob-immutable-storage
B, C, E
upvoted 7 times
rocksonroll
4 years, 7 months ago
I think the answers are right because of "Minimize potential data loss."
upvoted 2 times
AJMorgan591
4 years, 8 months ago
This question could be interpreted another way: "Minimize potential data loss if the primary region has an outage." The question already mentions high availability, and there's no mention of file deletion, so I interpret this question as being focused entirely on high availability. In which case the answer would be: A, C, D.

Enable RA-GRS. If the primary goes down, check the LastSyncTime of the storage account; if data was written to the primary after the LastSyncTime, it will have been lost. Therefore, add retry logic to the app for storage account interactions to ensure such data is eventually written successfully.
https://docs.microsoft.com/en-us/azure/storage/common/last-sync-time-get

I know the question says "changes to the storage account", and there are indeed three answers that involve changes to the storage account, but that's effectively giving the solution away, and common sense suggests Microsoft aren't going to make it that easy for you :)
upvoted 2 times
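For reference, the LastSyncTime check described above can be retrieved with the Azure CLI (a sketch; account and resource-group names are placeholders, and the account must use a geo-redundant SKU such as RA-GRS):

```shell
# Show the secondary region's last sync time for a geo-redundant account
# (option A); writes made after this timestamp may be lost on failover.
az storage account show \
    --name mystorageacct --resource-group my-rg \
    --expand geoReplicationStats \
    --query geoReplicationStats.lastSyncTime --output tsv
```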
M0e
4 years, 7 months ago
Soft delete is aimed at "minimizing potential data loss". A Last Sync Time query is not required, since sending a request to an unavailable zone or region would fail anyway. The retry mechanism is to hold the application's write requests until Azure redirects them to the paired region when it becomes writable, as a result of automatic failover; redirection of read requests happens instantaneously in case of an outage. My answers would be B, C, D.
upvoted 9 times
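The retry logic discussed in this thread (option D) is generic rather than storage-specific. A minimal exponential-backoff sketch in Python follows; the helper name and parameters are illustrative, not an Azure SDK API:

```python
import time

def with_retries(operation, attempts=4, base_delay=0.5,
                 transient=(ConnectionError, TimeoutError)):
    """Call operation(), retrying transient failures with exponential backoff.

    During a storage failover, transient errors are expected; retrying lets
    in-flight writes succeed once the endpoint becomes available again.
    """
    for attempt in range(attempts):
        try:
            return operation()
        except transient:
            if attempt == attempts - 1:
                raise  # retries exhausted: surface the error to the caller
            time.sleep(base_delay * (2 ** attempt))
```

In practice the Azure Storage client libraries ship with configurable retry policies, so a hand-rolled loop like this is mainly useful to illustrate the pattern.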
anurag1p
4 years, 6 months ago
Agree with you; the context of the question is reading from and writing to the storage account through the app. There is no mention of deletion. Considering that, A, C, D looks logical.
upvoted 2 times
GeoffWright
4 years, 10 months ago
ABC - https://docs.microsoft.com/en-us/azure/storage/common/storage-redundancy#read-access-to-data-in-the-secondary-region
upvoted 3 times
Community vote distribution: A (35%), C (25%), B (20%), Other
