Exam SY0-501 topic 1 question 257 discussion

Actual exam question from CompTIA's SY0-501
Question #: 257
Topic #: 1

The data backup window has expanded into the morning hours and has begun to affect production users. The main bottleneck in the process is the time it takes to replicate the backups to separate servers at the offsite data center.
Which of the following uses of deduplication could be implemented to reduce the backup window?

  • A. Implement deduplication at the network level between the two locations
  • B. Implement deduplication on the storage array to reduce the amount of drive space needed
  • C. Implement deduplication on the server storage to reduce the data backed up
  • D. Implement deduplication on both the local and remote servers
Suggested Answer: B

Comments

Ales
Highly Voted 5 years, 7 months ago
Correct answer: B. Implement deduplication on the storage array to reduce the amount of drive space needed. Data deduplication -- often called intelligent compression or single-instance storage -- is a process that eliminates redundant copies of data and reduces storage overhead. Data deduplication techniques ensure that only one unique instance of data is retained on storage media, such as disk, flash or tape. Storage arrays are simply powerful computers which have large amounts of storage connected to them. Storage arrays are configured in such a way that they can present storage to multiple servers, typically over a dedicated network.
upvoted 24 times
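To make the single-instance-storage idea in this comment concrete, here is a minimal, hypothetical sketch of content-addressed block storage: blocks are keyed by their hash, and a block that is already present is never stored a second time. The class and constant names (BlockStore, BLOCK_SIZE) are illustrative only, not from any particular product.

```python
import hashlib

BLOCK_SIZE = 4096  # illustrative fixed-size chunking


class BlockStore:
    """Toy content-addressed store: each unique block is kept exactly once."""

    def __init__(self):
        self.blocks = {}  # sha256 hex digest -> block bytes

    def write(self, data: bytes) -> list:
        """Store data; return the list of block digests that describe it."""
        recipe = []
        for i in range(0, len(data), BLOCK_SIZE):
            block = data[i:i + BLOCK_SIZE]
            digest = hashlib.sha256(block).hexdigest()
            # Deduplication: identical blocks hash to the same digest,
            # so only the first occurrence consumes space.
            self.blocks.setdefault(digest, block)
            recipe.append(digest)
        return recipe

    def read(self, recipe) -> bytes:
        """Reassemble the original data from its list of digests."""
        return b"".join(self.blocks[d] for d in recipe)


store = BlockStore()
payload = b"A" * 8192 + b"B" * 4096 + b"A" * 4096  # repeated content
recipe = store.write(payload)
print(len(recipe), "logical blocks,", len(store.blocks), "unique blocks stored")
assert store.read(recipe) == payload
```

Answer B is essentially this picture applied to the storage array that holds the backups.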
Basem
Highly Voted 5 years, 9 months ago
Does anyone have any clue about this question? I have no idea what it is asking for.
upvoted 13 times
who__cares123456789___
4 years, 4 months ago
The main bottleneck in the process is (the time it takes to replicate the backups to separate servers) at the offsite data center... Now remove the excess reasoning included in the statement and what are we left with? "The main bottleneck in the process is at the offsite data center." B, final answer!!
upvoted 1 times
missy102
Most Recent 4 years, 6 months ago
Deduplication is the process of removing duplicate entries. As an example, imagine 10 users receive the same email and choose to save it. An email server using deduplication processing will keep only one copy of this email, but make it accessible to all 10 users. (From Darril Gibson, Get Certified Get Ahead.)
upvoted 2 times
missy102
4 years, 6 months ago
Hence, B is the answer.
upvoted 2 times
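The email example quoted above can be sketched the same way. This is a toy illustration only (the MessageStore name and API are hypothetical, not from any real mail server): ten mailboxes end up holding pointers to a single stored copy of the message.

```python
import hashlib
from collections import defaultdict


class MessageStore:
    """Toy single-instance store for an email server."""

    def __init__(self):
        self.messages = {}                   # digest -> message body, stored once
        self.mailboxes = defaultdict(list)   # user -> list of digests (pointers)

    def save_for_user(self, user, body: bytes):
        digest = hashlib.sha256(body).hexdigest()
        self.messages.setdefault(digest, body)  # the body is stored only once
        self.mailboxes[user].append(digest)     # every other save is just a pointer


store = MessageStore()
newsletter = b"Quarterly security bulletin ..." * 100
for i in range(10):
    store.save_for_user(f"user{i}", newsletter)

print(len(store.mailboxes), "mailboxes,", len(store.messages), "stored copy of the message")
```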
sunsun
4 years, 7 months ago
"replicate the backups to separate severs" mean data will replicate at server level, not at storage level, so the correct must be C
upvoted 1 times
CSSJ
4 years, 7 months ago
Remember the ultimate destination, which is the data store; the aim there is to reduce storage space.
upvoted 1 times
Not_My_Name
4 years, 8 months ago
Is "B" talking about deduplicating the local storage array (SAN) or at the remote data center? If it's a local SAN, I can fully support the answer being 'B'. If it's a remote SAN, then the answer has to be 'C'.
upvoted 1 times
kentasmith
4 years, 9 months ago
They are not worried about saving space but about cutting the backup time across the wire. Answer is C.
upvoted 2 times
Dante_Dan
4 years, 9 months ago
The amount of drive space needed is not the issue here. The question states that the main problem is the time it takes to transfer files between sites. So if we apply deduplication at the network level (using WAN accelerator technologies), the transfer could go twice or even three times as fast.
upvoted 1 times
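To see why network-level deduplication (option A) attacks the stated bottleneck, here is a rough simulation under a simplifying assumption: the sender tracks which block hashes the remote site already holds and only ships blocks it has not sent before. Real WAN optimizers and replication engines are far more sophisticated; the replicate function and the sample data are purely illustrative.

```python
import hashlib

BLOCK_SIZE = 4096


def replicate(data: bytes, remote_hashes: set):
    """Simulate dedup-aware replication: only ship blocks the remote site lacks.

    Returns (blocks_total, blocks_actually_sent).
    """
    total = sent = 0
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        total += 1
        if digest not in remote_hashes:
            remote_hashes.add(digest)  # once sent, the remote site has this block
            sent += 1                  # only this block crosses the WAN
    return total, sent


remote = set()  # block hashes already present at the offsite data center
day1 = b"A" * BLOCK_SIZE * 3 + b"monday's new records"
day2 = b"A" * BLOCK_SIZE * 3 + b"tuesday's new records"
print("day 1:", replicate(day1, remote))  # (4, 2): duplicate blocks within the backup are suppressed
print("day 2:", replicate(day2, remote))  # (4, 1): only the changed tail block is sent
```

Fewer blocks on the wire directly shortens the replication step that is blowing out the backup window.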
kdce
4 years, 11 months ago
B. Implement deduplication on the storage array - reduce data size, efficient compression
upvoted 2 times
CYBRSEC20
5 years ago
On further research I found that there are two distinct methods of deduplication used for backup: target-based and source-based. Target-based deduplication employs a disk storage device as the data repository or target. The data is driven to the target using standard backup software. Once it reaches the device, the deduplication is processed as it enters the target (in-line processing), or it is received by the device in its raw data state and is processed after the entire backup job has arrived (post-process). In this context, I believe that C is the best approach.
upvoted 2 times
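As a rough way to see the difference the comment above is drawing, the sketch below contrasts source-side deduplication (dedupe before anything is transmitted) with target-side deduplication (transmit everything, dedupe on arrival). It only counts bytes that would cross the wire; the function names and block size are assumptions for illustration, not the behaviour of any specific backup product.

```python
import hashlib

BLOCK_SIZE = 4096


def chunks(data: bytes):
    for i in range(0, len(data), BLOCK_SIZE):
        yield data[i:i + BLOCK_SIZE]


def source_side_wire_bytes(data: bytes) -> int:
    """Dedupe at the source: only unique blocks ever leave the server."""
    seen, wire = set(), 0
    for block in chunks(data):
        digest = hashlib.sha256(block).hexdigest()
        if digest not in seen:
            seen.add(digest)
            wire += len(block)  # only previously unseen data is transmitted
    return wire


def target_side_wire_bytes(data: bytes) -> int:
    """Dedupe at the target: the full copy crosses the wire, then is deduped."""
    return sum(len(block) for block in chunks(data))


payload = b"B" * BLOCK_SIZE * 8 + b"unique tail"
print("source-side bytes on wire:", source_side_wire_bytes(payload))  # 4107
print("target-side bytes on wire:", target_side_wire_bytes(payload))  # 32779
```

Target-side dedup still saves disk space at the destination, but only source-side (or network-level) dedup reduces the transfer time that is driving the backup window.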
GabrieleV
5 years ago
I'd go for A because it's not specified that the backups are transferred using backup storage with block-level replication, so the only efficient way (not so efficient TBH, but better than nothing) that works for both file-level and block-level replication is on the network side. If you are deduplicating the source storage but transferring at the file level instead of the block level, you won't gain anything from deduplication on the transfer itself.
upvoted 1 times
MelvinJohn
5 years, 3 months ago
C. Not B: the question does not say that they are using storage arrays. The dedup should occur prior to replication so that the smallest amount of data will be transmitted, taking the least amount of time. It says "The main bottleneck in the process is the time it takes to replicate the backups to separate servers at the offsite data center." So reduce the size of the data to be transmitted, then transmit it.
upvoted 3 times
MelvinJohn
5 years, 2 months ago
Correction: "Implement deduplication on the server storage to reduce the data backed up." To reduce the data? No, we need to reduce the size, not the data. So answer B is correct.
upvoted 1 times
CYBRSEC20
5 years ago
You might be right after all. It is about reducing the data backed up, not just the data, so the deduplication should happen at the server before it is replicated to the remote sites.
upvoted 1 times
redondo310
5 years, 6 months ago
I used to work in storage and this one confused me, but after thinking about it a little more I understand why. My focus was on typical backup technologies such as rsync/robocopy or something similar. What they are not mentioning is block-level replication, as opposed to file-level. Most major storage vendors allow you to do block-level replication, so any deduplicated blocks would not get replicated the way they would in a file-level replication.
upvoted 4 times
Community vote distribution: A (35%), C (25%), B (20%), Other