
Exam CISSP topic 1 question 10 discussion

Actual exam question from ISC's CISSP
Question #: 10
Topic #: 1

An organization has been collecting a large amount of redundant and unusable data and filling up the storage area network (SAN). Management has requested the identification of a solution that will address ongoing storage problems. Which is the BEST technical solution?

  • A. Compression
  • B. Caching
  • C. Replication
  • D. Deduplication
Suggested Answer: D

Comments

Tanzy360
Highly Voted 2 years, 7 months ago
Selected Answer: D
D is the only answer choice that makes sense given the excess data.
upvoted 10 times
franbarpro
Highly Voted 2 years, 7 months ago
Selected Answer: D
"D" it is. Data deduplication is a process that eliminates excessive copies of data and significantly decreases storage capacity requirements. Deduplication can be run as an inline process as the data is being written into the storage system and/or as a background process to eliminate duplicates after the data is written to disk. https://www.netapp.com/data-management/what-is-data-deduplication/#:~:text=Data%20deduplication%20is%20a%20process,data%20is%20written%20to%20disk.
upvoted 7 times
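To make the inline-versus-background distinction above concrete, here is a minimal sketch of content-hash block deduplication in Python. The 4 KiB block size, SHA-256 digests, and in-memory dictionaries are illustrative assumptions, not any vendor's actual implementation.

import hashlib

BLOCK_SIZE = 4096  # bytes; real systems tune this per workload

block_store = {}   # digest -> unique block contents
volume = []        # ordered digests acting as pointers into block_store

def write(data: bytes) -> None:
    """Split incoming data into blocks and store each unique block once."""
    for i in range(0, len(data), BLOCK_SIZE):
        block = data[i:i + BLOCK_SIZE]
        digest = hashlib.sha256(block).hexdigest()
        if digest not in block_store:   # unseen content: store the block
            block_store[digest] = block
        volume.append(digest)           # seen before: only a pointer is kept

def read() -> bytes:
    """Reassemble the logical volume by following the pointers."""
    return b"".join(block_store[d] for d in volume)

# Writing the same ~1 MB payload three times stores its blocks only once.
payload = b"redundant data " * 70000
for _ in range(3):
    write(payload)

logical = sum(len(block_store[d]) for d in volume)
physical = sum(len(b) for b in block_store.values())
print(f"logical: {logical:,} bytes, physical: {physical:,} bytes, "
      f"ratio: {logical / physical:.1f}x")
assert read() == payload * 3

Run inline, the hash lookup happens before a block reaches disk; run as a background process, the same logic sweeps blocks that are already written.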
Da_xpert
Most Recent 1 week, 4 days ago
Selected Answer: D
Answer is D: The important part of the question here is "Redundant and unusable data".
upvoted 1 time
kurili
2 weeks, 2 days ago
Selected Answer: D
The problem described is "a large amount of redundant and unusable data" filling up storage. Deduplication is a storage optimization technique that eliminates redundant copies of data by storing only unique instances of data blocks or files and replacing duplicates with pointers to the original. This directly addresses the storage inefficiency caused by redundant data, reducing overall storage consumption on the SAN.
Why the others aren't ideal:
  • A. Compression: reduces the size of data but doesn't remove redundancy; it works on individual files or data streams, not across multiple copies of the same file.
  • B. Caching: temporarily stores frequently accessed data for performance, not for reducing storage footprint.
  • C. Replication: actually increases storage use by copying data to other locations for redundancy and availability, the opposite of what's needed here.
upvoted 1 time
amitsir
1 month, 1 week ago
Selected Answer: D
D is more accurate.
upvoted 1 time
Skynet08
3 months, 3 weeks ago
Selected Answer: D
The question mentions "redundant," which indicates the answer is D.
upvoted 1 time
Rider2053
4 months, 3 weeks ago
Selected Answer: D
The data deduplication process systematically eliminates redundant copies of data and files, which can help reduce storage costs and improve version control. In an era when every device generates data and entire organizations share files, data deduplication is a vital part of IT operations.
upvoted 1 time
Moose01
5 months ago
Selected Answer: D
I need to slow down and read it. It is de-duplication, not duplication. Jesus, what a trap.
upvoted 4 times
Eltooth
7 months, 1 week ago
Selected Answer: D
D is the correct answer. Redundant can mean multiple (think redundant systems), so if you have multiple versions of the data, dedup would reduce those copies to one main copy plus stubs. Yes, there would be a hit on CPU performance the first time dedup runs, but long term it keeps saving space as new (redundant) data is added. Compression would reference each redundant bit/byte and keep pointers to each, filling up the master index record and adding processing overhead each time data was added, searched for, or retrieved.
upvoted 2 times
Ezebuike
8 months, 2 weeks ago
Assume you have a very large file on your desktop that is occupying a lot of storage space: you can zip up the folder and the size of the file will shrink. What does that mean? You are compressing the file. The same logic can be applied to this question. Thus, the correct answer is A, Compression.
upvoted 2 times
3NO5
12 months ago
D is the best answer. Deduplication is the best solution for managing excess data, even if it's not just duplicates. It helps remove redundant and unneeded data efficiently.
upvoted 1 time
dm808
1 year, 1 month ago
Selected Answer: A
Deduplication doesn't address unusable data, so it has to be compression, A.
upvoted 1 time
dm808
1 year, 1 month ago
and "redundant" can also mean "unnecessary" as well as "duplicate"
upvoted 3 times
Kyanka
1 year, 1 month ago
Selected Answer: D
D is pretty much the textbook answer for this question.
upvoted 1 time
andyprior
1 year, 2 months ago
Selected Answer: A
Deduplication is effective in organizations that have a lot of redundant data, such as backup systems that keep several versions of the same file. Compression is effective in decreasing the size of unique files, such as images, videos, and databases.
upvoted 1 time
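A rough way to see the distinction in the comment above is to compare the two techniques on ten identical copies of an incompressible file. The sizes, zlib, and the one-stored-copy dedup model below are illustrative assumptions, not a benchmark of real SAN features.

import os
import zlib

unique = os.urandom(1_000_000)   # unique, incompressible data
copies = [unique] * 10           # ten redundant copies on the SAN

# Compression works within each copy: random data barely shrinks, and
# all ten compressed copies still have to be stored separately.
compressed_total = sum(len(zlib.compress(c)) for c in copies)

# Deduplication works across copies: one instance is stored and the
# other nine become pointers (whose size is negligible here).
dedup_total = len(unique)

print(f"raw:        {10 * len(unique):>10,} bytes")
print(f"compressed: {compressed_total:>10,} bytes")
print(f"deduped:    {dedup_total:>10,} bytes")

Swap os.urandom for a single highly compressible unique file and compression wins instead, which is exactly the trade-off described above.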
DragonHunter40
1 year, 2 months ago
I say the answer is A. The question isn't talking about getting rid of the data, and 9 times out of 10 no one is going to go through large amounts of data to see what's a duplicate. Not to mention, you wouldn't know what to keep or delete. A, "Compression," is the simplest answer.
upvoted 1 time
Bright07
1 year, 2 months ago
D is the answer. Although answers A and D look similar, here is a simple explanation of both. A storage area network (SAN) is a computer network that provides access to consolidated, block-level data storage. Deduplication commonly occurs at the block level, whereas compression generally occurs at the file level. Because the question involves block-level SAN storage, the answer is Deduplication.
upvoted 3 times
Community vote distribution: A (35%), C (25%), B (20%), Other