ONTAP Discussions

Using RAID-DP to pare down primary storage?

Sam_Moulton

A recent Storage Magazine article explores the use of technologies other than deduplication that can be leveraged to pare down primary storage ( http://searchstorage.techtarget.com/magazineContent/Pare-down-primary-storage ).  Larry Freeman, NetApp's own DrDedupe, had a chance to weigh in on the discussion:  "Unlike most of our competitors, we can do RAID-DP [NetApp's implementation of RAID 6] with only 5% overhead."  Are you taking advantage of RAID-DP in your environment?  Alone or in conjunction with deduplication or some other approach?  Pros / cons?
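For a rough sense of where a figure like 5% comes from: RAID-DP dedicates two parity disks to each RAID group, so the per-group parity overhead is simply 2 divided by the group size. The sketch below (plain Python, with illustrative group sizes rather than NetApp defaults; actual usable capacity also depends on spares and filesystem reserves) just tabulates that arithmetic:

# Rough sketch of RAID-DP parity overhead per RAID group.
# Two disks in each group hold parity, so overhead = 2 / group_size.
def parity_overhead(group_size, parity_disks=2):
    """Fraction of a RAID group's raw capacity consumed by parity."""
    return parity_disks / group_size

for size in (8, 14, 20, 28):
    print(f"{size}-disk RAID-DP group: {parity_overhead(size):.1%} parity overhead")

A 20-disk group works out to 10% and a 28-disk group to roughly 7%, so the quoted 5% presumably reflects specific configurations; the point of the comparison is that mirrored layouts like RAID 10 sit at 50%.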

1 ACCEPTED SOLUTION

shaunjurr

Hi,

I think raid-dp and a-sis/dedupe are, at best, tangential.  Raid-dp is a parity protection mechanism for data.  'sis' identifies duplicate blocks (although there are checksum calculations in de-duplication as well), moves pointers, and removes the duplicate blocks from the file system. Not having been part of the larger discussion you refer to, though, it's hard to tell what angle you're coming from.
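As a toy illustration of that checksum-and-pointer idea (not A-SIS/WAFL internals): block-level dedupe fingerprints each fixed-size block, and when a fingerprint repeats it keeps a pointer to the existing block instead of storing a second copy.

import hashlib

BLOCK_SIZE = 4096  # illustrative; WAFL also operates on 4 KB blocks

def dedupe(data):
    store = {}      # fingerprint -> the single kept copy of that block
    pointers = []   # one entry per logical block, referencing a kept copy
    for off in range(0, len(data), BLOCK_SIZE):
        block = data[off:off + BLOCK_SIZE]
        fp = hashlib.sha256(block).hexdigest()
        if fp in store:
            # duplicate candidate; a real implementation would also compare
            # the block contents before sharing, to guard against collisions
            assert store[fp] == block
        else:
            store[fp] = block
        pointers.append(fp)
    return store, pointers

store, pointers = dedupe(b"A" * 8192 + b"B" * 4096 + b"A" * 4096)
print(f"{len(pointers)} logical blocks stored as {len(store)} physical blocks")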

'sis' is a subset of WAFL functionality.  WAFL can do "de-dupe" just as well with raid4, and probably with raid0; you just lose some of the protection against disk/hardware failures affecting data integrity.  Comparing the compression results of other vendors, like the former Data Domain, which threw tons of CPU cores at compression, with a-sis deduplication (and compression) would be an interesting discussion. Even the total cost of the disk infrastructure for a given data set to reach a certain level of savings from de-dupe or compression (or both, where possible) would be interesting as far as "paring down" primary storage goes.
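If anyone wants to put rough numbers on that, a back-of-the-envelope comparison is easy to script: estimate how much a dataset would shrink under block dedupe versus plain compression and see which technique the data actually favours (a toy measurement only, nothing vendor-specific):

import hashlib
import zlib

BLOCK_SIZE = 4096

def dedupe_fraction(data):
    # fraction of 4 KB blocks that are unique, i.e. what dedupe would keep
    blocks = [data[i:i + BLOCK_SIZE] for i in range(0, len(data), BLOCK_SIZE)]
    return len({hashlib.sha256(b).hexdigest() for b in blocks}) / len(blocks)

def compression_fraction(data):
    # fraction of bytes left after zlib, i.e. what compression would keep
    return len(zlib.compress(data, 6)) / len(data)

sample = (b"the same 4 KB block repeated " * 150)[:BLOCK_SIZE] * 50
print(f"dedupe keeps      {dedupe_fraction(sample):.0%} of the blocks")
print(f"compression keeps {compression_fraction(sample):.0%} of the bytes")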

RAID overhead is sort of on the fringe of such discussions, though, even if it is important to get the word out that WAFL is optimized for the raid4 and raid-dp raid types.

🙂

