2010-08-13 05:42 AM
I currently work for a midsized company where we have a bunch of FAS filers lying around, just sitting there doing nothing. We plan on moving forward to the next step, which is backup to disk, so we can finally get rid of the good ol' tapes! We are about to evaluate Data Domain appliances. I'm pretty sure y'all know them, but for those of you who don't, they are appliances that let you back up directly to their disks, and they dedup too.
Why would you guys choose NetApp over them? Is there any advantage to using NetApp's dedup technology?
2010-08-16 05:53 AM
As pure backup-to-disk appliances these devices are fine. However, from my point of view, the biggest difficulty with backup in general is the time it takes to copy a dataset from primary storage to secondary/backup storage.
I'm sure these devices do inline compression and deduplication just fine, but they rely on receiving full copies of the data to get those results. We use SnapVault and OSSV to a fairly good extent here. This gives us the equivalent of full backup copies while transferring only the deltas across the network, which significantly reduces the time required to perform backups, especially if your data is not changing much from one day to the next. The primary storage is also hit less hard during backups, which matters if you're a 24-hour business and don't want to impact operations.
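To put some rough numbers on the delta-vs-full argument: here is a back-of-the-envelope comparison of backup windows. All the figures (dataset size, daily change rate, effective throughput) are made-up assumptions for illustration, not measurements from any particular environment.

```python
# Illustrative comparison of full-copy vs delta-only backup windows.
# Dataset size, change rate, and throughput are assumed values.

def backup_hours(bytes_to_copy, throughput_bytes_per_s):
    """Hours needed to move bytes_to_copy at the given throughput."""
    return bytes_to_copy / throughput_bytes_per_s / 3600

dataset = 10 * 2**40       # assume a 10 TiB primary dataset
daily_change = 0.02        # assume ~2% of the data changes per day
throughput = 100 * 2**20   # assume ~100 MiB/s effective throughput

full = backup_hours(dataset, throughput)                   # full copy to a B2D target
delta = backup_hours(dataset * daily_change, throughput)   # SnapVault-style delta

print(f"full copy: {full:.1f} h, delta only: {delta:.1f} h")
```

With these assumptions the full copy needs over a day while the delta fits easily inside a nightly window, which is the whole point of only shipping changed blocks.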
Datasets keep growing in size, but there is no extra time in the day for copying the same data over and over from one storage medium to the next...
Just my thoughts.
2010-08-16 07:04 AM
We have three sites: production, standby, and archive. We use SnapMirror between prod and standby, and SnapVault for archive.
There are a couple of issues with SnapVault you have to watch out for.
There are solutions to all of them, so don't worry too much; just account for them in the design.
Hope it helps
2010-08-16 07:52 AM
My two cents as well:
Data Domain can do deduplication very efficiently, inline, etc. To me the key difference is that it is just a backup target presented to a backup application (say, Backup Exec), which takes care of application consistency, so the actual backup process interferes heavily with the application hosts. Having said this, incremental/differential backups are still possible, as not every Backup Exec job has to be defined as full.
NetApp snapshots, on the other hand, do very little to the application hosts: they just put them into hot-backup mode, and the rest happens directly on the storage array (that is not the case for OSSV, though).
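To illustrate why taking a snapshot barely touches the host, here is a toy model of pointer-based snapshots in the spirit of WAFL's redirect-on-write. This is a simplified sketch of the idea, not how ONTAP is actually implemented: a snapshot is just a copy of the block map, never a copy of the data.

```python
# Toy model of pointer-based snapshots: new writes always land in fresh
# blocks, so a snapshot only has to freeze a map of pointers.

class Volume:
    def __init__(self):
        self.blocks = {}       # block_id -> data
        self.active = {}       # offset -> block_id (live block map)
        self.snapshots = []    # each snapshot is a frozen block map
        self.next_id = 0

    def write(self, offset, data):
        # Redirect-on-write: never overwrite a block in place, so
        # blocks referenced by snapshots stay intact.
        self.blocks[self.next_id] = data
        self.active[offset] = self.next_id
        self.next_id += 1

    def snapshot(self):
        # O(metadata): copy the map of pointers, not the blocks.
        self.snapshots.append(dict(self.active))

vol = Volume()
vol.write(0, b"v1")
vol.snapshot()          # freezes a map still pointing at b"v1"
vol.write(0, b"v2")     # live data moves on; the snapshot is untouched
print(vol.blocks[vol.snapshots[0][0]], vol.blocks[vol.active[0]])
```

The snapshot step copies only metadata, which is why the host side of the operation is just "quiesce, snap, resume" and the heavy lifting stays on the array.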
2010-08-16 09:53 PM
You have to be careful regarding the number of snapshots. 255 is actually the number of times a block can be shared. So when you use snapshots and deduplication together, this effectively limits either your deduplication ratio or your number of snapshots.
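Some quick arithmetic on that trade-off. The counting model here is the reading above (each snapshot retaining a deduplicated block counts against the same 255-reference cap), which is an assumption for illustration, not a documented ONTAP formula:

```python
# Rough trade-off between dedup sharing and snapshot depth, assuming
# every snapshot holding a deduplicated block adds to its share count.

MAX_REFS = 255  # maximum times a single block can be shared

def max_snapshots(dedup_shares):
    """Snapshots you could retain if each one re-counts the shared block."""
    return MAX_REFS // dedup_shares

for shares in (1, 5, 25, 50):
    print(f"{shares} dedup shares -> at most {max_snapshots(shares)} snapshots")
```

Under this model, a block shared 50 ways leaves room for only a handful of snapshots, which is exactly the squeeze described above.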
I wonder what happens if the block-sharing limit is reached and you then try to create a snapshot. Hopefully I'll find time to test it.