2008-10-07 07:41 AM
Does NetApp provide an automated mechanism for SnapVault to handle a SnapVault volume that will exceed the 16TB volume limit?
It seems like a very manual process to manage when 16TB is reached.
Is there some script or product from NetApp that allows backups to exceed 16TB without intervention?
2008-10-07 03:22 PM
It is a great question and one I would also like to know the answer to, as I have just purchased a new filer with a NearStore license to be used as a SnapVault target. I know DFM will alert when the aggregate is full...
2008-11-04 04:53 AM
I have not received a response to this question. Am I to suspect that NetApp does not have a solution to this issue?
The problem is very real for us when trying to eliminate tape backup and 16TB volumes are inadequate archive sizes.
2008-11-05 03:03 PM
Hello First Last!
Sorry you've not had a reply to date. The 16TB limit is today a hard one, so most folk would initialise to a new volume in a new aggregate (since the aggregate will also be full: there is no easy "clone" option to help you here). You might be able to involve our NGS folks to help craft something more "automated" for you (via some scripting, perhaps): speak to your SR/SE NetApp contacts.
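As a rough illustration of what such a script might check (purely a hypothetical sketch: the threshold, names, and rollover policy below are my own assumptions, not NetApp tooling):

```python
# Hypothetical sketch: flag a SnapVault destination volume that is
# approaching the 16TB volume limit, so a new volume in a new aggregate
# can be prepared and initialised before the limit is actually hit.
TB = 1024 ** 4
VOLUME_LIMIT_TB = 16          # the hard volume-size limit discussed above
ROLLOVER_THRESHOLD = 0.90     # assumed policy: act at 90% of the limit

def needs_new_volume(used_bytes: int) -> bool:
    """Return True once the destination volume crosses the rollover threshold."""
    return used_bytes >= VOLUME_LIMIT_TB * TB * ROLLOVER_THRESHOLD

# A 12TB destination is still fine; a 15TB one has crossed the 14.4TB mark
# and should trigger whatever provisioning/alerting step you script around it.
```

How `used_bytes` is obtained (DFM alerts, SNMP, or CLI output parsing) is left open here; the point is only that the decision itself is trivial to automate once you have the usage figure.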
Oftentimes there is a desire to manage more than 255 snapshots before that 16TB limit is reached, and that can be dealt with through a couple of techniques: see "Retention of more than 255 SnapVault Snapshot copies" at http://now.netapp.com/NOW/knowledge/docs/ontap/rel7251/html/ontap/onlinebk/5snapv24.htm for example.
One part of sizing is to size realistically. This entails looking at your data and estimating change rates and retention times. Eliminating tape backups is a desirable goal for many, but you do need to know how you are going to handle it over the retention periods you need. I am always very aware that if I have 1TB of primary data and want to keep weekly full backups for a year (for example), then I am already staring down the barrel of 52TB of traditional tape: not a great ratio!
With SnapVault I find that many types of data result in weekly change rates of only around 2% of the data protected, so those 52 "copies" suddenly only take 20GB each, and hence the full year might only need just over 2TB. Even a relatively high change rate of 5% per week would still only need 2.6TB of increments for that 1TB of live data: I'm already managing much less than I am with tape. With deduplication, I will probably find this drops even further: with ONTAP 7.3 onwards, you can send SnapVault transfers across and have them automagically deduplicated with FAS deduplication, and subsequent transfers can be reduced too. That can help you to effectively get more out of the 16TB limit today, giving you more secondary space for your backups, although clearly YMMV when it comes to identifying precisely what savings YOUR data might yield.
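The arithmetic above can be sketched as a toy calculation (function names and structure are my own, not a NetApp sizing tool):

```python
# Back-of-envelope SnapVault sizing from the figures above:
# 1TB primary, weekly backups retained for a year (52 weeks).

def year_of_increments_tb(primary_tb: float, weekly_change_rate: float,
                          weeks: int = 52) -> float:
    """Total size of a year's weekly incremental transfers, in TB."""
    return weeks * primary_tb * weekly_change_rate

def secondary_space_tb(primary_tb: float, weekly_change_rate: float,
                       weeks: int = 52) -> float:
    """Baseline copy of the primary plus a year of incrementals."""
    return primary_tb + year_of_increments_tb(primary_tb, weekly_change_rate, weeks)

# 2% weekly change on 1TB: 52 x 20GB of increments, just over 2TB with the baseline.
# 5% weekly change on 1TB: 52 x 50GB = 2.6TB of increments.
```

This ignores deduplication savings entirely, so it is a conservative upper bound on the secondary space needed.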
As an aside, you can consider using FlexClone to help work those space savings out without actually "touching" live data: take a volume, clone it, and run FAS deduplication against that clone. The clone takes no physical space when created, and although clearly the live data volume blocks will remain untouched, running dedupe against your clone will tell you exactly what space savings could be gained through deduplication. Hope this helps!
2008-11-05 05:44 PM
Thank you for elaborating.
I was hoping that Protection Manager would have automated some continuous SnapVault backup capability in combination with FlexClone, in which the destination volume can be configured to be cloned, for example, every month or week, or when snapshots would exceed 255. That way, monthly, weekly, yearly(?) etc. copies of the destination volume would be available via clones without consuming duplicate capacity. I will check whether NetApp has developed scripts to automate such a scheme.
2008-11-06 11:53 AM
So Protection Manager *will* assign a new secondary resource if an aggregate fills up and continue replications; the difficulty is that we cannot easily "clone" to avoid a re-initialisation. But if you are asking whether PM can handle this automatically, the answer is yes.