SC3.3.0 Wish List


This thread will be used to create a Wish List for SC3.3.0



It would be nice to have the SnapMirror/SnapVault update run depending on the policy.

So the parameters NTAP_SNAPMIRROR_UPDATE and NTAP_SNAPVAULT_UPDATE should be extended to allow definitions based on policies, for example:

NTAP_SNAPVAULT_UPDATE=Y (SnapVault update is run on all policies)
NTAP_SNAPVAULT_UPDATE=N (SnapVault update is never run on any policy)
NTAP_SNAPVAULT_UPDATE=daily (SnapVault update is run on the "daily" policy only, not on other policies)
NTAP_SNAPVAULT_UPDATE=daily,weekly (SnapVault update is run on the "daily" and "weekly" policies only, not on other policies)
etc ...

The same applies to SnapMirror.

For example, my customer would like to run a SnapVault update only on the daily and weekly policies, but not on hourly. Today this requires two different configs; with the new approach everything could be defined in one config.
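The policy-based values above could be evaluated with a small helper. A minimal sketch in Python (hypothetical function name, not actual SnapCreator code):

```python
def should_update(param_value: str, policy: str) -> bool:
    """Decide whether a SnapVault/SnapMirror update should run for a policy.

    param_value is the raw config value, e.g. "Y", "N", or "daily,weekly".
    """
    value = param_value.strip()
    if value.upper() == "Y":
        return True   # update on all policies
    if value.upper() == "N":
        return False  # never update
    # Otherwise treat the value as a comma-separated list of policy names.
    allowed = {p.strip() for p in value.split(",")}
    return policy in allowed

# The customer's case: update on daily and weekly, but not hourly.
# should_update("daily,weekly", "daily")  -> True
# should_update("daily,weekly", "hourly") -> False
```

Because "Y"/"N" and the policy list share one parameter, this stays backward compatible with existing configs.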


Hi Walter,

Great feedback...we'll add this to 3.3.



Yo Keith,

Thanks for your work on this (and to your colleagues too).  I'd like to request the ability to execute SnapVault restores for qtrees via SnapCreator, as one of the "--action restore" options.

Although we can obviously copy individual files out of the .snapshot directory, being able to manage the baseline and incremental qtree restores via SnapCreator would be helpful.

A couple reasons this would help us:

-Accessing the SnapVault target qtree via NAS protocols may be challenging due to network/security restrictions.

-Incorporating SnapVault qtree restores into SnapCreator extends the SnapCreator functionality umbrella, which is helpful if we have to operationalize SnapCreator for admins who don't have a full understanding of the underlying NetApp technology.



Hi Dave,

This is a great idea but tricky; let's get into the details. So if I understand correctly, you want to be able to select individual qtrees within a secondary snapshot and restore only those back to primary?

We are definitely going to add the ability to do a SnapVault restore, which will bring the secondary snapshot back to primary, but from there you would need to grab the individual qtrees you need or do a snapshot restore. Would that be enough? The theory is that you would then have access to the snapshot since it is on primary again.

If you do need qtree-level restores, do you have any ideas on how it should work? I am thinking we would need to use OS-based commands, for example (unfortunately we can't manipulate qtrees from Data ONTAP):

1. snapvault restore (snapshot now recovered on primary)

2. ls -l /mnt/.snapshot/<snapshot>

3. Choose qtree to recover?

4. cp /mnt/.snapshot/<snapshot>/<qtree> /mnt/<qtree>

Is this what you are after? This is just an example; I know the cp command needs some help with proper arguments ;D

Let us know -- we can certainly improve this greatly for 3.3 with your help 🙂



Hey Keith,

I'm specifically talking about step 1 that you mention.  The way you explain it is a little confusing -- based on what I read here:

and what I tested yesterday, a SnapVault restore does restore the whole qtree -- that's the only kind of SnapVault restore that exists (if you are vaulting qtrees).  I may be missing something...

So, let's say someone wanted to use SnapCreator to do a qtree restore.  They would first need to know:

-The primary qtree to restore and the secondary qtree it is being restored from.  These would need to be the absolute paths on both the primary and secondary arrays.

-The snapshot on the secondary array to restore from.  The available snapshots can easily be seen with the --action snaplist option.

As part of the SnapCreator --action restore process, it could just have a section for SnapVault qtree restores where it queries the user for the snapshot and qtree.  SnapCreator could list the volumes it knows to act on based on the config file, and then the qtrees involved in a SnapVault relationship.

I can envision the tool then asking the user to provide the primary qtree restore target (with an alternate location as an option), the secondary qtree source, and the name of the snapshot on the secondary.  SnapCreator could then attempt an incremental restore and, if that fails, initiate a baseline restore.

Then the restore runs.  At this point it might be helpful if there were a 'snapvault status' option for SnapCreator so the restore progress could be monitored (nice to have, but not mandatory).  Once it completes, the user would then have to restart the SnapVault relationship outside of SnapCreator, much like the requirement to initiate the relationship before SnapCreator took over the SnapVault schedule initially.

Does this make sense?  Thanks!


Hi Dave,

Thanks for the details...yeah, I was on another planet, so disregard my comments, but I wanted you to explain how you thought the process *should* work. I will talk this over with the team, but just so I have the steps right:

1. Prompt for a secondary restore

2. Select the secondary snapshot

3. Select the secondary qtree (restore source)

4. Select the primary qtree (restore destination)

5. Start the SnapVault restore

6. Provide some status
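The incremental-then-baseline fallback described above could be sketched like this (a hypothetical helper; the actual filer commands are abstracted behind callables, since the exact snapvault restore invocation depends on the ONTAP version):

```python
def restore_qtree(run_incremental, run_baseline):
    """Attempt an incremental SnapVault restore first; if it fails,
    fall back to a baseline restore.

    run_incremental and run_baseline are callables returning True on
    success; in a real tool they would shell out to the filer to run
    the appropriate snapvault restore command.
    """
    if run_incremental():
        return "incremental"
    if run_baseline():
        return "baseline"
    return "failed"
```

Keeping the two restore modes behind callables makes the fallback logic testable without access to a filer.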




This is perfect.  It doesn't necessarily have to work exactly this way, but I just thought it might make sense like this.

If you guys decide to add this functionality I'm sure you'll tweak it out appropriately.




Hi Dave,

How about this? Does it meet your requirement? (BTW, this isn't a mock-up but actual code.) We felt it was absolutely critical to get SnapVault restore into 3.2...I am hopeful that will happen; I need to do some more debugging and testing, but it looks good.

End Customer: My data is gone! I am missing the file Simulator-7.2.3-tarfile-v21.gz. I hate you, storage admin!
ha-filer1*> ls /vol/mig_test3/qtree1
Storage Admin: Well, we don't have the data on primary, but how about we restore it from secondary storage?
db2test:~/sandbox/snapcreator # ./ --profile test --action restore --policy daily
### You have chosen to do a snap restore on one or more volumes for the Config: test Policy: daily ###
Are you sure you want to continue (y|n)? y
01. Primary View
02. Secondary View
Select a View (enter a number or "q" to quit): 02
### Volume Menu for bck-filer2 ###
01. mig_test
Select a volume for snapshot restore (enter a number, "n" for next filer, or "q" to quit): 01
### Snapshot Menu for bck-filer2:mig_test ###
01. test-SV_daily_20100330181221 (Mar 30 2010 16:33:36)
02. test-SV_daily_20100328203141 (Mar 28 2010 18:46:34)
03. test-SV_daily_20100328194827 (Mar 28 2010 18:03:19)
04. test-daily_20090226133016 (Feb 25 2009 18:37:14)
05. test-daily_20090226105901 (Feb 25 2009 16:06:51)
06. test-daily_20090226091249 (Feb 25 2009 14:20:41)
07. test-daily_20090226090831 (Feb 25 2009 14:16:34)
08. test-daily_20090226090605 (Feb 25 2009 14:14:04)
09. test-daily_20090226090130 (Feb 25 2009 14:09:27)
10. test-daily_20090226085725 (Feb 25 2009 14:05:18)
11. test-daily_20090226085607 (Feb 25 2009 14:02:58)
12. test-daily_20090226085514 (Feb 25 2009 14:02:05)
13. test-daily_20090221054111 (Feb 20 2009 10:40:17)
14. test-daily_20090221053137 (Feb 20 2009 10:30:46)
15. test-daily_20090221052928 (Feb 20 2009 10:28:35)
16. test-daily_20090221045707 (Feb 20 2009 09:56:14)
17. test-daily_20090220101104 (Feb 19 2009 15:07:06)
18. test-daily_20090211123508 (Feb 10 2009 16:48:45)
19. test-daily_20090211092153 (Feb 10 2009 13:35:28)
20. test-daily_20090210081045 (Feb 09 2009 12:15:27)
21. test-daily_20090210080540 (Feb 09 2009 12:10:22)
22. test-daily_20090210073456 (Feb 09 2009 11:39:46)
23. test-daily_20090210073242 (Feb 09 2009 11:37:39)
24. test-daily_20090210072908 (Feb 09 2009 11:33:50)
25. test-daily_20090205102105 (Feb 04 2009 13:37:48)
26. test-daily_20090205084926 (Feb 04 2009 12:06:24)
27. test-daily_20090205084652 (Feb 04 2009 12:03:53)
28. test-daily_20090205084131 (Feb 04 2009 11:57:28)
29. test-daily_20090205083735 (Feb 04 2009 11:53:28)
30. test-daily_20090205083641 (Feb 04 2009 11:52:23)
31. test-daily_20090205083528 (Feb 04 2009 11:51:14)
32. test-daily_20090205083448 (Feb 04 2009 11:50:30)
33. test-daily_20090205083149 (Feb 04 2009 11:47:31)
34. test-daily_20090205080800 (Feb 04 2009 11:23:43)
35. test-daily_20090204141322 (Feb 03 2009 17:19:11)
36. test-daily_20090204140457 (Feb 03 2009 17:10:41)
37. test-daily_20090204140315 (Feb 03 2009 17:09:00)
38. test-daily_20090204135823 (Feb 03 2009 17:04:13)
39. test-daily_20090204135509 (Feb 03 2009 17:01:46)
40. test-daily_20090204135152 (Feb 03 2009 16:58:31)
41. test-daily_20090204105550 (Feb 03 2009 14:02:42)
Select a snapshot for restore (enter a number or "q" to quit): 01
### Restore Menu for bck-filer2:mig_test snapshot test-SV_daily_20100330181221 ###
01. Qtree Restore
Select a restore type (enter a number, "n" for next filer, or "q" to quit): 01
### Qtree Menu for bck-filer2 ###
01. /vol/mig_test/keith1
Select a qtree for bck-filer2:mig_test to restore (enter a number, or "q" to quit): 01
Do you want to restore to original location for /vol/mig_test/keith1 [keith3:/vol/mig_test3/qtree1] (y|n): y
WARN: You are about to restore bck-filer2:/vol/mig_test/keith1 to keith3:/vol/mig_test3/qtree1 using snapshot test-SV_daily_20100330181221
Are you sure you want to continue (y|n)?y
INFO: NetApp Snapvault Restore for bck-filer2:/vol/mig_test/keith1 Started Successfully
Storage Admin: SnapCreator and NetApp Rock!!!

ha-filer1*> ls /vol/mig_test3/qtree1

End User: Yeah I love my storage admin


Sweet man, you nailed it!  This is exactly what I was thinking.

The tool just walks you through each step and tells you exactly what's happening.  The only thing I might add (and this is minor) is a warning before the qtree restore begins that suggests unmounting the qtree if it is itself an NFS or CIFS share, lest we encounter a stale file handle.  Beyond that, this would make me very happy!



You got it, Dave -- added this to the restore when you are restoring to the original primary path:

WARN: Before restoring Qtree ha-filer1:/vol/pri_keith/qtree3 you should unmount or unmap the file system

Anything else?