I have a volume with several qtrees, each used as an individual NFS mount point by a single machine. When setting up the SnapVault relationship, the NetApp docs tell me that using the volume as the SnapVault source (/vol/somevolume/-) copies only non-qtree data to the secondary. Reality, however, seems to be different: there is no non-qtree data on the primary, so the vault should be empty. It is not. The systems (8.1.3 7-Mode) behave as if all qtrees were automatically part of the SnapVault relationship, just like normal subdirectories. Which is correct?
The docs seem to suggest individual relationships for each qtree on the source. Besides the point described above, an advantage would be the ability to use SnapRestore on the qtrees instead of having to copy data manually when needed. Are there other implications? Having plenty of individual SnapVault relationships for essentially the same service just seems to complicate things, and I need the qtrees for tree quotas.
The documentation is correct: /- will snapvault only non-qtree data. I assume you are using 7-Mode, because I can only speak for 7-Mode.
Also, if you don't plan on having your vaults managed by OnCommand Protection Manager, then you can do a volume-to-single-destination-qtree relationship. I am not sure why /- isn't working for you. I manage a very large SnapVault environment where we do qtree-based as well as /- relationships, because it's all managed from OCUM.
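For reference, a minimal sketch of the two relationship styles in 7-Mode, run on the secondary (hostnames, volume names, and qtree names here are placeholders, not from the original posts):

```
# One relationship per source qtree (allows per-qtree restore granularity)
secondary> snapvault start -S primary:/vol/srcvol/qtree1 /vol/dstvol/qtree1

# The /- path, which per the docs should vault only non-qtree data
secondary> snapvault start -S primary:/vol/srcvol/- /vol/dstvol/non_qtree
```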
There are quite a lot of qtrees which are snapvaulted with a volume-to-qtree relationship. Hundreds of them: on a different volume each user has their own qtree quota, but the SnapVault relationship is volume-to-qtree. And that's the reason I've been glad that all this qtree data actually *is* snapvaulted; it's a lot simpler this way. There is almost no non-qtree data on the source volume and there are TBs of data on the SnapVault destination, so I'm pretty sure the SnapVault relationship copies qtree data as well.
Hi JGPSHNTAP, Re your comment "...because it's all managed from OCUM" -- why do you think it is the exception rather than the rule (based on my experience, anyway) that OCUM is used to manage snapmirror/snapvault? It seems miles better to me to have all this traffic managed centrally. Thanks! Richard.
I couldn't disagree with you more on this. I've managed environments of 200+ filers, and we manage via the snapmirror.conf file and local snapvault files. I've only recently managed SnapVault through OCUM (only the vault portion); the mirror is still managed via .conf files. I've also written custom front ends to handle this fan-out management via the .conf file, with extensive PowerShell scripting against the APIs.
Here's a case in point: if you lose your OCUM central management server to a patch and you aren't prepared for a failover, none of your managed jobs run. The only thing we currently use OCUM for, from a mirror perspective, is alerting on the datasets. I will take fan-out, conf-file management any day. But you need to have the proper controls in place.
If you are talking about a mom and pop shop with a small number of filers, go for it.
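For anyone unfamiliar with the .conf-file approach, a minimal /etc/snapmirror.conf entry on the destination filer looks something like this (hostnames and volume names are placeholders):

```
# source:path   destination:path   arguments   schedule (minute hour day-of-month day-of-week)
primary:srcvol  secondary:dstvol   -           0 1 * *
```

This example would update the mirror of srcvol to dstvol at 01:00 every day; since the schedule lives on each destination filer, the jobs keep running even if a central management server is down.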