2012-05-31 11:10 AM
My production and disaster recovery sites both have a mix of 32-bit and 64-bit aggregates. Can we create a SnapMirror relationship from a volume in a 32-bit aggregate to a volume in a 64-bit aggregate? Also, the SnapVault source (the mirrored volume) and the SnapVault destination are on the same filer but in different aggregates.
Since there is a mix of 32-bit and 64-bit aggregates, can we establish mirroring and vaulting among these aggregates, or is there a one-to-one restriction?
2012-05-31 01:15 PM
I created a qtree and a CIFS share: a share named test_qtree on a qtree named test_qtree (volume: vol_test).
Now I apply a Mirror and Vault policy to it using Protection Manager.
prod> snapmirror status test_vol
Source Destination State Lag Status
hpNetApp1b.hpinc.com:test_vol rocNetApp1a:TEST_Datasetxforxmirroringxandxvaulting_mirror_hpNetApp1b_te Snapmirrored 06:10:17 Idle
But when I check the snapvault status, it shows something like this:
DR> snapvault status
rocNetApp1a:/vol/TEST_Datasetxforxmirroringxandxvaulting_mirror_hpNetApp1b_te/- rocNetApp1a:/vol/TEST_Datasetxforxmirroringxandxvaulting_backup_hpNetApp1b_te/TEST_Datasetxforxmirroringxandxvaulting_hpNetApp1b_test_vol Snapvaulted 05:40:43 Idle
rocNetApp1a:/vol/TEST_Datasetxforxmirroringxandxvaulting_mirror_hpNetApp1b_te/test_qtree rocNetApp1a:/vol/TEST_Datasetxforxmirroringxandxvaulting_backup_hpNetApp1b_te/test_qtree Snapvaulted 05:40:43 Idle
Why are there two instances??
2012-05-31 01:19 PM
The /- on the source specifies any data NOT in a qtree: it takes all non-qtree data in the source volume and puts it in a qtree on the target. The second instance is your test_qtree. The cool part is that Protection Manager is making sure all data in the volume is protected, not just the qtree you created, but any possible data outside of that qtree as well. SnapVault targets are always qtrees, but the source can be the non-qtree data in the source volume, specified by /-.
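For reference, a manually created pair of relationships of this shape would look something like the following on the secondary (the filer and volume names here are placeholders, not the dataset-generated names from the thread):

```
# Vault everything NOT in a qtree on the source volume.
# The trailing /- means "all non-qtree data in src_vol".
secondary> snapvault start -S primary:/vol/src_vol/- /vol/dst_vol/src_vol_nonqtree

# A normal qtree-to-qtree relationship, for comparison:
secondary> snapvault start -S primary:/vol/src_vol/test_qtree /vol/dst_vol/test_qtree
```

Protection Manager builds exactly this pair for you when it protects a whole volume, which is why two relationships show up in `snapvault status`.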
2012-05-31 01:26 PM
Protection Manager is just adding everything it sees in the volume (the non-qtree data and the qtree), so in this case the first relationship isn't replicating anything. You can try deleting that relationship since it isn't needed.
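If you do decide to remove the non-qtree relationship, the command on the secondary would look something like this (path abbreviated; the real dataset-generated qtree names are much longer):

```
# Stop the relationship and remove the destination qtree on the secondary
secondary> snapvault stop /vol/dst_vol/src_vol_nonqtree
```

One caveat: Protection Manager's conformance checks may recreate any relationship it still believes belongs to the dataset, so it is usually cleaner to adjust the dataset definition in Protection Manager itself rather than remove the relationship directly on the filer.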
2012-05-31 02:23 PM
Scott, I have one more thing to ask you. I have also posted it separately.
Now, this is the policy that I am applying on test_vol. I have set the local backup schedule to "none". So this should imply that there should be no snapshots on the test_vol volume, right?
But when I run snap list test_vol, it does show snapshots. I have set the default reserve of 20%.
Q1. Why are there snapshots when I have set the primary data local backup schedule to "none"?
Q2. What if the snap reserve is set to 0%? Will there be any snapshots?
Here is the snap list output (20% reserved).
2012-05-31 02:31 PM
I replied via email but don't see it here; it might show up delayed. With SnapVault there are baseline snapshots: they increment and are used to maintain the relationship to the target, so matching snapshots increment on both source and target. The target can have a different snapshot retention, but at least one common snapshot is still needed on both source and target. Setting a 0% snap reserve will just change reporting: snapshots on the source will show that they consume active file system space, which is an accounting thing. As long as there is enough room for the data and the base SnapVault snapshot, it is fine, but I'd leave a little snap reserve to hold the snapshots in my setup.
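The reserve behavior described above can be checked and changed with these 7-mode commands (volume name taken from the thread; output omitted since it varies by system):

```
# Show the current snapshot reserve for the volume
prod> snap reserve test_vol

# Set the reserve to 0% -- snapshots still exist, they just
# count against the active file system space instead
prod> snap reserve test_vol 0

# Confirm the SnapVault/SnapMirror base snapshots are still there
prod> snap list test_vol
```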
2012-05-31 03:01 PM
Yup Scott, got it. But here is what I really wanted to ask; maybe the image I inserted did not show up before, so I have inserted it again here.
I have set a policy using Protection Manager on this particular vol_test. Hope you can see the image below. I have set the local backup to "none". The local backup corresponds to these snapshots, right? Or am I wrong? If I am right, then on what basis are these snapshots being created, and what is the default schedule?
Hope the two images below are visible.
2012-05-31 03:03 PM
It looks like you still have a native snapshot schedule running outside of Protection Manager. You can check with "snap sched test_vol", then change the schedule to "0 0 0", or set "vol options test_vol nosnap on" to disable the scheduler in ONTAP. Then manually delete the hourly and nightly snapshots, leaving the SnapVault and SnapMirror snapshots.
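Put together, the checks and changes described above look like this on the primary:

```
# Check the native (ONTAP) snapshot schedule for the volume
prod> snap sched test_vol

# Disable scheduled snapshots: 0 weekly, 0 nightly, 0 hourly
prod> snap sched test_vol 0 0 0

# Alternatively, turn the snapshot scheduler off for this volume
prod> vol options test_vol nosnap on

# Then delete the leftover scheduled snapshots by name, e.g.:
prod> snap delete test_vol hourly.0
prod> snap delete test_vol nightly.0
```

The SnapVault and SnapMirror base snapshots have distinctive names (they typically embed the destination system name) and should be left alone, since they are required to keep the relationships incremental.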