ONTAP Discussions
We are replicating Windows 2003 data to a filer with OSSV.
Now we want to stop OSSV; however, the "snapvault stop" command will delete the qtree.
We don't want this, because the purpose of the replication is data migration into the filer.
How can I stop or release OSSV without deleting the secondary qtree?
I know the snapshots will not be deleted.
Do I have to make use of the snapshots?
Ideally, we would like a way that does not delete the qtree at all.
Please let me know.
Best regards.
stop [ -f ] secondary_qtree
Available on the secondary only. Unconfigures the qtree so there will be no more updates of the qtree and then deletes the qtree from the active file system. The deletion of the qtree can take a long time for large qtrees, and the command blocks until the deletion is complete. The qtree is not deleted from snapshots that already exist on the secondary. However, after the deletion, the qtree will not appear in any future snapshots. To keep the qtree indefinitely, but stop updates to the qtree, use the snapvault modify -t 0 command to set the tries for the qtree to 0.
Either that or you could create a new qtree in the secondary volume and copy the data into it either via NFS/CIFS or with ndmpcopy. That way the original qtree will exist in snapshots and the new qtree containing the latest copy of the data will exist in the active filesystem.
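For example, assuming a hypothetical destination qtree path /vol/vol1/my_ossv_qtree (substitute your own volume and qtree names), the two options would look roughly like this on the secondary:

Option 1 - keep the qtree in the active file system but stop further updates:
secondary> snapvault modify -t 0 /vol/vol1/my_ossv_qtree

Option 2 - copy the data to a new qtree first, then remove the relationship (ndmpcopy needs the NDMP service enabled; the original qtree remains in existing snapshots after the stop):
secondary> ndmpcopy /vol/vol1/my_ossv_qtree /vol/vol1/my_ossv_copy
secondary> snapvault stop /vol/vol1/my_ossv_qtree

This is only a sketch; check the exact syntax against your Data ONTAP version.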
Hello Andrec.
Thank you.
However, I really can't believe that there is no other way to keep the data under the qtree on the secondary storage.
I would like to make the qtree read/write straight away.
Do we always have to have enough volume space to copy the original secondary qtree in order to make it read/write?
What it comes down to is that SnapVault is an archiving solution: it is designed so that if you need to access archived data, you restore it to the primary or to another location.
If you want to replicate data to a secondary location and access it read/write in that secondary location then SnapMirror would be the correct product.
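For illustration, a qtree SnapMirror destination becomes writable once the relationship is quiesced and broken; assuming a hypothetical destination path /vol/vol1/my_qtree, that looks roughly like this on the destination filer:

destination> snapmirror quiesce /vol/vol1/my_qtree
destination> snapmirror break /vol/vol1/my_qtree

After the break, the qtree is read/write in the active file system.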
Hi,
Another hackish way of doing this is just to "move" the volume. The "tries = 0" stuff has not always worked. My hack works like this:
If your OSSV destination is, for example, /vol/vol1/my_windoze_server_c_drive, then...
1) do 'vol rename vol1 vol1_temp'
2) then 'vol create vol1 aggrN 2g'
3) then 'snapvault stop /vol/vol1/my_windoze_server_c_drive' , you will get an error message about the qtree already being gone, but that doesn't matter. The configuration is effectively erased.
4) then we get rid of our dummy volume 'vol offline vol1' and 'vol destroy vol1'
5) then 'vol rename vol1_temp vol1'
6) then 'snapvault snap unsched vol1' and confirm.
Now you have a qtree that snapvault no longer cares anything about. Whatever CIFS shares you had will have followed the volume name changes, so no worries there.
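Putting the steps above together, with the same example names (vol1, my_windoze_server_c_drive, and aggrN standing in for whichever aggregate you use), the whole sequence would look roughly like this:

secondary> vol rename vol1 vol1_temp
secondary> vol create vol1 aggrN 2g
secondary> snapvault stop /vol/vol1/my_windoze_server_c_drive    (the error about the missing qtree is expected; the configuration is removed anyway)
secondary> vol offline vol1
secondary> vol destroy vol1
secondary> vol rename vol1_temp vol1
secondary> snapvault snap unsched vol1    (confirm when prompted)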
Hi Andrec,
Thank you. I now understand the concepts behind OSSV (SnapVault).
Hi Shaunjurr,
I really appreciate your solution.
It is excellent. Actually, I do not have the disk space to copy the target qtree.
Now I can solve it.
Thank you very much.