vol move with a snapvault destination


I've moved a source volume to a new aggregate. Now I'd like to move the snapvault destination volume too.

destination> vol move start VUMEF006_svd_VUMEF004_nas_vol003 aggr_2000_02

vol move: Specified source volume has a qtree snapmirror destination.

snapvault status:

VUMEF004:/vol/VUMEF004_nas_vol003/RVC_01           VUMEF006:/vol/VUMEF006_svd_VUMEF004_nas_vol003/RVC_01           Snapvaulted    19:46:10   Idle

How would I do this best?

vol move with a snapvault destination

Ok, I'm not sure if this was the best solution, but I did a snapvault stop on the secondary, moved the volume to a new aggregate and then issued a snapvault start. This started a new baseline snapshot. The volume was a small one so the baseline snapshot didn't hurt much. The old snapshots are still there.

But I'd like to know how I could move a snapvault destination volume/qtree without first stopping the snapvault relationship and without having to start a new baseline transfer.

Re: vol move with a snapvault destination

You could just do “snapmirror resync”. It would pick up the latest common snapshot (basically, at the point where you stopped snapmirror) and continue from there.
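For reference, the resync suggested above would look something like this on the secondary (the filer and volume names here are placeholders, not from the thread):

```
secondary> snapmirror resync -S primary:src_vol dest_vol
```

The command prompts for confirmation before reverting the destination to the newest common snapshot and resuming from there.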

Re: vol move with a snapvault destination

I'm not sure that we have a snapmirror license; at least we don't use it yet. Is there no way to get the same result with snapvault commands? We have much larger volumes/qtrees of 8+ TB. If I have to move one of those volumes to a different aggregate on the source side, I'll also have to move it on the destination side, but I don't see an easy way to do this yet.

vol move with a snapvault destination

Did you find any good answer to this? I'm in the same boat. If not then I will try to engage our SE.



vol move with a snapvault destination

Well, I think I can answer my own question:

1. create a new volume in the other aggregate at least as big as the old volume (vol create new_vol new_aggr <size>)

2. restrict the new volume (vol restrict new_vol)

3. make a copy of the old volume to the new (vol copy start -S old_vol new_vol) (takes a while, the -S switch specifies that snapshots are to be copied also)

4. vol online new_vol

5. vol offline old_vol

6. snapvault start -S <existing_pri> <new_sec>  (does NOT re-do a baseline)

7. snapvault update <new_sec> (may need to run this twice)

8. on the primary: snapvault release <pri> <old_sec>

9. when everything is confirmed as working, vol destroy old_vol
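Put together, the sequence above looks roughly like this on the secondary (the filer names, volume names, qtree name, and size are placeholders):

```
secondary> vol create new_vol new_aggr 100g        # at least as big as old_vol
secondary> vol restrict new_vol                    # vol copy needs a restricted target
secondary> vol copy start -S old_vol new_vol       # -S copies the snapshots too
secondary> vol online new_vol
secondary> vol offline old_vol
secondary> snapvault start -S primary:/vol/pri_vol/qtree /vol/new_vol/qtree   # no new baseline
secondary> snapvault update /vol/new_vol/qtree     # may need to run this twice
primary>   snapvault release /vol/pri_vol/qtree secondary:/vol/old_vol/qtree
secondary> vol destroy old_vol                     # once everything is confirmed working
```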

I tried this with a small volume (~ 1.5GB) and now I'm doing the same with a 350GB volume.

Let me know if you have questions!


vol move with a snapvault destination

"snapvault start -S -r" will reestablish the vault without rebaselining...the -r to restart updates... the vol move method should work fine but copying the volume works too since vol move is doing the same thing in the background but less commands for vol move and automates a lot of the steps.

Re: vol move with a snapvault destination

I tried a few methods, but what seems to work the best is the following:

  1. Break the snapmirror
  2. Rename dest vol (vol rename dest_vol dest_vol_old)
  3. Create a new volume in the desired aggregate with the original name (dest_vol)
  4. Do a volume copy from the old to the new volume.  Make sure to use the -S flag to bring the snapshots.

          vol copy start -S dest_vol_old dest_vol

    5.  Resync the snapmirror

          snapmirror resync -f dest_vol

    6.  Destroy dest_vol_old whenever you're ready
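Put together on the destination filer, the steps above look roughly like this (the names, aggregate, and size are placeholders; note that vol copy needs the target volume restricted first):

```
dst> snapmirror break dest_vol
dst> vol rename dest_vol dest_vol_old
dst> vol create dest_vol new_aggr 100g          # same name as before, in the new aggregate
dst> vol restrict dest_vol                      # vol copy needs a restricted target
dst> vol copy start -S dest_vol_old dest_vol    # -S brings the snapshots along
dst> vol online dest_vol
dst> snapmirror resync -f dest_vol
dst> vol destroy dest_vol_old                   # whenever you're ready
```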

This method saves having to do anything with snapvault.  NDMPCopy doesn't bring the snapshots over so you'd have to reinitialize the snapmirror after the copy. 

I hope this helps.

Re: vol move with a snapvault destination

I have found the same problem when trying to move snapvaulted volumes. "vol move" doesn't seem to work with snapmirror (or snapvault) destination volumes, so you have to do the process manually. I am not a NetApp engineer, so this may not be 100% correct, but it has worked for me on a number of volumes with Data ONTAP 8.1.

1. First thing is to disable any external process that might try to update the volume during the move. (i.e. Protection Manager or custom backup scripts)

2. Create the new volume (I also assign volume options, your options may be different)

vol create volname_new aggr size

vol options volname_new guarantee none

vol options volname_new nosnap on

vol options volname_new nosnapdir on

vol options volname_new fractional_reserve 0

vol options volname_new try_first volume_grow

3. Before I do anything I enable Dedup and Compression.

sis on /vol/volname_new

sis config -C true -I true /vol/volname_new

sis start -s -d -f /vol/volname_new

sis status /vol/volname_new

4. Copy the entire contents of the volume including ALL snapshots (because having a snapvaulted volume would be useless without the snapshots)

vol restrict volname_new

vol copy start -S volname volname_new

5. Swap the names of the old and new volumes

vol online volname_new

vol rename volname volname_old

vol rename volname_new volname

vol offline volname_old

So at this point you have a new volume with exactly the same contents as the old volume (which is now offline).

Because the new volume has the old name and all of the snapshots have the same names, Protection Manager will still be able to restore any information.

Also, the snapshots which were "snapvaulted snapshots" are still present, so any snapvaulting should proceed flawlessly. (It is my understanding that the special "snapvault" and "snapmirror" snapshots contain extra metadata which points them back to their partner.)

Just to be sure, I normally initiate a manual snapvault update:

snapvault update /vol/volname/qtree

And then enable and run a manual Protection Manager Job or run whatever custom script I use for backups.
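After the rename, a sanity check along these lines (placeholder names again) should confirm the relationship resumed without a rebaseline:

```
sec> snapvault status /vol/volname/qtree   # should show "Snapvaulted" and return to Idle
sec> snap list volname                     # the snapvault base snapshots should still be listed
sec> snapvault update /vol/volname/qtree   # should run as an incremental, not a new baseline
```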

Re: vol move with a snapvault destination

Same here - after a 'vol move' from one aggregate to the other, any attempt to start snapvault with 'snapvault start' results in the error:

Transfer aborted: the qtree is not the source for the replication destination.

The KB article that support quoted to you was not on the subject: it covers conversion from a traditional to a flexible volume. No wonder they were eager to close the case :-)

My workaround was to create a new volume on the new aggregate and run 'snapvault start -S' to do a new baseline transfer. It looks like the presence of the old SnapVault snapshots in the moved volume had something to do with it.