ONTAP Discussions

Moving a Snapmirror destination volume to another aggregate

KDEAN1961

We are running Ontap 8.1 and need to move a Snapmirror destination volume (13TB) to a new aggregate without re-initializing.

Is the best solution to SnapMirror the destination volume to a new volume, break the SM relationship between the old and new destinations, and then resync from the source to the new destination?

I do notice that when I do this, the original SM job goes into a "pending" state because the destination is now busy SnapMirroring to the new destination.  Since the volume is 13TB, the new SM job (old destination to new destination, but in the same data center) will take about 3 days to complete, and the existing SM source -> destination relationship will stay in a pending state.

Just checking if there are any other options.

Thanks

Kathy


KDEAN1961

Also, my new destination is on a totally separate filer/aggregate from the existing destination.

aborzenkov

If you have a VSM relationship like S => D and want to move the destination to D1:

1. snapmirror initialize -S D D1

2. snapmirror resync -S S D1

Now you have a fully functional SnapMirror relationship S => D1 and can destroy the original one.
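
As a rough sketch with made-up names (source volume S on filerA, current destination D on filerB, new destination D1 on filerC; the filer names, aggregate and size are just placeholders):

filerC> vol create D1 aggr_new 13t      # new destination volume (size/aggr are examples)
filerC> vol restrict D1                 # a VSM destination must be restricted
filerC> snapmirror initialize -S filerB:D D1
filerC> snapmirror resync -S filerA:S D1

After that, update /etc/snapmirror.conf on filerC so the scheduled entry points at D1, and release/destroy the old S => D relationship once you are happy with S => D1.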

naveenkumar_e

Hi,

I think the 'pending' status appears only when you're using SnapMirror-triggered snapshots for the baseline transfer.

So it's better to create a manual snapshot (e.g. snap_base_transfer) at the source and let that replicate to the original destination.

Then create a new destination volume, restrict it, and initialize the transfer between the original destination and the new one.

With this, only the manually created snapshot has a soft lock on the original destination, and the remaining SnapMirrored snapshots will transfer normally.

But when you do this, monitor the source volume's snap reserve, as the manually created snapshot grows in size until all the data gets transferred to the new destination volume.

For this, you can increase the source volume size in advance.
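
If it helps, here is a rough sketch of that flow with made-up names (srcvol on filer1, the existing destination dstvol on filer2, the new destination newvol on filer3; all names and sizes are placeholders):

filer1> snap create srcvol snap_base_transfer       # manual snapshot on the source
filer2> snapmirror update -S filer1:srcvol dstvol   # push it to the original destination

filer3> vol create newvol some_aggr 13t             # create and restrict the new destination
filer3> vol restrict newvol
filer3> snapmirror initialize -S filer2:dstvol newvol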

Hope this helps

-naveen

bsti

Can you elaborate?  I'm not sure this works.

I create manual snapshots all the time on my original SM source, and that changes nothing. 

If you are referring to the -s option on snapmirror initialize (to specify a snapshot to transfer), then that does not work unless you are SnapMirroring to a qtree.

bsti

Hi Kathy, did you ever get this to work for you?

KDEAN1961

Yes..finally!  Your last post worked...thanks!

DARENNETAPP

Hi Kathy

Break SnapMirror

create new destination volume

use vol copy and ensure you use the -S switch to take all snapshots

Once the copy is complete, edit snapmirror.conf as above to point to the new destination. You should have all the common snapshots, and it should resync.
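
Roughly, with hypothetical names (source srcvol on srcfiler, old destination olddst and new destination newdst on destfiler; the aggregate and size are placeholders):

destfiler> snapmirror break olddst
destfiler> vol create newdst new_aggr 13t
destfiler> vol restrict newdst                 # vol copy needs a restricted destination
destfiler> vol copy start -S olddst newdst     # -S copies all snapshots too

Then edit /etc/snapmirror.conf so the entry points at newdst and run:

destfiler> snapmirror resync -S srcfiler:srcvol newdst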

clackamas

Vol copy is much slower than snapmirror.  *ALSO* it does not copy snapshots unless you use the -s flag.  If you don't move the snapshots, then you will not have a base snapshot to resync from. 

DARENNETAPP

I appreciate vol copy may be slower, but it works; this has been completed for multiple customers in similar scenarios. Note I used the uppercase -S, as this takes all snapshots, as I explained above. The lowercase -s lets you choose a specific snapshot, which isn't ideal in this case, as you need to ensure a common snapshot is taken across with the copy.

KDEAN1961

I can't use vol copy when moving to a different controller/aggregate.

DARENNETAPP

Glad to hear it is sorted.

You should be able to if you specify the source and destination filer:

Sourcefiler:volname destfiler:volname

One of the volumes needs to belong to the filer the command is being run from.
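
For example, with made-up names, run from the destination filer (this assumes remote access, e.g. rsh via /etc/hosts.equiv, is already allowed between the two filers):

destfiler> vol copy start -S sourcefiler:oldvol newvol

newvol must already exist on destfiler, be at least as large as oldvol, and be restricted before the copy starts.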

KieranMcKenna

Hi Kathy,

Your original plan would basically work, but you would see the 'pending' state on the original A>B relationship if you did not BREAK that relationship and were running baselines & updates from B > C (known as a cascading snapmirror). Because you have not BROKEN A>B, it still wants to update on a schedule but finds the destination BUSY, either with snapmirrors or any other outstanding volume operation.

I've moved SnapMirror destinations countless times and your method works fine - as I see you got it working eventually above - and it is my preferred one. At some stage, though, you must BREAK the original A>B relationship so that B becomes a writable volume in its own right. You can then do a final update from B>C, then break B>C and resync C with A. As a precaution, after breaking B>C, I normally rename B to "B_old" and even take it offline. Then I usually rename C to B (you normally want to retain the existing destination volume name; it saves editing snapmirror.conf entries) and then resync C with A.
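
As a very rough sketch of that sequence, with made-up names (A on filer1, B on filer2, C on filer3 - all placeholders for your own environment):

filer3> snapmirror initialize -S filer2:B C     # baseline C from B; A>B shows 'pending' meanwhile
filer2> snapmirror break B                      # B becomes writable
filer3> snapmirror update -S filer2:B C         # final catch-up from B to C
filer3> snapmirror break C
filer2> vol rename B B_old                      # park the old destination
filer2> vol offline B_old
filer3> vol rename C B                          # optional: keep the old destination name
filer3> snapmirror resync -S filer1:A B         # resync the new destination with the source

Whether you rename at all is up to you; just make sure there is still a common snapshot before the final resync.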

Vol copy is another method but, as mentioned, it's slower and doesn't take snapshots across by default.

If you want to move a volume between aggregates on the same controller, the "vol move" command also works well - I've been using this more recently since moving to Ontap 8.1. Simply, you issue the command - "vol move <volname> <destination aggr>" and it goes ahead and creates the internal snapmirror, runs the baseline, incremental updates and manages the cut-over.

It doesn't allow cut-over (or won't even start the process) if CIFS shares are present on the volume - but this still makes it ideal for moving a large non-production SM destination volume.

I used it last night on a 10TB volume and it worked a treat.

One thing I would do as a precaution is "offline" and "online" the destination volume to be moved just prior to running the vol move command, to make sure no pesky "snap list" or ZAPI operations are still processing; otherwise the cut-over will never complete automatically.
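
A quick sketch of that, assuming the 7-mode "vol move" syntax in 8.1 (volume and aggregate names are placeholders; check the vol move man page on your release for the exact options):

filer> vol offline sm_dest_vol      # bounce the volume to clear lingering snap list / ZAPI activity
filer> vol online sm_dest_vol
filer> vol move start sm_dest_vol dest_aggr
filer> vol move status              # watch the baseline, updates and cut-over progress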
