ONTAP Discussions

Move volume of DR vfiler to another aggregate?

dietmareberth

Hello,

Is it possible to move the destination volume of a DR vfiler to another aggregate?

I don't want to recreate the DR vfiler with its data volumes, because they are very big.

I quiesced and broke the SnapMirror of the DR vfiler and tried to rename the destination volume, but I get the error "... is a resource of DR backup vfiler". After that I wanted to use ndmpcopy to create a new volume with the old volume name on the other aggregate, but that is not possible either.

Can I delete the SnapMirror of the DR vfiler and create a new SnapMirror to the new volume on the other aggregate? Or will the DR vfiler be damaged by this while the SnapMirror relationships of the other data volumes are still active?

Thx

1 ACCEPTED SOLUTION

scottgelb

Sure. With a few manual steps you can do this, but the volume name must match the source. Here is a basic outline, assuming vol3 is the volume being moved; it is a bit of a shell game. You only change the location of the one volume, but you need to destroy the vfiler first and resync all volumes afterwards with vfiler dr resync. You could also activate the DR vfiler and make the changes there, but I would destroy it and start over. Review the process, use it at your own risk, and test it on a test vfiler or a simulator first. The key point is to cascade the mirror locally on the same controller, so you avoid re-running SnapMirror over the WAN; you get the same result by letting vfiler dr resync update the mirror to the new location on the new aggregate while keeping the volume name the same.

Break all mirrors and destroy the vfiler

     snapmirror break vol3                              # run this for each mirrored volume owned by the vfiler

     vfiler destroy vfilername

Cascade the mirror locally on the same target controller

snapmirror initialize -S vol3 vol3_new   # vol3_new is on the other aggregate; this assumes you have already created it (see the sketch below)
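For reference, a minimal sketch of preparing that new volume first; the aggregate name and size here are placeholders, and the destination must be restricted before the initialize (the same pattern appears later in this thread):

vol create vol3_new -s none aggr_new 100g     # aggr_new and 100g are placeholders; size it at least as large as vol3

vol restrict vol3_new                         # a SnapMirror destination volume must be restricted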

Quiesce/break the mirror and rename the volumes. This is the shell game that works around having to re-mirror from the source. I would also quiesce the vfiler's other mirrors first, but that is not listed in detail below.

snapmirror quiesce vol3_new

snapmirror break vol3_new

vol rename vol3 vol3_old

vol rename vol3_new vol3


Take a backup of /etc/snapmirror.conf on the target. The dr resync will set every volume in the vfiler to a 0-59/3 (every three minutes) schedule, so back the file up now and restore it afterwards.

Copy /etc/snapmirror.conf (rdfile it or copy it to a backup). This file is on the root volume of vfiler0, since vfiler dr uses vfiler0 for replication.
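One simple way to capture it from the console (copying the file from an admin host over the etc$ share or an NFS mount of the root volume works just as well):

rdfile /etc/snapmirror.conf                   # display the current contents; save the output somewhere safe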


Create the vfiler manually with the -r option so you only have to specify the root volume, ifconfig the vfiler's IP and edit /etc/rc to match, run the dr resync, then fix snapmirror.conf

vfiler create vfilername -r /vol/vfiler_root -b vfilername     # -r picks up all of the vfiler's volumes for you, since the root volume holds the metadata for all of them; you only have to specify the root volume

ifconfig interface ip subnet                                   # this could also be an ifconfig alias on an existing interface; make sure vfiler0's /etc/rc is edited for this IP if needed (it is probably already there)

vfiler status -a                                               # confirm the vfiler has a configured interface (it no longer shows "unconfigured" as it did before the ifconfig)

vfiler dr resync -c secure vfilername@source                   # this resyncs all volumes from the last common snapshot, including vol3, which was moved but now has a matching source snapshot

vfiler status -a                                               # it will show the vfiler as a DR backup with the same vol3 as before, now on the new aggregate

Copy /etc/snapmirror.conf back to restore the prior schedules in place of the 0-59/3 entries that dr resync wrote. It is a pet peeve of mine that existing schedules get edited, but it is not a big deal and backing up and restoring the file is an easy workaround; you just have to remember to do it, or the mirrors will run all the time.
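A minimal way to put it back from the console, assuming you saved the rdfile output above; wrfile overwrites the file with whatever you paste in (editing the file from an admin host is an alternative):

wrfile /etc/snapmirror.conf                   # paste the saved contents back, then end the input with Ctrl-C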

Destroy vol3_old once you know everything is OK. The key thing throughout is not to destroy any volumes: destroying the vfiler just puts all of its volumes back into vfiler0, then we manually recreate the vfiler with those volumes and resync it.
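One way to confirm everything is OK before destroying the old copy, using commands already shown in this thread:

vfiler status -a                              # the vfiler should be listed as a DR backup with all of its volumes

snapmirror status                             # the vfiler's mirrors should show a snapmirrored, idle state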

vol offline vol3_old

vol destroy vol3_old


11 REPLIES


dietmareberth

Thanks a lot!

All went well.

scottgelb

Very good

Typos Sent on Blackberry Wireless

BERRYS1965

Hi Scott
I know this is a fairly old post, but I am very keen to move several large DR-vfiler-attached volumes to a different aggregate. I have tried to follow your advice above, but I am having trouble understanding whether you are advising to delete the source vfiler or the destination vfiler.

If it is the source vfiler, how do you re-establish the DR relationship without causing a SnapMirror initialise from scratch on all the destination volumes? That is also not an option for me, as I cannot have a share outage on the source vfiler.

If it is the destination vfiler that needs destroying, the only way I can find to do that is with a "vfiler dr delete" command. This then leads me back to the "How can I re-establish the DR relationship without a full initialise of the snapmirrors?" question.

Any information would be greatly appreciated as I have approximately 12 TB of destination volumes I need to move to a different aggregate. I am running NetApp Release 8.1.2 7-Mode.

Kind Regards

Scott

scottgelb

It's been a while; I work with cDOT most of the time now. I got to work on a customer vFiler migration last night, so revisiting vfilers is all good.

You can recreate the target vFiler with a new dr configure and use the -s flag to reference an existing mirror relationship. Or it may be easier to dr activate it, if there are no duplicate IPs or you have a bubble network. Then recreate the vFiler, or modify the volumes so the names are the same as the source after moving them to the new aggregate, and run vfiler dr resync afterwards to put it back into DR mode. That is likely the easiest method.
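For concreteness, a rough sketch of that activate route; vfilername and source are placeholders, and as noted later in this thread the activate step only works if the DR copy's IPs will not conflict with the source:

vfiler dr activate vfilername@source          # bring the DR copy online read-write

# ... move/rename the volumes on the new aggregate so the names match the source ...

vfiler dr resync -c secure vfilername@source  # put it back into DR mode from the last common snapshot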

Sent from my iPhone 5

scottgelb

I just reread the original post. The method I gave destroys the vFiler at the target. All commands are run on the target; leave the source vFiler running.

BERRYS1965

Hi Scott

Thanks heaps for your reply, much appreciated.

I tried to follow your suggestions but ran across several issues:

vfiler destroy <vfilername>, when run on a DR vfiler returns the error:

Vfiler test-vfiler is part of the DR configuration for a remote vfiler. Use "vfiler dr delete test-vfiler@nakg01-02" to destroy this vfiler.

I tried to skip the above step, but when I got to renaming the target volume from "test_vfiler_01" to "test_vfiler_01_old" I got the error:

vol rename: Volume: 'test_vfiler_01_old' is a resource of DR backup vfiler.

So at this stage I used “vfiler dr delete”.

When I did the "vfiler dr configure", the snapmirrors did a complete initialize and started from scratch. You suggested using the "-s" option for "vfiler dr configure", but I'm not sure it does what you think it does:

man vfiler: vfiler dr configure...Synchronous Snap-mirror can be used for data transfer by specifying the -s option.

I did, however, find the "-u" option, which prevents a SnapMirror initialise when doing a "vfiler dr configure".

So after a little perseverance (which is when I came across the "-u" option in vfiler dr configure), I successfully achieved a DR volume migration using the following process:

# Check size of volume to be migrated, create volume in the new aggregate and restrict

dst_filer> vol size test_vfiler_vol01

dst_filer> vol create test_vfiler_vol01_new -s none dst_filer_agg01 100g

dst_filer> vol restrict test_vfiler_vol01_new

# Quiesce and Break snapmirrors to the volume to be migrated

dst_filer> snapmirror quiesce test_vfiler_vol01

dst_filer> snapmirror break test_vfiler_vol01

# Snapmirror initialize volume to be migrated to the new volume and monitor progress

dst_filer> snapmirror initialize -S dst_filer:test_vfiler_vol01 dst_filer:test_vfiler_vol01_new

dst_filer> snapmirror status

# Once snapmirrored, quiesce and break

dst_filer> snapmirror quiesce test_vfiler_vol01_new

dst_filer> snapmirror break test_vfiler_vol01_new

# Quiesce and break all remaining snapmirrors from SRC vfiler to DST vfiler

dst_filer> snapmirror quiesce test_vfiler_root

dst_filer> snapmirror break test_vfiler_root

# Delete the DR vfiler and rename the migration volumes

dst_filer> vfiler dr delete test-vfiler@src_filer

dst_filer> vol rename test_vfiler_vol01 test_vfiler_vol01_old

dst_filer> vol rename test_vfiler_vol01_new test_vfiler_vol01

# Recreate DR vfiler, MAKING SURE to use the "-u" option (this prevents volume snapmirrors from initialising from scratch).

dst_filer> vfiler dr configure -u test-vfiler@src_filer

# Resync all volumes from the SRC vfiler to the DST vfiler

dst_filer> snapmirror resync -S src_filer:test_vfiler_vol01 dst_filer:test_vfiler_vol01

dst_filer> snapmirror resync -S src_filer:test_vfiler_root dst_filer:test_vfiler_root
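Once everything has resynced and looks healthy, the old copy can be cleaned up along the lines of the accepted solution above (volume name taken from this post):

dst_filer> vol offline test_vfiler_vol01_old

dst_filer> vol destroy test_vfiler_vol01_old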

PS: I couldn't use the "bring the DR vfiler online" suggestion because it has the same IP as the source vfiler.

Thanks again for your help, it got me on the right track.

Kind Regards

Scott

scottgelb

My apologies; it's not intuitive, and I haven't used it in a while with all the cDOT work over the last year. To correct myself: the flag that tells vfiler dr to use an existing SnapMirror and not reinitialize is "-u". I have it documented in the labs we gave at the NetApp conferences for several years, with all the steps including the workaround used prior to ONTAP 7.3.5, when we didn't have a -u option. That workaround was to manually create the vfiler on the target with "vfiler create -r" to match the source, then stop the vfiler and run vfiler dr resync, which had the same net result of not initializing the mirrors, but with a lot more commands and complexity. Unfortunately, for vfiler migrate there is no -u option nor a way to resync the migrate, so for migrations it is always a full initialize; in those cases we often used vfiler dr plus activate instead of migrate when a short outage was possible, for this reason.
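A rough sketch of that pre-7.3.5 workaround, using the same placeholder names as the steps earlier in this thread:

vfiler create vfilername -r /vol/vfiler_root   # recreate the vfiler on the target from its existing root volume

vfiler stop vfilername                         # stop it before putting it back into DR mode

vfiler dr resync -c secure vfilername@source   # resync all volumes from the last common snapshot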

In the other scenario they activated the DR vfiler and then destroyed it, but vfiler dr delete will also destroy it without activating it first.
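In other words, either of these removes the DR copy of the vfiler (placeholder names again; neither path destroys the underlying volumes, which drop back to vfiler0 as described above):

vfiler dr activate vfilername@source           # path 1: activate the DR vfiler first...

vfiler destroy vfilername                      # ...then destroy it like a normal vfiler

vfiler dr delete vfilername@source             # path 2: delete the DR vfiler directly, without activating it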

BERRYS1965

Hi Scott
All good, no need for apologies, I am very grateful to anyone kind enough to offer advice or suggestions.
It all makes sense now after realising that the commands involved bringing the target online. You might be surprised how often my inability to bring DR vfilers online, due to IP conflicts, causes me angst.

Have a great day.

Scott

scottgelb

Private message me your email address and I will send you the labs we wrote several years ago for the Insight conferences: several features, best practices, workarounds, and other cool stuff with vfilers, not all of it documented anywhere else.

Sent from my iPhone 5

danmoorecows

I am in this same process right now. I have 5 shelves in one aggregate being replaced in the morning, and of course all the root volumes are on this aggregate. We run 8.1.3; is the process the same?

This post definitely puts thoughts in your head. Thanks for the write-up. I would also be interested in the documentation you mentioned above.

Thanks.
