aggregate / vfiler migration

rajesh

Hi All,

I'm trying to put together a plan for an aggr/vfiler move to a new filer.

Scenario:

I have a 3070 filer with 1 vfiler and 2 aggregates on FC disks.

The plan is to move to a 3040 cluster filer with 2 aggregates spanning 85 disks. With the 3040 head, how can I make sure the vfiler is migrated with its quotas and CIFS shares intact? Any pointers to such a process already done would be a great help.

I have read about vfiler migration, but that does not cover this context...

regards

rajesh

6 REPLIES

scottgelb

vfiler migrate will take care of cifs shares and quotas. Since the vfiler has its own root volume, all of those settings migrate with it. There are key things you must do on the destination system prior to the migrate, assuming you are going to use snapmirror (vfiler migrate uses snapmirror unless you specify "-m nocopy", which uses the SnapMover feature... in that case all disks must be visible to both controllers, since a disk reassign is used instead of snapmirror... this won't be the case from a 3070 to a 3040). You will not be migrating aggregates... migrate with snapmirror works at the volume (flexvol or qtree) level by migrating all volumes and the vfiler itself... you can have any mix of aggregate counts on the source and destination as long as the volumes match up with the same name and size for snapmirror... you can even use different disk types (FC to SATA or vice versa)...

1) create all volumes of the same name and same size (or larger) on target

2) ensure all licenses are the same on the target (for example, if cifs is licensed on the source, it must be licensed on the target)...this is from the vfiler0 context since the vfiler itself runs from licenses in vfiler0. Make sure you have a snapmirror license too..

3) make sure the network can handle the vfiler move (vlans, routing, etc)... a rough sketch of the prep commands follows.
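
As a rough sketch of that prep, with placeholder volume names, sizes, and license codes (these are illustrations, not values from the actual setup):

    3040> vol create projvol aggr0 500g       # same name, same or larger size as the source volume... repeat per volume
    3040> license add XXXXXXX                 # cifs code, matching whatever the source has licensed
    3040> license add XXXXXXX                 # snapmirror code for the migrate itself
    3040> license                             # verify the list against the source's licenses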

Do a "vfiler migrate start" on the target 3040... then monitor status with "snapmirror status" and "vfiler migrate status"... for cutover, use "vfiler migrate complete", which will stop the source vfiler, do final mirror updates, then activate the vfiler on the target... make sure lag times are low on the mirrors so the cutover completes before any timeouts... cifs clients will have to reconnect (very similar to a cluster failover). Also, the vol option fs_size_fixed stays on, so you should turn that off on the target for each volume that was in the vfiler. The vfiler migrate commands require rsh enabled on the source and destination, and you specify vfilername@sourceIP from the target system, where sourceIP is the vfiler0 address of the source...
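
Putting that flow together from the 3040, with a placeholder vfiler name and vfiler0 IP (check the exact syntax in the vfiler man page for your ONTAP release):

    3040> options rsh.enable on                  # required on the source side too
    3040> vfiler migrate start vfilername@10.10.10.10
    3040> vfiler migrate status vfilername@10.10.10.10
    3040> snapmirror status                      # keep lag times low before cutover
    3040> vfiler migrate complete vfilername@10.10.10.10
    3040> vol options projvol fs_size_fixed off  # repeat for each migrated volume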

I would test this in the simulator or on a test vfiler first to get comfortable with it and prove the concept... it works great, but you have to be careful to make sure all your infrastructure can handle the vfiler move...

rajesh

Thanks, Scott, for coming back; there is a slight change here.

What we are doing here is not a migrate but a move (sorry, I should have clarified): we have two aggregates holding x number of volumes, and a vfiler owns all of this.

For certain load-related reasons we are now moving to a dedicated 3040 cluster, so with this in the background, the plan in brief is to:

- make sure the disks are marked and ownership is transferred to the new filer, bring the aggregates online, and then move the vfiler root folder and recreate the vfiler (unless there is a way to move even the root vol of the vfiler into the same aggregates and then just move those two aggregates).

In this scenario, I'm just worried about the vfiler and related stuff like CIFS shares, quotas, etc... it seems endless. I need some pointers on those.

Waiting for your reply.

Regards

Rajesh

scottgelb

Got it... to confirm, the FAS3070 will keep running and you are going to remove the individual disks that make up complete aggregates and move them to the FAS3040. In that case, it will be some manual work... here's what I'd look at (and test in the sim or on a test aggregate first); rough command sketches follow each list below. This is moving only the aggregate(s) by taking the disk drives between systems, assuming the entire vfiler's resources are on the disks being removed. This isn't a guaranteed or tested plan, but one that will work with some tweaking... definitely test it on non-live data first and open a support case with the GSC at 888-4-netapp too. Also, if you can spare the disk, I think a vfiler migrate is a better, no-downtime way to move the vfiler from the 3070 to the 3040 using snapmirror... if you have the spares to refill the shelves on the 3070 anyway, I'd definitely do a simple migrate instead of a partial disk swap. A full head swap would be easier than what you are doing, too. Also, there is no easy way to move a vfiler root volume... there is no "vol options volname root" for vfilers... the only way to move the rootvol is to copy the contents of /etc and make sure the new volume you are using ends up with the same name, which means renaming the existing volume after copying (one hedged sketch of that is below).
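
For that rootvol copy, a minimal sketch (assuming the vfiler has already been stopped or destroyed, ndmpd can be enabled, and using placeholder volume/aggregate names... verify this with support before relying on it):

    3070> vol create vf_root_new destaggr 100m        # new root-sized volume in the destination aggregate
    3070> options ndmpd.enable on                     # ndmpcopy needs ndmpd running
    3070> ndmpcopy /vol/vf_root/etc /vol/vf_root_new/etc
    3070> vol rename vf_root vf_root_old
    3070> vol rename vf_root_new vf_root              # the new volume must end up with the original rootvol name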

FAS3070

  • do a FULL backup of everything (CYA)...disk to tape, disk to disk (mirror/vault), whatever..
  • send an autosupport to let netapp know you are starting..
  • create a snapshot on all volumes migrating...
  • create an aggregate snapshot for the aggregates moving..
  • vfiler status -a # get exact setup of the vfiler
  • ipspace list # show ipspaces
  • destroy the vfiler.. don't worry, this puts all volumes back into vfiler0.... destroy it so the 3070 doesn't still think it owns it with missing disks later..
  • use aggr status -r and fcstat device_map to identify all drives in the aggregate(s).. also I'd use "priv set advanced ; led_on x.xx ; priv set" to blink the light on the drives for identification
  • offline the aggregate(s)...make sure the aggregates hold ALL volumes in the vfiler, including the vfiler rootvol... also, if ANY OTHER DATA not in the vfiler is in the aggregate (any volume not belonging to the vfiler) then you have an issue and most likely can't migrate the entire aggregate..
  • disk remove_ownership on each drive (easier to do now than a disk assign on the target)
  • Before removing the disks... make sure that you are never removing both bays 0 and 1 on a shelf (far right 2 slots) or you will stop enclosure services...only remove 1 at a time and replace anyway but never pull those 2 at the same time..
  • disk remove on each disk in the aggregate
  • remove each disk...1 at a time... replace with a spare or filler so there isn't a hole left...
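
Roughly, that 3070 list translates to commands like these (aggregate, volume, vfiler, and disk names are placeholders, and the order follows the list above... led_on and disk remove_ownership live in advanced privilege):

    3070> snap create projvol pre_move           # safety snapshot, repeat per volume
    3070> snap create -A aggr1 pre_move          # aggregate-level snapshot, repeat per moving aggregate
    3070> vfiler status -a                       # record the exact vfiler setup
    3070> ipspace list
    3070> vfiler destroy vfilername              # volumes drop back into vfiler0
    3070> aggr status -r aggr1                   # identify the member disks
    3070> aggr offline aggr1
    3070> priv set advanced
    3070*> led_on 0a.16                          # blink each candidate drive, repeat per disk
    3070*> disk remove_ownership 0a.16           # repeat for each member disk
    3070*> priv set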

FAS3040

  • insert drives 1 at a time...until all added..
  • disk assign all # assigns all unassigned disks to the 3040...
  • create the ipspace if not "default-ipspace"
  • add the interface(s) you will use into the ipspace (if not default)
  • aggr online aggrname(s) .. online the aggregates...
  • aggr status and vol status (aggr show_space to see all) and make sure all volumes are online...
  • vfiler create vfilername -r rootvolname -b oldvfilername # using the "-r" allows you to recreate the vfiler by specifying only the rootvol name... if all volumes and the rootvol are on the 3040, this will work...
  • vfiler run vfilername setup -e interface:ip:netmask # make sure to set up the IP since it won't be bound yet.. NOTE: this whacks several files including hosts, exports, resolv.conf, and options dns... you need to reset/fix those... it creates .bak files of each one ...
  • fix options dns, and search for all the .bak files to copy back...
  • vfiler run vfilername route add default x.x.x.x 1 # if needed...add a route for the vfiler...then below add the command to the rc file..
  • update the /etc/rc of the 3040 (in the vfiler0 /etc) with the "vfiler run vfilername route add default" command if you're using a different ipspace and need a route added for the vfiler..
  • CONFIRM that the vfiler is bound to IP with vfiler status -a... make sure it can ping... check shares, exports, cifs domaininfo for domain credentials, etc.. check quotas...
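
And the 3040 side, again with placeholder names (the ipspace, interface, IPs, and root volume name are assumptions for illustration... double-check flags against the vfiler man page for your release):

    3040> disk assign all                        # claim all unassigned disks
    3040> ipspace create vf_ipspace              # skip if the vfiler uses default-ipspace
    3040> ipspace assign vf_ipspace e0b
    3040> aggr online aggr1
    3040> aggr status ; vol status               # confirm every volume came online
    3040> vfiler create vfilername -r /vol/vf_root
    3040> vfiler run vfilername setup -e e0b:10.10.10.20:255.255.255.0
    3040> vfiler run vfilername route add default 10.10.10.1 1
    3040> vfiler status -a                       # confirm the IP binding, then check shares/exports/quotas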

Again... consider the simpler vfiler migrate with snapmirror rather than going through all this manual work... it is feasible and can work to swap only the drives in the aggregates for the vfiler, but see the caveats above... it's a good science project, but one where I'd even loan/rent/sell a customer disks to make the migrate simpler.. A FULL head swap would be much easier too... the hard part of this is moving only some aggregates and not missing anything... then deleting and recreating the vfiler in vfiler0 from the 3070 to the 3040...

Again... it can work, but I don't recommend this unless you have no way of getting disks to migrate to... let us know how it goes, and again... test it first on non-prod data or the simulator, and open a pre-emptive case to help with this..

rajesh

Thanks Scott, this helps. We can't use snapmirror as we don't have spare shelves for the 3040, so we have to live with that.

Quick question on disk ownership removal..

Since the current filer is a cluster and both heads have these disks in their registry (I'm assuming they do), should we not be removing ownership on both?

Just imagine for a second that we are moving an aggregate on a clustered filer.

Cheers

Rajesh

scottgelb

So you are going to move full shelves from the 3070 to the 3040? If so, remember, you can't hot remove shelves with ONTAP.. You will have to power off the 3070.

Disk ownership is ONLY to ONE node, not both... one or the other. So removing ownership removes it for that one node and places the disk in the unassigned pool. Then, on the 3040, assign the unowned disks to the correct node (quick sketch below).
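
A quick sketch of that on the 3040 (the disk name and node name are placeholders):

    3040> disk show -n                           # list currently unowned disks
    3040> disk assign all                        # claim everything unowned, or...
    3040> disk assign 0a.16 -o nodeA             # ...assign a specific disk to a specific head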

If you can borrow disk shelves and a temp snapmirror license, that would still be best 🙂

rajesh

Yes, we will be taking downtime for the current primary filer. As far as snapmirror goes, we don't have the luxury of more shelves, so it's ruled out :(

Thanks, mate, for all the knowledge sharing so far; I'll keep in touch and let you know how it goes.

Cheers

Rajesh
