ONTAP Discussions
Hi All,
I'm trying to put together a plan for an aggr/vfiler move to a new filer.
Scenario:
I have a 3070 filer with 1 vfiler and 2 aggregates on FC disks.
The plan is to move to a 3040 cluster filer with 2 aggregates spanning 85 disks. With the 3040 head, how can I make sure the vfiler is migrated with its quotas and CIFS shares intact? Any pointers to such a process that has already been done would be a great help.
I have read about vfiler migration, but it doesn't cover this context...
Regards,
Rajesh
vfiler migrate will take care of CIFS shares and quotas. Since the vfiler has its own root volume, all of those settings migrate with it. There are a few key things you must do on the destination system prior to the migrate, assuming you are going to use SnapMirror (vfiler migrate uses SnapMirror unless you specify "-m nocopy", which uses the SnapMover feature; in that case all disks must be visible to both controllers, since a disk reassign is used instead of SnapMirror, and that won't be the case from a 3070 to a 3040). You will not be migrating aggregates: migrate with SnapMirror works at the volume (flexvol or qtree) level by migrating all volumes and the vfiler itself. You can have any mix of aggregate layouts on the source and destination as long as the volumes match up with the same names and sizes for SnapMirror, and you can even use different disk types (FC to SATA or vice versa).
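To illustrate the two modes (the vfiler name "vf_rajesh" and hostnames "fas3070"/"fas3040" below are placeholders, not from your setup), the one-shot form run from the destination looks roughly like:

    fas3040> vfiler migrate vf_rajesh@fas3070              (default: SnapMirror copy of the vfiler's volumes, then cutover)
    fas3040> vfiler migrate -m nocopy vf_rajesh@fas3070    (SnapMover: disk reassign instead of a copy; both heads must see the disks, so not applicable 3070 to 3040 here)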
1) Create all volumes with the same names and the same sizes (or larger) on the target.
2) Ensure all licenses are the same on the target (for example, if CIFS is licensed on the source, it must be licensed on the target). This is done from the vfiler0 context, since the vfiler itself runs off the licenses in vfiler0. Make sure you have a SnapMirror license too.
3) Make sure the network can handle the vfiler move (VLANs, routing, etc.). Example commands for these prep steps are sketched below.
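As a rough sketch of that prep (volume names, sizes and hostnames here are made-up examples, not your real ones):

    fas3040> vol create vf_data1 aggr1 500g            (match the source volume's name and size, or larger)
    fas3040> vol create vf_data2 aggr2 800g
    fas3040> license                                   (compare against the 3070; "license add <code>" for anything missing, including snapmirror)
    fas3070> options snapmirror.access host=fas3040    (on the source, allow the 3040 to pull the mirrors)
    fas3070> options rsh.enable on                     (rsh is needed on both heads for vfiler migrate)
    fas3040> options rsh.enable on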
Do a "vfiler migrate start" on the target 3040... then monitor status with snapmirror status and "vfiler migrate status"... for cutover, use "vfiler migrate complete" which will stop the source vfiler, do final mirror updates, then activate the vfiler on the target... make sure lag times are low on mirrors so the cutover completes in a timely manner before timeouts... cifs clients will have to reconnect (very similar to a cluster failover). Also, the vol options fs_size_fixed stays on so you should turn that off for each volume on the target (vols that were in the vfiler). The vfiler migrate commands require rsh enabled on source and dest and you specify vfilername@sourceIP from the target system...sourceIP of vfiler0...
I would test this in the simulator or with a test vfiler first to get comfortable with it and prove the concept. It works great, but you have to be careful to make sure all your infrastructure can handle the vfiler move.
Thanks, Scott, for coming back; there is a slight change here.
What we are doing here is not a migrate but a move (sorry, I should have clarified). We have two aggregates holding x number of volumes, and a vfiler owns them.
Due to load issues we are now moving to a dedicated 3040 cluster, so with this in the background, the plan in brief is to:
- make sure the disks are marked and ownership is transferred to the new filer, bring the aggregates online, and then move the vfiler root folder and recreate the vfiler (unless there is a way to move even the root volume of the vfiler into those same aggregates and then just move the two aggregates).
In this scenario, I'm just worried about the vfiler and related things like CIFS shares, quotas, etc. I need some pointers on those.
Waiting for your reply.
Regards
Rajesh
Got it... to confirm, the FAS3070 will keep running and you are going to remove the individual disks that make up complete aggregates and move them to the FAS3040. In that case, it will be some manual work. Here's what I'd look at (and test in the sim or on a test aggregate first). This is moving only the aggregate(s) by physically taking the disk drives between systems, assuming the entire set of vfiler resources sits on the disks being removed. This isn't a guaranteed or tested plan, but one that will work with some tweaking; definitely test it on non-live data first and open a support case with the GSC at 888-4-netapp too. Also, if you can spare the disk, I think a vfiler migrate is a better no-downtime way to move the vfiler from the 3070 to the 3040 using SnapMirror; if you have the spares anyway to refill the shelves on the 3070, I'd definitely do a simple migrate instead of a partial disk swap. A full head swap would also be easier than what you are doing. And there is no easy way to move a vfiler root volume; there is no "vol options volname root" for vfilers. The only way to move the root vol is to copy the contents of /etc and make sure the new volume you use ends up with the same name, which means renaming the existing volume after copying.
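Purely as a hedged outline of the kind of steps involved (disk, aggregate, volume and vfiler names below are placeholders, and this assumes the vfiler's root volume lives in one of the moved aggregates; verify every step with support before touching production):

    fas3070> vfiler stop vf_rajesh
    fas3070> vfiler destroy vf_rajesh                 (removes the vfiler definition; the volumes and the config under its root /etc stay intact)
    fas3070> vol offline vf_data1                     (each volume in the aggregates must be offlined before the aggregate)
    fas3070> vol offline vf_data2
    fas3070> vol offline vf_root
    fas3070> aggr offline vf_aggr1
    fas3070> aggr offline vf_aggr2
    fas3070> disk assign 0a.16 -s unowned -f          (repeat per disk in those aggregates; some ONTAP versions want advanced privilege for this; then power down and move the shelves)
    fas3040> disk show -n                             (confirm the moved disks show up as unowned)
    fas3040> disk assign 0a.16                        (assign each disk, or "disk assign all", to the node that should own them)
    fas3040> aggr online vf_aggr1
    fas3040> aggr online vf_aggr2
    fas3040> vfiler create vf_rajesh -r /vol/vf_root  (rebuilds the vfiler from the configuration stored in its existing root volume)

The "-r" form of vfiler create reads the stored configuration (CIFS shares, quotas, exports) back from the root volume, which is why keeping the root volume intact and correctly named matters so much in this approach.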
Again, consider the simpler vfiler migrate with SnapMirror rather than going through all this manual work. It is feasible, and swapping only the drives in the aggregates for the vfiler can work, but see the caveats above. It's a good science project, yet one where I'd even loan/rent/sell a customer disks to make the migrate simpler. A FULL head swap would be much easier too. The hard part of this is moving only some of the aggregates and not missing anything, then deleting and recreating the vfiler in vfiler0 from the 3070 to the 3040.
Again, it can work, but I don't recommend it unless you have no way of getting disks to migrate to. Let us know how it goes and, once more, test it first on non-prod data or the simulator and open a pre-emptive case to help with this.
Thanks, Scott, this helps. Also, we can't use SnapMirror as we don't have shelves on the 3040, so we have to live with it.
A quick question on disk ownership removal:
Since the current filer is a cluster and both heads have these disks in their registry (I'm assuming that), shouldn't we be removing ownership on both?
Think of it as moving an aggregate on a clustered filer for a second.
Cheers
Rajesh
So you are going to move full shelves from the 3070 to the 3040? If so, remember that you can't hot-remove shelves with ONTAP; you will have to power off the 3070.
Disk ownership belongs to ONLY ONE node, not both... one or the other. So removing ownership removes it for that one node and places the disk in the unassigned pool. Then, on the 3040, assign the disks to the correct node.
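For example (the disk name is a placeholder), the ownership handoff for one disk looks roughly like:

    fas3070> disk assign 0a.17 -s unowned -f    (releases the disk from the 3070 node into the unassigned pool)
    fas3040> disk show -n                       (lists the unowned disks the 3040 can see)
    fas3040> disk assign 0a.17                  (the node you run this on takes ownership)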
If you can borrow disk shelves and a temp snapmirror license, that would still be best 🙂
Yes, we will be taking downtime for the current primary filer. As far as SnapMirror goes, we don't have the luxury of more shelves, so it's ruled out :(.
Thanks, mate, for all the knowledge sharing so far; I'll keep in touch and let you know how it goes.
Cheers
Rajesh