vfiler migration and DataMotion move an entire vfiler and handle the mirrors as part of the vfiler process. So if there are 50 volumes in the vfiler, all of them are handled by DataMotion in the NMC or by the vfiler migrate command. The requirements are in the DataMotion guide, including limits on volume counts (which depend on the controller type). The target FAS needs the exact same volume names and the same IPspace for the vfiler; then it migrates.
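For reference, a 7-Mode command-line migration looks roughly like this. Names here are placeholders (vf1, filerA), and this is a sketch, not a substitute for the DataMotion guide:

    dstfiler> vfiler migrate start vf1@filerA     # run on the destination; begins mirroring
    dstfiler> vfiler migrate status vf1@filerA    # check transfer progress
    dstfiler> vfiler migrate complete vf1@filerA  # cut over once transfers are caught up

The plain form `vfiler migrate vf1@filerA` does the whole sequence in one step; the start/status/complete form lets you control when the cutover happens.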
Three-way isn't supported, but for vfiler DR you can have two targets: A can vfiler DR to both B and C. Migrate and DataMotion are one-way operations. If you want multiple vfiler targets, I would use vfiler DR and use separate IP addresses on activation, if that is the goal.
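As a sketch of the two-target DR layout described above (vfiler and filer names are placeholders):

    filerB> vfiler dr configure vf1@filerA   # run on target B, mirrors vf1 from A
    filerC> vfiler dr configure vf1@filerA   # run on target C, second independent DR copy
    filerB> vfiler dr activate vf1@filerA    # on disaster, activate on whichever target you choose

Each `vfiler dr configure` sets up its own SnapMirror relationships from A, which is why separate IP addresses on activation matter if both targets could ever be active.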
Thank you. Roger and I have been giving the class for the last 4 years, and I finished updating it for ONTAP 8.1. We will likely soon be using a vfiler-to-Vserver migration tool as c-mode becomes more prominent. They correlate almost directly, with some differences; for example, there is no need to create IPspaces, since that is inherent in a Vserver.
I'd like to know if any of you have already experienced this bug and whether you found a way to mitigate it.
In Scott's lab, in section 5 (Moving the vfiler root volume), I saw that you use vol copy to do it. Could SnapMirror be used instead, or were you using vol copy precisely to avoid using SnapMirror?
You could vfiler migrate or DataMotion to a different controller, then migrate or motion back to the same controller. If there is room on another controller, that is usually easier: it's an additional migration, but when automated it is simpler than doing everything on the same controller at once.
The lab uses vol copy, but it could have been SnapMirror. The bug listed is for "snapmirror migrate" specifically, i.e. when using the migrate feature to cut over. If you use SnapMirror, don't use migrate for the cutover: destroy the vfiler, quiesce/break the mirrors, then vfiler create -r. That way the migrate function (which is not often used, and not recommended for moving a vfiler root) is never invoked. So either use vol copy or volume SnapMirror, then plan an outage to recreate the vfiler without snapmirror migrate (break the mirrors instead and recreate). The motion to another controller and back would be more seamless, but it moves more data.
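The manual cutover described above could be sketched as follows. All names (vf1, vf1_root, filerA, filerB) are placeholders, and this is an outline to adapt, not a tested runbook; verify each step against your environment and plan the outage window first:

    filerA> vfiler stop vf1                      # take the source vfiler offline
    filerA> vfiler destroy vf1                   # remove it on the source (volumes remain)
    filerB> snapmirror quiesce vf1_root          # finish in-flight transfers on the destination
    filerB> snapmirror break vf1_root            # make the mirrored volumes writable
    filerB> vfiler create vf1 -r /vol/vf1_root   # rebuild the vfiler from the mirrored root

Repeat the quiesce/break for each mirrored data volume before the `vfiler create -r`, which reads the vfiler configuration back out of the root volume.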
Good idea to use vfiler migrate or DataMotion to a different controller, then back. But there are a couple of TB of volumes in production, and I'm not sure the client will love this idea.
I've already planned to use the "vfiler create -r ..." method, and the client is aware of the downtime. In fact, you reassure me by saying that it's only the "migrate" portion of SnapMirror that causes the bug (I agree, the snapmirror migrate function is not used very often). I'm going to talk to the client about DataMotion to a different controller and back, but I'm pretty sure we will stick to the original plan.
Thanks for your help.
PS: Too bad I'm not the originator of this post, so I couldn't tag your answer as correct.
It would be nice if vfiler move/migrate worked this way. We have a controller failover situation and can't perform a giveback operation due to the large number of connected CIFS users; the CIFS shares are on running vfilers. We also have SAN LUNs on these controllers. Because of the active CIFS sessions we can't perform a giveback, and hence all the other services (SAN) are impacted, which is not what our organisation wants. It would be nice if there were an option to just change ownership of the vfiler from controllerA to controllerB (in the takeover scenario); that way all the other SAN volumes would still be on controllerA, and we could then perform a cf giveback operation. Later, out of business hours, we could move/migrate the vfiler back to the original controller.