2011-10-04 08:14 AM
I have a customer using NetApp Data ONTAP 8.0.2 with VMware vSphere 5 over NFS (VSC 2.1.1).
He wants to move volumes from SAS aggregates to SATA aggregates nondisruptively.
DataMotion for Volumes doesn't work for this. Is there another way to do it without using VMware Storage vMotion?
2011-10-04 09:51 AM
We did this at a customer successfully, but I can't recommend it. SnapMirror Migrate does move the volume and all file handles, but it does not update exports, so once the migrate completed we had to update the exports quickly and run exportfs to keep the mount alive. Data Motion for Volumes in C-Mode handles both NAS and SAN, and that looks like the direction for full support for migrating any volume; in 7-Mode it works with LUN volumes only. SnapMirror Migrate almost seems like it was never finished, in that it doesn't update shares and exports after migration.
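For the record, the post-migrate fix on the filer looked roughly like this (paths and options are placeholders, not the actual customer values):

rdfile /etc/exports
(check which entry still points at the old volume's path)
wrfile /etc/exports
(rewrite the file so the path the clients have mounted maps to the migrated volume, keeping your existing export options)
exportfs -a
(re-export everything listed in /etc/exports)

The faster you get exportfs run after the cutover, the less time the clients spend with a stale mount.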
2011-10-05 09:02 AM
I have also used snapmirror migrate a few times, but only to avoid remounting filesystems on several hundred NFS clients. As Scott already mentioned, with VMware you have to be really quick re-exporting the filesystem or your VMs will crash. With e.g. Solaris or Linux, the NFS client is far more resilient and does not mind waiting for several minutes.
2011-10-06 05:17 AM
An issue with snapmirror migrate you need to be aware of is that it temporarily disables NFS for the WHOLE filer, not only for the volume being migrated!
You won't need to re-export any other volumes, of course, but NFS clients other than VMware will also notice a relatively long delay in responses from the filer.
2012-05-30 07:32 AM
When you mention that NFS gets disabled for the whole filer, do you mean the admin has to do something, or does the system take care of re-enabling NFS automatically? And what is the typical delay between the NFS-disable and NFS-enable phases?
2012-05-30 07:37 AM
It turns back on automatically, but the time depends on the mirror cutover of the snapmirror migrate volume. The migrated volume takes over the source volume's FSID and file handles, so clients don't know it moved, but the export is not updated to the new location, so that has to be fixed quickly right after the migrate finishes. It almost seems like snapmirror migrate was a partial vol move (most pieces there) from several years ago. Clients should handle the timeout, but it isn't great that all volumes are affected, since the protocol is stopped for the sake of the one migrating volume; then the export has to be updated on top of that, so there are a couple of gotchas with this method.
2012-05-30 11:40 PM
I have a small point of confusion.
As I understand it, the steps are the following:
vol create <new_vol> <new_aggr> <size>
vol restrict <new_vol>
snapmirror initialize -S <filer>:<old_vol> <filer>:<new_vol>
check the snapmirror status, and then do an update just before the migrate:
snapmirror migrate <old_vol> <new_vol>
So my confusion is: when I created new_vol, it automatically got exported over NFS. Once I complete the snapmirror migrate, why do I need to export it again?
Am I missing something?
Thanks for your help.
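Putting the whole thread together, the end-to-end sequence looks roughly like this (names are placeholders, and the export options depend on your environment):

vol create <new_vol> <new_aggr> <size>
vol restrict <new_vol>
snapmirror initialize -S <filer>:<old_vol> <filer>:<new_vol>
snapmirror update -S <filer>:<old_vol> <filer>:<new_vol>
snapmirror migrate <old_vol> <new_vol>
(immediately afterwards: fix /etc/exports so the path the clients have mounted points at the surviving volume, then run exportfs -a)

The last step is the part snapmirror migrate does not do for you, and it is why the export has to be re-done even though new_vol was auto-exported at creation: the auto-export is for the new volume's own path, not the path the clients already have mounted.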
2012-05-31 12:07 AM
The exports of the old and new names are different. The client has the old name mounted, so the volume name and the mount are the issue. I wouldn't call it nondisruptive, depending on timing.
2012-05-31 12:17 AM
I agree with you.
But in a totally Linux-based environment accessing multiple NFS shares on the NetApp, we can use it. As you know, a few minutes of NFS downtime won't affect the Linux clients; they just hang for a while and then continue. And even if the mount points still show the old name after the migrate, that won't affect the jobs that are running.
So I call it pseudo-nondisruptive :-) for a pure NFS and Linux environment. I do not know how a VMware environment using NFS will behave.
As you know, I was looking for a smooth NFS volume move within a filer, and DataMotion on 8.0 was no use for me, but thanks to you, at least I can use the above method. I will also summarize in my original post once I've done some thorough testing.