VMware Solutions Discussions

Move NFS Volumes nondisruptively in 7-Mode

rojas

Hi All,

I have a customer running NetApp Data ONTAP 8.0.2 serving VMware vSphere 5 datastores over NFS (VSC 2.1.1).

He wants to move volumes from SAS aggregates to SATA aggregates nondisruptively.

DataMotion for Volumes doesn't work for this. Is there another way to do it without using VMware Storage vMotion?

Thanks,

Mauricio

ACCEPTED SOLUTION

aborzenkov

It may be possible using the SnapMirror migrate feature, but I have never tried it myself, so I cannot comment on how feasible it is.
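For reference, the basic command should be something like the following (I have not tested this myself, and the volume names are placeholders):

snapmirror migrate <old_vol> <new_vol>

which, as I understand it, moves the volume's data and file handles to the destination volume.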

scottgelb

We did it at a customer site successfully, but I can't recommend using it.  SnapMirror migrate does move the volume and all file handles, but it does not update exports... so once the migrate completed we quickly had to update the exports and run exportfs to keep the mount.  Data Motion for Volumes in Cluster-Mode handles NAS and SAN, and that looks like the direction to get full support for migrating any volume; in 7-Mode it is LUN volumes only.  SnapMirror migrate almost seems like it was never finished, in that it doesn't update shares and exports after migration.
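For anyone trying it, the fix-up right after the migrate completes is essentially to re-point the export at the new volume path and re-export, something like this (the volume name and export options are placeholders for whatever you actually use):

exportfs -p sec=sys,rw /vol/new_vol

exportfs -a

exportfs -p writes the rule into /etc/exports persistently, and exportfs -a (re-)exports everything listed there.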

pascalduk

I have also used snapmirror migrate a few times, but only to avoid remounting filesystems on several hundred NFS clients. As Scott already mentioned, with VMware you have to be really quick with re-exporting the filesystem or your VMs will crash. With e.g. Solaris or Linux, the NFS client is far more resilient and does not mind waiting for several minutes.
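That resilience assumes the usual hard mounts, of course; a Linux client mounted along these lines (hostname and paths are just examples) will keep retrying until the filer responds again instead of failing I/O:

mount -t nfs -o hard,intr filer1:/vol/datavol /mnt/datavol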

pascalduk

An issue with snapmirror migrate you need to be aware of is that it temporarily disables NFS for the WHOLE filer, not only for the volume you are migrating!

Of course you won't need to re-export any other volumes, but NFS clients other than VMware will also notice a relatively long delay in response from the filer.
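If you want to see when NFS comes back, you can check from the filer console (assuming the 7-Mode CLI here):

nfs status

which simply reports whether the NFS server is currently running.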

rajdeepsengupta

When you mention that NFS gets disabled for the whole filer, do you mean the admin has to do something, or does the system take care of re-enabling NFS automatically? And what is the typical delay between the NFS disable and NFS enable phases?

scottgelb

It turns back on automatically, but the time is dependent on the mirror cutover of the snapmirror migrate volume.  The migrated volume takes over the volume's FSID and file handles, so clients don't know it moved... but the export is not updated to the new location, so that needs to be updated quickly right after the migrate finishes.  It almost seems like snapmirror migrate was a partial vol move (most of the pieces are there) from several years ago... clients should handle the timeout, but it isn't great that all volumes are affected, since the protocol is stopped for the sake of the one migrating volume... then the single export has to be updated as well, so there are a couple of gotchas with this method.
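To keep that cutover window small, get the mirror as up to date as possible and watch the lag before migrating, for example with (the destination volume name is a placeholder):

snapmirror status -l <new_vol>

The lag time shown there gives a feel for how much the final transfer still has to catch up.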

rajdeepsengupta

Scott,

I'm a little confused.

As per the steps, I understand we need to do the following:

vol create <new_vol> <new_aggr> <size>

vol restrict <new_vol>

snapmirror initialize -S <filer>:<old_vol> <filer>:<new_vol>

Check the snapmirror status, and then do an update just before the migrate.

Finally:

snapmirror migrate <old_vol> <new_vol>

So my confusion is: when I created <new_vol>, it was automatically exported over NFS. Once I complete the snapmirror migrate, why do I need to export it again?

Am I missing something?

Thanks for your help.

scottgelb

The exports for the old and new names are different. The client has the old name mounted, so the volume name and mount are the issue. I wouldn't call it non-disruptive, depending on timing.
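To make it concrete: the entry in /etc/exports that the clients mounted looks something like this (names and options here are placeholders):

/vol/old_vol -sec=sys,rw

and after the migrate you have to add the corresponding line for the new path yourself and re-export with exportfs -a:

/vol/new_vol -sec=sys,rw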


rajdeepsengupta

I agree with you.

But in a totally Linux-based environment accessing multiple NFS shares on the NetApp, we can use it. As you know, a few minutes of NFS downtime won't affect the Linux clients; they just hang for a while and then continue. Also, even if the mount points still show the old name after the migrate, that won't affect the jobs that are running.

So I call it pseudo-nondisruptive 🙂 for a pure NFS and Linux environment. I don't know how a VMware environment using NFS will behave.

As you know, I was looking for smooth NFS volume movement within a filer, and DataMotion on 8.0 was no use for me; but thanks to you, at least I can use the above method. I will also summarize in my original post once I do some good testing.

Thanks

pascalduk

A general warning for anyone planning to use snapmirror migrate between systems running ONTAP 8.0.x and 8.1 7-Mode: it does not work!

The snapmirror migrate aborts because it can't complete the file handle transfer to the new system; this is a known bug (547843). Luckily I found this one while performing my migration test and not during the actual controller change (and migration of data to new disks). The only workaround at the moment was to upgrade the source controllers to ONTAP 8.1 and then execute the snapmirror migrates.

scottgelb

Really good info. Is the use case for most people intra-controller, though? That way the client target IP/hostname doesn't change... but I can see it as a way to move between controllers too.

pascalduk

I agree that in most cases it will be intra-controller migrations and you will not run into this issue.

In my case I was replacing the controllers and some of the disk shelves, and some of the disk shelves, including their data, needed to be reused. I had the new controllers up and running with the new disk shelves and a temporary hostname/IP address. For the data on the disk shelves that had to be replaced, I performed the base snapmirror transfer to the new controllers/disks.

During the change window, I performed the snapmirror migrate to the new controllers, shut down the old controllers, and connected the to-be-reused disk shelves, including the data on them, to the new controllers. Then the new controllers were renamed to the original controller names, including IP addresses. It worked really well, and this way we did not need to reboot 500 NFS clients.

rajdeepsengupta

That's nice. I was under the assumption that migrate could only be done intra-controller; I wasn't aware that it is smart enough to handle the FSID etc. even across controllers.

Thanks for sharing this, and of course the bug related to going from 8.0.x to 8.1.

BTW, I know there are some good features in 8.1 compared to 8.0.x, but is there any performance benefit as well?

TOPHAN

I have also done NFS volume migrations many times in Linux environments. Several times there was no issue because I updated the export immediately, but a few times it produced errors on the client machines due to stale file handles, which required an unmount and remount.
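When that happened, the fix on the Linux client was a forced unmount and remount, something like this (the server name and mount point are just examples):

umount -f /mnt/datavol

mount -t nfs filer1:/vol/datavol /mnt/datavol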
