
NFS Exporting

paul_wolf

If this is discussed somewhere else, I can't seem to find it.

Looking for a way to export a volume and set an FSID 'alias' so that servers can mount the path using the same FSID. We have quite a lot of volume migrations to do where we have to cut volumes over from one aggregate to a new aggregate, so there will be a brief interruption in communication, and we are also moving vFiler root volumes, which requires destroying the vFiler and recreating it. In both cases the mounts go stale. Remounting corrects the issue, but we have many hundreds of servers mounting these exports, and rather than driving our Open Systems team completely around the bend, I've been looking for a way to present each mount using a unique FSID so that, moving forward, each host can mount the export using that FSID and not have to renegotiate one every time the mount is performed.
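
For reference, the cleanup we do on each client today looks roughly like this (the path is just an example, and it assumes the export is listed in /etc/fstab):

    umount -f /mnt/projects    # force-unmount the stale NFS handle
    mount /mnt/projects        # remount from /etc/fstab, which picks up the new FSID

Doing that across hundreds of hosts for every cutover is what we're trying to avoid.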

I hope I explained it correctly.

Thanks

Paul


scottgelb

There is a snapmirror migrate command (not supported for vFiler root volumes, though, and not often used) that moves the volume FSID along with the completion of the process, so mounts don't go stale. Caveats are that it does not update the exports file, so you have to edit that quickly afterwards, and that when it does the cutover of the volume it turns NFS off and back on for the entire controller. Even though client timeouts can handle that, I don't like doing it since all mounts on the controller are affected for the sake of one mount/mirror. We have used it successfully to move NFS volumes without clients going stale, but it takes some testing and nail biting while it happens.
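
Roughly, the sequence from the controller console looks like the sketch below. Names, sizes, and export options are placeholders, and you should confirm the exact snapmirror migrate syntax against the man page for your ONTAP release:

    vol create newvol newaggr 500g                        # destination volume on the new aggregate
    vol restrict newvol                                   # destination must be restricted for SnapMirror
    snapmirror initialize -S filer:oldvol filer:newvol    # baseline copy; keep it updated until cutover
    snapmirror migrate filer:oldvol filer:newvol          # cutover: moves the FSID, briefly toggles NFS
    exportfs -p rw=host1:host2 /vol/newvol                # migrate does NOT rewrite /etc/exports, so fix it right away
    exportfs -a                                           # re-export everything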

A less disruptive (but brute-force and temporarily space-hungry) method to move vFiler volumes within the same controller is to move them to another controller, then back again. If it's a different cluster and you're on ONTAP 7.3 or 8.1 (not 8.0), you can use Data Motion for vFilers and migrate the vFiler to another node, keeping all IPs, FSIDs, and mounts intact, then move it back to the aggregates you want (even if some were the same as before). The result is more like an NDU upgrade or a cluster failover, but at the cost of having disk and another controller available for that vFiler temporarily until you move it back. If you cannot run Data Motion, you can run "vfiler migrate" manually from the command line and do the same thing; there's just no guaranteed 120-second cutover like Data Motion gives you, but with the mirrors kept closely updated the final mirror can typically finish in that time (no guarantee, though, so Data Motion is the better option).
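
If you do go the manual route, the shape of it, run from the destination filer, is something like the sketch below. The vFiler and filer names are placeholders and the subcommand syntax varies a bit by release, so check the vfiler man page (and have credentials for the source filer handy) before trying it:

    vfiler migrate start vfiler1@sourcefiler       # sets up SnapMirror copies of the vFiler's volumes
    snapmirror status                              # watch the mirrors catch up
    vfiler migrate complete vfiler1@sourcefiler    # cutover: the vFiler's IPs and FSIDs move with it
    # then repeat in the other direction to land it back on the aggregates you want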
