ONTAP Discussions

ONTAP NFS using OnCommand - Data migration

siemensocs

Hello Community,

 

We have recently purchased a new NetApp FAS2720 running ONTAP 9 as a replacement for our FAS2040 running Data ONTAP 8. A NetApp third-party vendor is assisting us with the data migration using the 7MTT tool. We have already completed the data migration phase and are now planning the pre-cutover (test) phase and the cutover.

During the precheck, 7MTT reported an error: it detected that some of the volumes need to be exported explicitly as read-only in order to function on the new NetApp. This is surprising, because the underlying qtrees already have proper NFS entries in /etc/exports, but I suppose that's how it is.

 

1. So we have to manually (I hate this) add a rule allowing read-only access for each individual volume in /etc/exports. The problem is that we have a large number of clients accessing these shares, and so far we have only used the OnCommand GUI to manage NFS. I don't think the GUI can be used if I have to enter, say, 50+ IP addresses in an NFS rule; as far as I can tell, it does not allow entering multiple IP addresses at once.

So my question is: is there any way to use the GUI to enter, say, 50 IP addresses at once? I have already tried entering multiple addresses, but it does not accept them.

 

2. We also have a few ESXi hosts running VMs. These ESXi hosts use datastores from the NetApp via NFS. My question is about the actual cutover: we plan to shut down all VMs first, unmount the datastores from the old NetApp, re-add the datastores using the new NetApp address but (I suppose) keeping the existing datastore names (??), and then we should be able to start the VMs just like that. At least that is what we have been told. But will it really be that straightforward, or are there other things we have to take care of?

I guess the datastore names have to be kept the same, because the individual VMs access their disks on the datastores using these names. For example, in the VM settings it is stated that (please see the attached figure).
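For illustration (these names are made up): in the vSphere client, a VM's hard disk is listed with a path that embeds the datastore name, something like

[nfs_datastore01] myvm/myvm.vmdk

so if the datastore were re-added under a different name, that reference would presumably no longer resolve.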

 

These are the important queries I have. We would be grateful for any good pointers.

 

Thanks in advance.

 

Regards,

- Admin

 

3 REPLIES

cruxrealm

#1: You can only do this from the CLI, which is easy enough:

1. Create the export policy:

vserver export-policy create -vserver <SVM> -policyname <policyname>

example: vserver export-policy create -vserver myvsm -policyname myexpolicy

2. Add rules to the export policy:

vserver export-policy rule create -vserver <SVM> -policyname <policyname> -clientmatch <ipaddress1,ipaddress2> -protocol <any|nfs|nfs3|nfs4> -rorule <rorule> -rwrule <rwrule> -superuser <surule>

example: vserver export-policy rule create -vserver myvsm -policyname myexpolicy -clientmatch 10.10.10.10,10.10.10.11 -protocol nfs3 -rorule sys -rwrule sys -superuser sys

3. Apply the policy to the volume:

volume modify -vserver <SVM> -volume <volumename> -policy <policyname>

example: volume modify -vserver myvsm -volume myvolume -policy myexpolicy
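To verify afterwards (using the same example names), you can list the rules on the policy and check which export policy the volume now uses:

vserver export-policy rule show -vserver myvsm -policyname myexpolicy

volume show -vserver myvsm -volume myvolume -fields policy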

#2: It is not as easy as mapping the datastore and starting the VM. There will be more things you need to do on the vSphere side (and for each VM) to make this work. The best way to do a datastore move is to provision a new NFS datastore and perform a storage-only (datastore) migration. That way, you do not need to bring down all your VMs.
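If you do go the unmount/remount route at cutover, each ESXi host can also mount the new NFS datastore from the command line; a rough sketch (the address, export path, and datastore name are placeholders to adapt):

esxcli storage nfs add -H <new-netapp-data-lif-ip> -s /<junction-path> -v <existing-datastore-name>

esxcli storage nfs list

Keeping the datastore name (-v) identical to the old one helps the existing VM configurations keep pointing at the same paths, though the VMs may still need to be re-registered on the remounted datastore.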

siemensocs

1. Thanks for the commands and explanation.  I shall try this out.

 

2. What do you mean by it not being so straightforward? We have been told that, after the data migration is complete, we just have to add the NFS datastores, which now have a new IP address, and then we can start the VMs. What extra steps should one take? That is the reason I posted this question.

Secondly, what do you mean by "storage only (datastore) migration"? If you mean a vMotion migration, then I am afraid we do not have that 😞

 

Thanks in advance.

cruxrealm

#2: As long as NetApp can guide you on the NFS datastore cutover, it should be good. And you are correct, the right term is Storage vMotion (a storage-only migration).
