We have recently purchased a new NetApp 2720 running ONTAP 9 as a replacement for our 2040 running ONTAP 8. A NetApp third-party vendor is assisting us with the data migration using the 7MTT tool. The data migration phase is already done, and we are now planning the pre-cutover (test) phase and the cutover.
During the precheck, 7MTT reported an error: it detected that some of the volumes need to be exported explicitly as read-only in order to function on the new NetApp. This is surprising, since the underlying qtrees already have proper NFS entries in /etc/exports. But I suppose that's how it is.
1. So we have to manually (I hate this) add a rule allowing read-only access for each individual volume in /etc/exports. The problem is that a large number of clients access these shares, and so far we have managed NFS only through the OnCommand GUI. I don't think the GUI can be used when I have to enter, say, 50+ IP addresses in an NFS rule; as far as I can tell, it does not allow entering multiple IP addresses at once.
So my question is: is there any way to enter, say, 50 IP addresses at once through the GUI? I have already tried entering multiple addresses, but it does not accept them.
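If the GUI turns out to be a dead end, one workaround is to generate the export rule on an admin host and paste it into /etc/exports on the filer. A minimal sketch, assuming 7-Mode export syntax (hosts inside a `ro=` list are separated by colons) and using a placeholder volume name and placeholder IPs — substitute your real volume and your real client list:

```shell
#!/bin/sh
# Sketch: build a 7-Mode /etc/exports read-only rule from a client list.
# /vol/vol_data and the IPs below are placeholders for illustration.

VOLUME=/vol/vol_data

# One client IP per line; in practice this would be your real 50+ clients.
cat > clients.txt <<'EOF'
10.0.0.1
10.0.0.2
10.0.0.3
EOF

# Join the IPs with ':' -- the separator used inside ro=/rw= host lists.
HOSTS=$(paste -sd: clients.txt)

# Print the finished exports line; append it to /etc/exports on the
# filer (or use it with 'exportfs -io' for a temporary export).
printf '%s -sec=sys,ro=%s\n' "$VOLUME" "$HOSTS"
```

This sidesteps the GUI entirely: you maintain the client list in a plain file and regenerate the rule whenever the list changes.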
2. We also have a few ESXi hosts running VMs. These ESXi hosts use datastores from the NetApp via NFS. My question is about the actual cutover: the plan is to shut down all VMs first, unmount the datastores from the old NetApp, then re-add the datastores using the new NetApp address but (I suppose?) keeping the existing datastore names, and then simply start the VMs again. At least that is what we have been told. But will it really be that straightforward, or are there other things we have to take care of?
I assume the datastore names have to stay the same, since the individual VMs reference their disks on the datastores by these names. For example, the VM settings show the datastore name in the disk path (please see the attached figure).
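For reference, the shutdown-based cutover described above can be sketched per ESXi host with `esxcli`. The filer IP, export path, and datastore name below are placeholders. The script only prints the commands for review; on a real host you would run them directly (after powering off the VMs). Note also that re-adding an NFS datastore under a new server address can leave previously registered VMs showing as inaccessible, so be prepared to re-register them — this is likely part of the "extra work" mentioned in the reply below:

```shell
#!/bin/sh
# Dry-run sketch of the per-host NFS datastore cutover (manual approach,
# no Storage vMotion). All values are placeholders.

OLD_NAME=nfs_ds01          # existing datastore name -- keep it identical
NEW_FILER=192.0.2.50       # IP of the new NetApp (placeholder)
EXPORT=/vol/vol_vmware     # NFS export path on the new NetApp (placeholder)

# 1. Unmount the datastore that points at the old filer.
echo "esxcli storage nfs remove -v $OLD_NAME"

# 2. Re-add the same datastore name, now backed by the new filer.
echo "esxcli storage nfs add -H $NEW_FILER -s $EXPORT -v $OLD_NAME"

# 3. Verify the mount before powering the VMs back on.
echo "esxcli storage nfs list"
```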
These are the important queries I have. We would be grateful for any good pointers.
#2 It is not as easy as mounting the datastore and starting the VM. There are more things you will need to do on the vSphere side (and to each VM) to make this work. The best way to do a datastore move is to provision a new NFS datastore and do a storage-only (datastore) migration; that way, you do not need to bring down all your VMs.
1. Thanks for the commands and explanation. I shall try this out.
2. What do you mean by "it will not be so straightforward"? We have been told that after the data migration completes, we just have to add the NFS datastores again with the new IP address and we can start the VMs. What is it that one should do extra? That is exactly why I posted this question.
Secondly, what do you mean by "storage-only (datastore) migration"? If you mean Storage vMotion, then I am afraid we do not have a license for it 😞