Network and Storage Protocols

Changing IPs for FAS 2520

NetApp93

Hello All,

 

My section recently conducted a tech refresh for one of our stacks, which included upgrading our FAS 2520 to an AFF-A220. We kept the FAS 2520 in the rack and hooked up to our backend switch so that we could vMotion all of our VMs off of it to the new datastore on the AFF-A220. We may want to use it for something in the future, but nothing important is running on it right now. We want to change the IPs for the five items on our old FAS 2520:

 

1. Cluster Management IP

2. Node 1 IP

3. Node 2 IP 

4. "vm_NFS_lif_1"

5. "vm_NFS_lif_mgmt"

 

From what I'm getting from this article, the idea is to create a brand-new LIF with the new IP on it, then transfer services to those new LIFs. However, the article seems to be a discussion of how to do this with no downtime, which doesn't seem necessary for my use case. In the System Manager GUI for my FAS 2520, I can go to Network > Overview and edit all of those LIFs, changing their IPs. That seems a lot simpler than creating new LIFs. Would this do everything I need it to do? Or would doing so break the datastore? I apologize in advance if these are dumb questions; I'm quite new to NetApp and storage engineering in general.
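(For anyone following along: before touching anything, the current addresses and home ports of all five of those LIFs can be captured from the cluster shell. This is a standard ONTAP command; the fields shown are just the ones worth recording before a re-IP:

::> network interface show -fields address, netmask, home-node, home-port

Saving that output gives you something to roll back to if a change goes sideways.)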

 



Ontapforrum

For the mgmt IPs, no issues: you can change them by editing/modifying via the CLI. For the NFS data LIF, you can also change the IP in place, but that change is not propagated to the VMware hosts mounting the datastore, so you will need to unmount and remount the NFS datastore from the VMware side. To answer your query: yes, it will break/disconnect the datastore. Also, if the SVM NFS mgmt LIF IP is changed, you may have to rediscover it in SnapCenter (if you use that tool).
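If you do the in-place change from the CLI rather than the GUI, a minimal sketch looks like this (the LIF name is taken from the original post; the SVM name, address, and netmask are placeholders):

::> network interface modify -vserver <svm_name> -lif vm_NFS_lif_1 -address <new_ip> -netmask <netmask>

The same command works for the node and cluster management LIFs, though your SSH/GUI session will drop the moment the address you're connected through changes.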

NetApp93

Would I need to unmount the datastore, change the IPs in the System Manager GUI, then remount? Or should we unmount/remount after we do the re-IP?

Ontapforrum

Ideally, the first option: unmount the datastore, change the IPs in the System Manager GUI, then remount with the changed IP.
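On the VMware side, the unmount/remount can be done per host with esxcli (the datastore name, export path, and new LIF IP below are placeholders; the volume must be exported to the host at the new address):

esxcli storage nfs remove -v <datastore_name>
esxcli storage nfs add -H <new_lif_ip> -s /vol/<volume_name> -v <datastore_name>

Make sure all VMs on the datastore are powered off or migrated away before the remove step, as was already done in this case.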

NetApp93

So we did as you said: we unmounted the datastore from the ESXi hosts it was connected to, then I tried to change the cluster management IP. It seemed as if this went through successfully, but when I refreshed the page I was hit with an error. This is what I expected, since the cluster management IP changed and I would have to browse to the new address to reach the GUI. However, now I can't get to System Manager at either the new or the old IP. I also tried to SSH in via PuTTY, and it can't reach either the new or the old IP. I tried pinging the new and old IPs as well, with nothing for either. I can ping the other IPs I listed above, along with other IPs in the same subnet as my new management IP. However, when I log onto the GUI for one of my two nodes, I can see that the cluster management IP successfully changed to the new IP. Any clue what's going on/how to fix it?

Ontapforrum

The cluster_mgmt LIF may have moved to a non-default node/port.

 

Check whether the LIF has moved off its home port:

::> net int show -fields home-node, home-port, curr-node, curr-port

 

If the results differ between home and current, simply move it back using the following command:

 

::> net int revert -vserver <vservername> -lif <lif name>

 

That should fix the issue.

NetApp93

So we actually figured that part out: our switch had a misconfiguration and didn't have the new VLAN added to the trunk port. I'm pretty sure we're just about done; I was able to get back onto the GUI and change the other IPs. We've remounted the datastore to the ESXi hosts successfully, but when we moved one VM over to test it out, the migration failed without much of an explanation why. We tried rebooting each node in the cluster, but that didn't do the trick. Still trying to troubleshoot why vCenter wants to give us trouble. I appreciate all of your help so far! Also, I checked the home node/port; all is as it should be.

NetApp93

Disregard, turns out I need to up my networking game! It was the MTU: I needed to set the correct MTU on the switch interfaces. The VM moved over to the datastore successfully after that. Thanks so much for your help @Ontapforrum !
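(For anyone hitting the same symptom: the MTU can be checked end to end. On the ONTAP side, a standard command shows the configured MTU per port:

::> network port show -fields mtu

And from an ESXi host, a vmkping with the don't-fragment bit set verifies the path at jumbo size. The 8972-byte payload leaves room for the 28 bytes of IP/ICMP headers within a 9000-byte MTU; the vmk interface and LIF IP below are placeholders:

vmkping -I vmk1 -d -s 8972 <nfs_lif_ip>

If the jumbo-size ping fails but a default-size one succeeds, some device in the path, often a switch port, is still at MTU 1500.)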
