ONTAP Discussions

FAS 3240 Nodes to be unjoined from cluster.

sroy
2,985 Views

Hello, 

I have recently migrated the data off of the FAS-3240 nodes and I want to unjoin them from the cluster. This is the first time I have run an unjoin. I need to eject these nodes to be able to upgrade to ONTAP 9.x.

There are still a pair of 3250s and a pair of 8040s in the cluster. Epsilon is not on the 3240 nodes.

 

I ran the unjoin command to see which items were still depended upon.
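For reference, the command was roughly the following, run at advanced privilege (node name taken from the job output below):

"set -privilege advanced"

"cluster unjoin -node CGYNNNT1"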

 

[Job 70997] Job is queued: Cluster unjoin of Node:CGYNNNT1 with UUID:a89c7dad-e66f-11e5-a67b-d97b5b9c65c2.

Error: command failed: [Job 70997] Job failed:

Cluster unjoin only works for nodes not in failover configuration. Node CGYNNNT1 is SFO enabled with partner node CGYNNNT2.

Node "CGYNNNT1" is the home node for one or more data logical interfaces. Either move or delete them from the node and retry the operation. The logical interfaces are: <CGYNNAT1,
cifs_CGYNNAT1_lif_1>, <CGYNNAT1, iscsi_CGYNNAT1_lif_1>, <CGYNNAT1, iscsi_CGYNNAT1_lif_4>, <CGYNNAT1, mgmt_CGYNNAT1_lif_1>, <CGYNNAT1, nfs_CGYNNAT1_lif_1>, <CGYNNAT1,
nfs_CGYNNAT1_lif_2>, <CGYNNAT2, iscsi_CGYNNAT2_lif_2>, <CGYNNAT2, iscsi_CGYNNAT2_lif_3>, <CGYNNAT3, cifs_CGYNNAT3_lif_1>, <CGYNNAT3, nfs_CGYNNAT3_lif_1>.

Node "CGYNNNT1" is the current node for one or more data logical interfaces. Either move or delete them from the node and retry the operation. The logical interfaces are:
<CGYNNAT1, cifs_CGYNNAT1_lif_1>, <CGYNNAT1, iscsi_CGYNNAT1_lif_1>, <CGYNNAT1, iscsi_CGYNNAT1_lif_4>, <CGYNNAT1, mgmt_CGYNNAT1_lif_1>, <CGYNNAT1, nfs_CGYNNAT1_lif_1>, <CGYNNAT1,
nfs_CGYNNAT1_lif_2>, <CGYNNAT2, iscsi_CGYNNAT2_lif_2>, <CGYNNAT2, iscsi_CGYNNAT2_lif_3>, <CGYNNAT3, cifs_CGYNNAT3_lif_1>, <CGYNNAT3, nfs_CGYNNAT3_lif_1>.

Node "CGYNNNT1" has 2 volumes. Please either move or delete them from the node before unjoining. The volumes are: <CGYNNAT1, CGYNNAT1_root>, <CGYNNAT3, CGYNNAT3_root>

 

Can I safely remove all of the existing LIFs, including the mgmt LIF?

I am managing the cluster via 192.168.250.110, and mgmt_CGYNNAT1_lif_1 on CGYNNAT1 uses 192.168.250.147, for example.
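To double-check what that LIF actually is, I assume something like the following would show its owning SVM and role:

"network interface show -lif mgmt_CGYNNAT1_lif_1 -fields vserver,role"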

 

Can I also remove the root volumes on the FAS-3240 nodes and still be able to complete the unjoin procedure?

1 ACCEPTED SOLUTION

SpindleNinja
2,978 Views

The formatting is a little skewed, but it looks like it's crabbing about LIFs and some volumes. You'll have to remove all data and intercluster LIFs, as well as disable failover.

 

No need to worry about cleaning up the root aggrs, other aggrs, or the mgmt/cluster LIFs. You will need to remove (or move) all data volumes and SVM root volumes.
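As a rough sketch of those steps (the SVM, volume, LIF, and aggregate names here are placeholders, not taken from your output):

"volume move start -vserver <svm> -volume <svm_root_volume> -destination-aggregate <aggr_on_remaining_node>"

"network interface delete -vserver <svm> -lif <data_lif>"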

 

Check out these URLs. 

 

https://docs.netapp.com/ontap-9/index.jsp?topic=%2Fcom.netapp.doc.dot-cm-sag%2FGUID-6731B7F7-0C48-4474-A67B-E1F1CCBA77A6.html 

 

https://www.sysadmintutorials.com/tutorials/netapp/netapp-clustered-ontap/netapp-clustered-ontap-node-removal/

 

 

 


3 REPLIES


aborzenkov
2,930 Views

@sroy wrote:

Can I safely remove all of the existing LIFs, including the mgmt LIF?


Where do you see it complaining about a management LIF? It complains about data LIFs belonging to data SVMs.

 


@sroy wrote:

Can I also remove the root volumes on the FAS-3240 nodes and still be able to complete the unjoin procedure?


Again - it complains about SVM root volumes. You obviously cannot remove an SVM root volume without losing access to the SVM's other data volumes (I am not sure it is even possible to remove it). What you can do is move the volume to another aggregate on another node.

 

You cannot remove the node root volume.

TMACMD
2,865 Views

So, start by seeing which LIFs are on your nodes to be evicted:

"net int show -curr-node <fas3240-01>|<fas3240-02> -fields role,curr-node,home-node" 

 

From your output, it looks like the following LIFs need to be modified:

<CGYNNAT1, cifs_CGYNNAT1_lif_1>

<CGYNNAT1, iscsi_CGYNNAT1_lif_1>

<CGYNNAT1, iscsi_CGYNNAT1_lif_4>

<CGYNNAT1, mgmt_CGYNNAT1_lif_1>

<CGYNNAT1, nfs_CGYNNAT1_lif_1>

<CGYNNAT1,nfs_CGYNNAT1_lif_2>

<CGYNNAT2, iscsi_CGYNNAT2_lif_2>

<CGYNNAT2, iscsi_CGYNNAT2_lif_3>

<CGYNNAT3, cifs_CGYNNAT3_lif_1>

<CGYNNAT3, nfs_CGYNNAT3_lif_1>

 

If there are any LIFs of role data on the nodes, they must be migrated *and* moved (or deleted!)

You only want to delete a LIF if you know that no client is accessing it, or ever will again.

If you have any non-admin mgmt LIFs in other SVMs, they should be migrated and moved as well.
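The general form for migrating or deleting a LIF would be something like this (node and port names are placeholders):

"net int migrate -vserver <svm> -lif <lif> -destination-node <new-node> -destination-port <new-port>"

"net int delete -vserver <svm> -lif <lif>"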

 

These two commands modify and then revert the LIF (non-iSCSI!):

"net int modify -vserver <svm> -lif <lif> -home-node <new-node> -home-port <new-port>"

"net int revert  vserver <svm> -lif <lif> "

 

For iSCSI, it is a little different (knock the LIF down, modify it, bring it back up):

"set adv ; net int modify -vserver <svm> -lif <iscsi_lif> -up-admin false"

"net int modify -vserver <svm> -lif <lif> -home-node <new-node> -home-port <new-port>"

"net int modify -vserver <svm> -lif <iscsi_lif> -up-admin true"

 

Additionally, if there are any non-root volumes (anything but vol0 for the node), they must be moved (vol move) or deleted. Sometimes it is good to just delete the aggregates on the node also.
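If you go that route, something along these lines (aggregate name is a placeholder, and the aggregate must be empty before it can be deleted):

"storage aggregate show -node <fas3240-01>"

"storage aggregate delete -aggregate <aggr_name>"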

 

Disable SFO for the nodes:

"storage failover modify -enabled false -node <fas3240-01>,<fas3240-02>"

 

Try unjoining again.

 
