ONTAP Discussions

Migrating LUNs to new nodes

curmmit

We've added 4 new nodes to our cluster (9.8P3) for a total of 8 nodes. We're in the process of migrating from nodes 01-04 to nodes 05-08. For NAS (CIFS, NFS) we can just move the volumes (vol move) and then migrate the LIFs to the new nodes. My question is about block (iSCSI, FC): since we cannot migrate those LIFs, we've created new LIFs on the new nodes for each SVM serving iSCSI and FC. For each FC host we update zoning with the new WWPNs, add-reporting-nodes for each LUN, move the volumes, then remove-reporting-nodes for each LUN. This is very time consuming with a lot of FC hosts and LUNs. Is there a simpler way of migrating volumes containing LUNs to new nodes within the same SVM? Can we migrate the volumes, then shut down each FC LIF and bring it back up on a new node, avoiding rezoning? Any help is greatly appreciated.
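For context, the per-LUN workflow described above looks roughly like this in the CLI (the SVM, volume, LUN, igroup, and aggregate names here are placeholders):

```
# Advertise paths on the destination HA pair before the move
lun mapping add-reporting-nodes -vserver svm1 -path /vol/db_vol1/lun1 -igroup aix_host1 -destination-aggregate aggr_node05

# Move the volume containing the LUN to an aggregate on one of the new nodes
volume move start -vserver svm1 -volume db_vol1 -destination-aggregate aggr_node05

# ...then, once the move completes and the host rescans, remove-reporting-nodes per LUN/igroup
```

Repeating that for every LUN and igroup is where the time goes.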


TMACMD

You may want to pull in your local NetApp resource to help out.

It may not hurt to open a Case with support either!

 

If you followed best practices when setting up your LUNs, you should have at least two block (SAN) LIFs on each of your four original controllers.

You can go into advanced mode.

Shut down one LIF on node 1, modify the LIF port to be on node 5. Bring the LIF back up.

Shut down one LIF on node 2, modify the LIF port to be on node 6. Bring the LIF back up.

Shut down one LIF on node 3, modify the LIF port to be on node 7. Bring the LIF back up.

Shut down one LIF on node 4, modify the LIF port to be on node 8. Bring the LIF back up. (A CLI sketch of one such move follows below.)
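A minimal sketch of one of those moves, assuming an SVM called svm1 with an FC LIF svm1_fc_lif1 being re-homed to port 2a on node 5 (all names and ports here are examples only):

```
set -privilege advanced

# SAN LIFs must be administratively down before they can be re-homed
network interface modify -vserver svm1 -lif svm1_fc_lif1 -status-admin down

# Re-home the LIF; the LIF keeps its WWPN, so fabric zoning does not change
network interface modify -vserver svm1 -lif svm1_fc_lif1 -home-node cluster1-05 -home-port 2a

# Bring it back up and confirm
network interface modify -vserver svm1 -lif svm1_fc_lif1 -status-admin up
network interface show -vserver svm1 -lif svm1_fc_lif1
```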

 

Now you should have one active LIF on every node. You may also have to modify your LUNs to expose all paths (reporting nodes). Normally, SLM (Selective LUN Mapping) is on and only advertises paths on the node that owns the LUN plus its HA partner. I suspect you can also modify this to advertise only the paths you would like (like 1+2 and 5+6 in my example).
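To see which nodes SLM is currently advertising for a given LUN and igroup, something like (placeholder names):

```
lun mapping show -vserver <svm> -path /vol/<volume>/<lun> -fields reporting-nodes
```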

Verify the client sees the paths.

After that, move the LUNs (volume moves) from node 1 to node 5. You should see the active path flip from node 1 to node 5.

Repeat for the rest (1->5, 2->6, 3->7, 4->8).

Verify clients still see the paths.
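One way to check from the host side, assuming the NetApp Host Utilities are installed there, is:

```
# Lists each LUN's paths and which are primary (optimized) vs secondary
sanlun lun show -p
```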

Now:

Shut down the second LIF on node 1, modify the LIF port to be on node 5. Bring the LIF back up.

Shut down the second LIF on node 2, modify the LIF port to be on node 6. Bring the LIF back up.

Shut down the second LIF on node 3, modify the LIF port to be on node 7. Bring the LIF back up.

Shut down the second LIF on node 4, modify the LIF port to be on node 8. Bring the LIF back up.

 

Modify the LUNs' reporting nodes as needed.
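That cleanup would look roughly like this per LUN and igroup (placeholder names; do it only after the volume moves have finished and the hosts see the new paths):

```
lun mapping remove-reporting-nodes -vserver <svm> -path /vol/<volume>/<lun> -igroup <igroup> -remote-nodes true
```

Then re-check with the `lun mapping show ... -fields reporting-nodes` command above.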

 

Verify clients still see all paths.

 

I have not fully detailed the steps here on purpose. I am expecting you to do at least a little bit of searching through docs.netapp.com and support.netapp.com to better understand the steps, like looking up reporting-nodes.

 

Here are a few links to get started.

 

https://mysupport.netapp.com/site/article?lang=en&page=/Advice_and_Troubleshooting/Data_Storage_Software/ONTAP_OS/How_Selective_LUN_Mapping_(SLM)_work...

https://docs.netapp.com/us-en/ontap/san-admin/move-san-lifs-task.html

https://docs.netapp.com/us-en/ontap/san-admin/selective-lun-map-concept.html

 

curmmit

Thanks for the info. We're running PowerVM VIOS 3.1.10 with LPARs on AIX 7200-04-02-2016, Oracle 19c RAC, NetApp Host Utilities 6.0, and SnapDrive for UNIX (AIX) 5.3P3. This discussion is related to SAN (FC), where we have 8 target LIFs (4 per node) from an HA pair per SVM. We do SAN boot, so we have LIFs that are members of port sets too.
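Since the SAN-boot LIFs are in port sets, one extra check before and after any LIF re-homing (a sketch; the port set and LIF names below are made up):

```
# List the port sets in the SVM and the LIFs they contain
lun portset show -vserver fas1aix-svm

# Any newly created LIF has to be added to the port set, or hosts bound to that
# port set (our SAN-boot hosts) will not see paths through it
lun portset add -vserver fas1aix-svm -portset ps_aix_boot -port-name fas1aix-svm_fc09
```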

 

Vserver Interface Admin/Oper Addre
----------- ---------- ---------- -----
fas1aix-svm
fas1aix-svm_admin_lif up/up
fas1aix-svm_fc01 up/up 20:6
fas1aix-svm_fc02 up/up 20:6
fas1aix-svm_fc03 up/up 20:6
fas1aix-svm_fc04 up/up 20:6
fas1aix-svm_fc05 up/up 20:6
fas1aix-svm_fc06 up/up 20:6
fas1aix-svm_fc07 up/up 20:6
fas1aix-svm_fc08 up/up 20:6

 

ONTAP Path: fas1aix-svm:/vol/hnam_nonproddb1_asmtrn_data_02/asmtrn_data_lun2
LUN: 24
LUN Size: 200g
Host Device: hdisk931
Mode: C
Multipath Provider: AIX Native
Multipathing Algorithm: round_robin
--------- ----------- ------ ------- ---------------------------------------------- ----------
host vserver AIX AIX MPIO
path path MPIO host vserver path
state type path adapter LIF priority
--------- ----------- ------ ------- ---------------------------------------------- ----------
up primary path0 fcs0 fas1aix-svm_fc06 1
up primary path1 fcs0 fas1aix-svm_fc08 1
up secondary path2 fcs0 fas1aix-svm_fc02 1
up secondary path3 fcs0 fas1aix-svm_fc04 1
up primary path4 fcs1 fas1aix-svm_fc05 1
up primary path5 fcs1 fas1aix-svm_fc07 1
up secondary path6 fcs1 fas1aix-svm_fc01 1

 

I have a dirty little script that will interrogate storage at the client level (AIX, see below) and create a command file I can just copy-paste into the CLI. So... add-reporting-nodes, vol moves, removing remote reporting nodes... I already understand those and have done them before.

 

`snapdrive storage list -devices | grep dev | awk -F ":" '{print $2}' | tr -d ' -' | sort -u`
`sanlun lun show | awk -F " " '{print $1}' | grep -v controller | grep -v vserver | tr -d ' ' | head -n 2 | tail -1`
`snapdrive storage list -devices | grep "dev/hdisk" | awk -F "/" '{print $5}' | tr -d ' ' | sort -u`

 

My concern here is Oracle and RAC, since in our setup Oracle owns the disks and handles disk concurrency. Because the disks are locked, any pathing issue means we have to shut down ASM and unlock the disks to update paths if needed. Since the disks are accessed concurrently, you can see our worry if clients do not see all paths as expected.
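Before and after each step we can at least check the path state from AIX without touching ASM (read-only commands; hdisk700 is one of the GRID_DG disks from the listing below):

```
# AIX native MPIO: list the paths and their status for one of the ASM disks
lspath -l hdisk700

# More detail (path state, parent adapter, connection) from the AIX PCM
lsmpio -l hdisk700
```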

 

hdisk4 00fa5744da112032 oraclevg active
hdisk700 00fa5744da38d49a GRID_DG locked
hdisk701 00fa5744da38d5d6 GRID_DG locked
hdisk702 00fa5744da38d710 GRID_DG locked

 

Again, thanks for the info and confirmation.
