ONTAP Discussions

FAS8300 cluster: convert from switched to switchless

ChiaFu
5,058 Views

Is it possible to convert a cluster from switched to switchless on a FAS8300?

I am using the FAS8300 in switched mode for volume moves.

After the data migration, I need to convert to switchless mode.

I don't know how to take the cluster network from 4 x 10GbE to 2 x 100GbE.

Does anyone have experience with this?

 

From: IMG_1.jpg

To: IMG_2.jpg

1 ACCEPTED SOLUTION

ChiaFu
4,710 Views

After opening a case, the case owner provided the solution below:

 

  •  The main concern here is to make sure your sitelist and network interface show outputs match.
  •  If they do not, you could experience a cluster communication issue during a reboot.
  •  The sitelist is used during ONTAP boot-up to establish communication with the cluster.

1. Enable diagnostic

cluster::> set -privilege diagnostic
Warning: These diagnostic commands are for use by NetApp personnel only.
Do you want to continue? {y|n}: y

 

2. Review the sitelist for each node. Identify which IPs are configured as ClusIP1 and ClusIP2.

cluster::*> debug sitelist show
Node          Site ID    ClusIp1           ClusIp2          Epsilon
Cluster-N1     1001       169.254.200.101  169.254.200.104   false
Cluster-N2     1000       169.254.200.105  169.254.200.108   false
2 entries were displayed.

 

3. Review current Network Interface show

  1.  Based on the sitelist above, we would remove the LIFs not listed (169.254.200.102, 169.254.200.103, 169.254.200.106, 169.254.200.107).

Cluster ::> network interface show -role Cluster
  (network interface show)
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
Cluster
            ClusterN1_clus1
                         up/up    169.254.200.101/16 ClusterN1     e0a     true
            ClusterN1_clus2
                         up/up    169.254.200.102/16 ClusterN1     e0b     true
            ClusterN1_clus3
                         up/up    169.254.200.103/16 ClusterN1     e0a     true
            ClusterN1_clus4
                         up/up    169.254.200.104/16 ClusterN1     e0b     true

            ClusterN2_clus1
                         up/up    169.254.200.105/16 ClusterN2     e0a     true
            ClusterN2_clus2
                         up/up    169.254.200.106/16 ClusterN2     e0b     true
            ClusterN2_clus3
                         up/up    169.254.200.107/16 ClusterN2     e0a     true
            ClusterN2_clus4
                         up/up    169.254.200.108/16 ClusterN2     e0b     true

 

4. Verify Cluster communication before removing LIFs.

  1.  Run the command for each node (see the note after the example output below).
  2.  If communication fails, verify connectivity to the cluster switches or contact HW L2 support for assistance.

::*> cluster ping-cluster -use-sitelist true -node ClusterN1
Host is ClusterN1
Getting addresses from sitelist...
Local = 169.254.200.101 169.254.200.104
Remote = 169.254.200.105 169.254.200.108
Cluster Vserver Id = 4294967293
Ping status:
....
Basic connectivity succeeds on 4 path(s)
Basic connectivity fails on 0 path(s)
................
Detected 9000 byte MTU on 4 path(s):
    Local 169.254.200.101 to Remote 169.254.200.105
    Local 169.254.200.101 to Remote 169.254.200.108
    Local 169.254.200.104 to Remote 169.254.200.105
    Local 169.254.200.104 to Remote 169.254.200.108
Larger than PMTU communication succeeds on 4 path(s)
RPC status:
2 paths up, 0 paths down (tcp check)
2 paths up, 0 paths down (udp check)
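
The output above is for the first node. Run the same check from the other node as well; a sketch, assuming the node names shown in the sitelist:

::*> cluster ping-cluster -use-sitelist true -node ClusterN2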

 

5. Delete the cluster LIFs that are not listed in the cluster sitelist:

cluster::*> net interface delete -vserver Cluster -lif ClusterN1_clus2
cluster::*> set admin

 

6. Continue deleting each remaining cluster LIF as needed, as in the sketch below.
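
A minimal sketch of the remaining deletions, assuming the sitelist and LIF names shown in the example above (re-enter diagnostic privilege first if you already dropped back to admin):

cluster::*> net interface delete -vserver Cluster -lif ClusterN1_clus3
cluster::*> net interface delete -vserver Cluster -lif ClusterN2_clus2
cluster::*> net interface delete -vserver Cluster -lif ClusterN2_clus3
cluster::*> set admin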

 

  • Verify the cluster config is healthy
  1. Review the sitelist for each node; confirm ClusIP1 and ClusIP2 match the network interface show output.

cluster::*> debug sitelist show
Node          Site ID    ClusIp1           ClusIp2          Epsilon
Cluster-N1     1001       169.254.200.101  169.254.200.104   false
Cluster-N2     1000       169.254.200.105  169.254.200.108   false
2 entries were displayed.

     

     2. Review current Network Interface show

Cluster ::> network interface show -role Cluster
  (network interface show)
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
Cluster
            ClusterN1_clus1
                         up/up    169.254.200.101/16 ClusterN1     e0a     true
            ClusterN1_clus4
                         up/up    169.254.200.104/16 ClusterN1     e0b     true

            ClusterN2_clus1
                         up/up    169.254.200.105/16 ClusterN2     e0a     true
            ClusterN2_clus4
                         up/up    169.254.200.108/16 ClusterN2     e0b     true

 

     3. Verify cluster communication

                a. Run the command for each node.

                b. If communication fails, verify connectivity to the cluster switches or contact HW L2 support for assistance.

::*> cluster ping-cluster -use-sitelist true -node ClusterN1
Host is ClusterN1
Getting addresses from sitelist...
Local = 169.254.200.101 169.254.200.104
Remote = 169.254.200.105 169.254.200.108
Cluster Vserver Id = 4294967293
Ping status:
....
Basic connectivity succeeds on 4 path(s)
Basic connectivity fails on 0 path(s)
................
Detected 9000 byte MTU on 4 path(s):
    Local 169.254.200.101 to Remote 169.254.200.105
    Local 169.254.200.101 to Remote 169.254.200.108
    Local 169.254.200.104 to Remote 169.254.200.105
    Local 169.254.200.104 to Remote 169.254.200.108
Larger than PMTU communication succeeds on 4 path(s)
RPC status:
2 paths up, 0 paths down (tcp check)
2 paths up, 0 paths down (udp check)

 

  • Optional LIF modification
    •  In the example above, the remaining cluster LIF names are clus1 and clus4.
    •  At this point, if cluster communication is stable, you may modify the LIF names and update the IPs (optional; see the example and note below).

Example

::*> network interface rename -vserver Cluster -lif ClusterN2_clus4 -newname ClusterN2_clus2
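
The post also mentions optionally updating the IP. A minimal sketch of the standard modify syntax, with placeholder values (confirm with support before changing cluster LIF addresses, since the sitelist must continue to match the interface addresses):

::*> network interface modify -vserver Cluster -lif <lif_name> -address <new_169.254.x.x_address> -netmask 255.255.0.0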

 


9 REPLIES

Geo_19
5,028 Views

**** These steps must be done on-site because they require cabling at the platform ****


But first, you need to check whether the switchless-cluster option is enabled on the filer, using the following:

 

::> set adv [jump into advanced mode]

::*> [the asterisk indicates we are in advanced mode]

::*> network options detect-switchless-cluster show [should display true; if not, run: network options detect-switchless-cluster modify -enabled true]
::*> net port show [check that e0c and e0d, used as cluster ports, are healthy]

::*> net int show [check that all cluster LIFs are on their home ports]

::*> system autosupport invoke -type all -node * -message MAINT=4h [if the system is under support, this suppresses automatic case creation for 4 hours while you work on the conversion]

::*> net int migrate -vserver Cluster -lif [each e0c LIF, one at a time] -destination-node [node 1 or node 2, depending on which LIF you are migrating] -destination-port [e0c or e0d, depending on which LIF you selected; if the LIF lives on e0c, migrate it to e0d]

Make sure all the traffic from whichever LIF you migrated is now running on the destination port, and everything is working as expected.

 

Example:

 

We migrated both e0c LIFs to the e0d ports:

 

::*> net int migrate -vserver Cluster -lif cluster-lif1 -destination-node node1 -destination-port e0d
::*> net int migrate -vserver Cluster -lif cluster-lif2 -destination-node node2 -destination-port e0d


::*> net int show [the recently migrated LIFs should display is-home false]

Something like this:

 

::*> network interface show -vserver Cluster -fields curr-port,is-home
vserver lif         curr-port is-home
------- ----------- --------- -------
Cluster node1_clus1 e0b       false
Cluster node1_clus2 e0b       true
Cluster node2_clus1 e0b       false
Cluster node2_clus2 e0b       true
4 entries were displayed.

 

Then you grab the QSFP28 [100Gb] cables and plug them in, depending on the migration you did before: if you migrated all the LIFs off e0c, just cable the e0c ports directly to each other [because the traffic that was there is now running on the e0d ports].

 

Then, after cabling the QSFP28 cable on the e0c or e0d ports (depending on your selection before), we check that port reachability is up and running, with the following command:

 

::*> net port show [should display the following]

 

The cluster ports recently connected to each other should display as up with 9000 (jumbo) MTU. After checking that the port health is good, we migrate the LIFs back to their home ports [now physically connected e0c to e0c]. After that, check the health of the LIFs with ::*> net int show; they should display as home. Then do the same with the remaining LIFs: migrate them to the ports that are now physically connected to each other and repeat the procedure. After migrating and checking that everything is working as expected, you should be good. A sketch of sending the LIFs home is below.
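
A minimal sketch of sending the migrated LIFs back to their home ports once the direct cable is in place, using the LIF names from the earlier example (net int revert sends a LIF back to its home port):

::*> net int revert -vserver Cluster -lif cluster-lif1
::*> net int revert -vserver Cluster -lif cluster-lif2
::*> net int show -vserver Cluster [the reverted LIFs should show is-home true again]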

 

Check the doc below; it can probably clear up some of the doubts:

https://library.netapp.com/ecm/ecm_download_file/ECMP1157168

EWILTS_SAS
5,008 Views

The document link quoted above is solid - we've executed this a couple of times to go from switched to switchless.  It's a live migration that does not require downtime - you'll be doing one path at a time so you'll lose redundancy during the window.

ChiaFu
5,000 Views

There are 4 links in the switched configuration.

The LIFs would migrate from e2a -> e0c and e2b -> e0d.

What about the LIFs on e2c & e2d?

Should I delete the *_clus3 & *_clus4 LIFs?

 

FAS8300::> net int show
  (network interface show)
            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
Cluster
            FAS8300-01_clus1
                         up/up    169.254.233.125/16 FAS2750-01    e2a     true
            FAS8300-01_clus2
                         up/up    169.254.163.128/16 FAS2750-01    e2b     true
            FAS8300-01_clus3
                         up/up    169.254.133.27/16  FAS2750-01    e2c     true
            FAS8300-01_clus4
                         up/up    169.254.81.226/16  FAS2750-01    e2d     true
            FAS8300-02_clus1
                         up/up    169.254.30.100/16  FAS2750-02    e2a     true
            FAS8300-02_clus2
                         up/up    169.254.196.223/16 FAS2750-02    e2b     true
            FAS8300-02_clus3
                         up/up    169.254.225.120/16 FAS2750-02    e2c     true
            FAS8300-02_clus4
                         up/up    169.254.200.141/16 FAS2750-02    e2d     true

Geo_19
4,967 Views

Quick question: is this a four-node cluster? It looks like it is.

ChiaFu
4,953 Views

It's a two-node FAS8300. See the first picture: 4 x 10GbE to the cluster network switches (CN1610).

TMACMD
4,997 Views

In this case you NEED to open a support case. There are diagnostic commands that should be run to verify. The details are locked in a support-only KB article that cannot be shared.

Contact support to be 100% sure you delete the correct cluster LIFs.

ChiaFu
4,711 Views

After opening a case, the case owner provided the solution posted above (see the accepted solution).

 

brad_k
3,400 Views

Can you please post all the steps, including the LIF migration and LIF deletion?

TMACMD
3,014 Views

Just a quick note on renaming a cluster LIF.

 

The command above will not do what you expect

 

network interface rename -vserver Cluster -lif ClusterN2_clus4 -newname ClusterN2_clus2

 

That will result in a LIF called ClusterN2_ClusterN2_clus2.

 

The way to rename a cluster LIF:

network interface rename -vserver Cluster -lif ClusterN2_clus4 -newname clus2

 

 ONTAP always prepends the node name to the “new name”
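
A quick way to confirm the result after renaming (a sketch using the standard show command):

::*> network interface show -vserver Cluster [the renamed LIF should now appear as ClusterN2_clus2]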

 
