ONTAP Discussions

Cluster Mode interconnect switches


Hello all, I have a simple question about Cluster Mode interconnect links.

In a cluster with 4 nodes (2 HA pairs), all connected to 2 CN1610 switches, what happens to the SVMs if I lose both switches at the same time?


Thank you



It would depend on your data paths.  As long as the clients were using a LIF that exists on the same node as the SVM's volumes (the optimized path you should be using anyway), the SVM should continue to serve data.  You would lose cluster-wide management of course, as the cluster management LIF would only have connectivity to the node it was running on when the cluster switches failed.  Storage Failover should still be in place as well.  Hopefully everyone has fully redundant power sources for their cluster switches, but from what I understand, data access should not be affected by a dual failure like this.
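If it helps, you can sanity-check whether clients are on the optimized path by comparing where each data LIF currently lives against the node that owns the SVM's volumes. A rough sketch (svm1 is just a placeholder for your SVM name):

```
::> network interface show -vserver svm1 -fields home-node,curr-node,curr-port,is-home
::> volume show -vserver svm1 -fields node
```

If a LIF's current node matches the node hosting the volumes it serves, that traffic would not need to cross the cluster interconnect.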


We tested this: within a few minutes of losing both switches, the cluster went down. We tested this on 8.2.0; I can't be sure what would happen in newer versions. The real question is, what do you think the chances are of losing both switches? The MTBF on switches is very high. You can put the switches in two different cabinets to lower the chances of losing both.

Thank you for your reply. I'm thinking about a cluster with 1 HA pair in one datacenter and 1 HA pair in another datacenter (less than 300 m apart). In that case, of course, I have to install both switches in one of the datacenters and connect the second HA pair's cluster interconnect ports via direct FC cables (within the maximum permitted distance). My question was about what happens if I lose the datacenter with the switches, or the direct FC cables. As I understand it, there could be problems with cluster management, but not with the SVMs serving data. Thank you.


In this example, why don't you install a single switch in each datacenter and use long ISL connections as well as the connections from each node?  That way, even if you lost a whole datacenter, the HA pair in the surviving datacenter would still have a link to the cluster network.


This is what I initially wanted to do, but in the PartnerEdge configurator, the switch interconnect ISL must be configured with 4 ports cabled only with copper cables of 0.5, 2, 3 or 5 meters in length.

I suppose this is for c-dot compliance reasons only.

This means the switches must be in the same datacenter.

Maybe a PVR request could allow another ISL configuration?


Thank you



Hi Everyone,


we have a situation similar to the one explained above.

Node01 & Node02 and the 2 cluster interconnect switches are in B04.  Node03 & 04 are in B15.

Now the business has decided to decommission Node01 & 02, and we have unjoined them from the 4-node cluster.


The next step will be to move the 2 cluster interconnect switches from B04 to B15.  At present we only have 2 m cables.

My query is: by shutting down ports 0/13 - 0/16 and offloading the switch2 traffic to switch1, can we move sw2 to B15?

After moving sw2 to B15, we will connect the node-side cables and revert the LIFs back to their home ports.  Because we do not have long enough cables, we may not be able to restore the ISL connection until sw1 is also moved to B15.


Will it cause any data unavailability?

Will keeping the node ports online while shutting down the ISL port channel have any adverse impact on the cluster nodes?


You could temporarily reconfigure the cluster as switchless - it should not require connectivity between the different adapter pairs. Actually, you could simply leave your cluster as switchless now that you have only two nodes, and avoid moving the switches completely.


Oh, and you must reconfigure your cluster as two-node now, otherwise failover won't work properly.
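For reference, both changes are single commands. A rough sketch, assuming a current ONTAP release (verify against the official procedure for your version before running anything):

```
::> set -privilege advanced
::*> network options switchless-cluster modify -enabled true
::*> cluster ha modify -configured true
::*> set -privilege admin
```

The switchless-cluster option is an advanced-privilege setting; two-node HA (`cluster ha modify`) replaces the normal storage-failover configuration when only two nodes remain.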



Thanks for your reply.


The client wants to retain the 2 cluster interconnect switches.

Yes, we have unjoined Node01 & 02.  Later, cluster HA was set to true.


As I shared earlier, we have to move both switches to target bay B15, and because of the short ISL cable length, we cannot keep ISL connectivity until both switches have been moved to B15.


The initial plan is:


1) Migrate the sw2 LIFs' workload to the sw1 LIFs on both controllers.

2) Shut down the sw2-connected node-side physical ports.

3) Shut down the 0/13-0/16 port-channel ports on sw2.

4) Move sw2 to B15 and power it on after connecting all the cables except the ISL.

5) Revert the sw2 LIFs back to their home ports and verify status using ifstat.

6) Later, migrate the sw1 LIFs to the sw2 ports.

7) Shut down sw1 and move it to B15.

8) Connect all the cables, including the ISL between sw1 and sw2, then power on sw1.

9) Ensure the LIFs homed on sw1 ports are up and running.

10) Enable the ISL port channel between sw1 and sw2.
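Steps 1, 5 and 6 above are done with cluster LIF migrate/revert. A rough sketch of the commands involved (the LIF, node and port names are placeholders for your environment):

```
::> network interface migrate -vserver Cluster -lif node03_clus2 -node node03 -destination-port e0a
::> network interface show -vserver Cluster -fields curr-port,is-home
::> network interface revert -vserver Cluster -lif *
::> cluster ping-cluster -node node03
```

Running `cluster ping-cluster` after each migrate/revert confirms that every cluster LIF can still reach every other one before you proceed to the next step.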


We are stuck at step 6, after seeing that the number of available paths is 1.  Will it cause any outage if we move the sw1-homed LIFs to sw2 while ISL connectivity is not yet established?

We can connect the ISL only after moving both switches to B15.


Well ... you have two possibilities.


1. Use the cables between the nodes and switches as a temporary interconnect. That is enough to ensure connectivity for the short time before you move the LIFs to the new switch. You will have two unused cables after you move the first switch.


2. Temporarily enable switchless cluster mode. That means no ISL will be expected by ONTAP.


With high probability, simply moving the LIFs will work as well, but I'd open a support case to be sure.


The cluster will expect an ISL in order to utilize all paths.

When the paths are insufficient, you risk an outage.


Since your plan involves moving BOTH cluster switches, the best way forward would be to move to a switchless cluster, move the switches, and then move back to a switched cluster.


https://library.netapp.com/ecm/ecm_download_file/ecmp1157168 - Transitioning to a two-node switchless cluster

https://library.netapp.com/ecm/ecm_download_file/ECMP1140535 - Migrating to a two-node switched cluster
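Whichever procedure you follow, a quick health check before and after each move is cheap. Something along these lines (the node name is a placeholder):

```
::> cluster show
::> cluster ping-cluster -node Node03
::> network interface show -vserver Cluster
```

If `cluster show` reports both nodes healthy and eligible and ping-cluster passes on all paths, it is safe to proceed to the next step.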


- emile