ONTAP Hardware

Can I have more than 2 switches on cluster interconnect?


Hi all,

I cannot find documentation on this, so I assume what I'm about to ask is not supported, but maybe someone can help clarify.

Regarding the cluster network on clustered Data ONTAP, do you know if it is possible to have more than 2 CN1610 switches? My objective is to have nodes of the same cluster installed in different rooms, so I'm wondering whether the cluster interconnect can be extended that way.

I see that if you need more than 12 nodes, the only option is to replace the switches; you cannot "stack" more than 2.

Thank you all




Hi again,

I have a few more questions regarding the switch interconnect.

1) Is there any limitation on ISL cable length, or on the distance between switches?
2) Can I use 10Gbit SFP+ modules and multi-mode fibre cables for the ISL connection?
3) Can I run, for the time needed for the migration, with 2 physical ISL connections instead of 4?

Thanks a lot




To answer your questions:


1) Theoretically, the length limitation is determined by the cable type used (e.g. 100m for CAT5 copper cables).

2) You can use 10Gb/s multi-mode fiber for the ISL (it's the only optical connection we allow for that switch, AFAIK); you can confirm which SFP and cable part numbers are needed by looking at the Hardware Universe.

     We officially support only 30m fibre cables when using an optical connection (as can be seen in the Hardware Universe).

3) The cluster should theoretically work even with only 1 of the 4 links on the LACP ISL interface; however, this might result in extremely poor cluster performance. It is impossible for us to know in advance whether removing some links from the ISL will impact the performance of your current setup.
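If you do reduce the ISL to fewer links, it's worth verifying cluster network health afterwards. A minimal sketch using standard clustered Data ONTAP commands (node names and the CN1610 port-channel ID are examples; your IDs may differ):

```
:: From the ONTAP cluster shell: check cluster port status and
:: test connectivity/latency over the cluster network.
cluster1::> network port show -role cluster
cluster1::> cluster ping-cluster -node node1

:: On each CN1610 switch CLI: confirm the ISL port-channel is up
:: and see how many member links are active (3/1 is a common
:: ISL port-channel ID on the CN1610, but check your config).
(CN1610-A) #show port-channel 3/1
```

If `cluster ping-cluster` reports dropped packets or high latency after removing links, that's a strong hint the reduced ISL is hurting you.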


Feel free to reply if you have more questions.





Just to share our experience: we are going with 2 ISLs across different rooms (we're unable to use 4 because of a lack of connections between the rooms).
The cluster is OK, and we're doing vol moves without problems.
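For anyone following along, the vol move workflow mentioned above is just the standard ONTAP non-disruptive volume move; a minimal sketch (the SVM, volume, and aggregate names are placeholders):

```
:: Move a volume to an aggregate owned by an HA pair in the new
:: room, then monitor progress until the cutover completes.
cluster1::> volume move start -vserver svm1 -volume vol1 -destination-aggregate aggr_newroom
cluster1::> volume move show
```

The move runs in the background and cuts over automatically by default, so clients keep accessing the volume throughout.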

Thank you all for help.



To add to @maffo's great response - I've seen people stretch clusters between rooms and across campuses - the key is to have one switch per room and lots of fibre runs between them. If you're looking for true geo-failover, our MetroCluster solution allows up to 300km between sites - https://www.netapp.com/us/products/backup-recovery/metrocluster-bcdr.aspx


Hope this helps!


Hi @maffo and @AlexDawson, thanks for your replies

I was looking for a solution to "non-disruptively" migrate HA pairs to a new room, using some spare HA pairs to move the data.
Unfortunately, I don't have many links between the rooms, so I'll try to sort it out 😉

Thanks again





Unfortunately, it is not possible to have more than 2 switches in the cluster interconnect at the moment.

If your cluster needs to grow beyond 12 nodes, you will need newer switches to connect the nodes to.

Please also be aware that cluster interconnect switches cannot be shared across multiple clusters.


