We're in the process of migrating a FAS2240 to ONTAP 9.1. It has 2 x onboard 10GbE SFP+ ports (e1a, e1b) and 4 x 1GbE ports (e0a - e0d). We would like to know if it is possible to run the cluster interconnects partly on 1GbE, so that we have redundant links but still keep some 10GbE available for data. Can the port layout below work?
I understand this might not be a best practice and may not be supported by NetApp; I just want to know whether the cluster interconnects will run on 1GbE if the 10GbE link fails.
I've previously run a test FAS2240 on ONTAP 9.0 with only a single 1GbE link used for cluster networking, but that was for testing only. So I can say it works, but it's neither supported nor recommended.
The major concern is that a 1Gb cluster link combined with a 10Gb data uplink may cause significant issues with indirect access (i.e., LIF on node 1, aggregate/data on node 2). I don't believe cluster networking supports any form of link weighting, so mixing speeds would not address this adequately. In addition, switchless clusters expect only one or two links, so a third link would likely not work.
Our supported and recommended option for this platform is to run it with a single 10GbE link for cluster traffic (e1b).
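To check which ports are currently carrying cluster traffic, you can use something like the following from the cluster shell. This is only a sketch; the node name is a placeholder for your environment:

```
::> network port show -node node1
::> network interface show -role cluster
```

The first command lists the physical ports and their speeds on the node, and the second shows which LIFs are serving the cluster role and which home ports they sit on.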
It is important to create a SAN object such as a LUN, LIF, or portset when using a single-cluster interconnect on a two-node switchless cluster of FAS22xx or FAS25xx storage systems, even if there is no intention of using the SAN object. The reason is that the presence of a SAN object allows the cluster to continue serving data if the single cluster-interconnect path is broken.
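As a sketch of what satisfying this requirement might look like, an empty portset is one of the lightest SAN objects to create, since it needs no volume or LUN behind it. The SVM and portset names here are placeholders:

```
::> lun portset create -vserver svm1 -portset ps0 -protocol iscsi
::> lun portset show -vserver svm1
```

The portset never needs to be bound to an igroup; its mere existence is what enables the takeover behavior described below.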
If at least one SAN object is configured (LUN, portset, etc.), then the Cluster Liveliness Availability Monitor (CLAM) induces a takeover by one of the nodes after the cluster-interconnect timeout, provided that node is otherwise healthy and capable of performing the takeover. The node that took over is then in a "quorum of one", as with any other takeover, and full data services, NAS and SAN, resume on that node. After fixing the issue with the cluster interconnect, the administrator can perform a giveback to resume normal two-node operation.
If the cluster does not have any SAN objects, then no automatic takeover occurs: both nodes remain out of quorum and neither will serve data. Manual intervention is required to either restore the cluster interconnect or perform a takeover.
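If you end up in that state, a rough sketch of how to inspect and recover might look like the following; node names are placeholders, and the takeover should only be forced once you understand why the interconnect is down:

```
::> cluster show
::> storage failover show
::> storage failover takeover -ofnode node2
```

The first two commands show node health/eligibility and the failover state of each HA partner; the third manually initiates the takeover of the named node from its partner.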