ONTAP Hardware
We're in the process of migrating a FAS2240 to ONTAP 9.1. It has 2 x onboard 10Gb SFP+ ports (e1a, e1b) and 4 x 1Gb ports (e0a - e0d). We would like to know if we can run the cluster interconnects partly on 1Gb, so that we have redundant links while still keeping some 10Gb bandwidth available for data. Can the port layout below work?
I understand this might not be best practice and not supported by NetApp; I would just like to know whether it will run the cluster interconnects on 1Gb if the 10Gb link fails.
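(The intended port layout table isn't reproduced here. Purely as an illustration of the kind of mixed assignment being asked about, and not as the actual layout, this is roughly how the cluster ports and broadcast domains could be inspected and changed on 9.1; the node names fas2240-01/fas2240-02 and port e0c are hypothetical:

cluster1::> network port show -ipspace Cluster
cluster1::> network port broadcast-domain show -ipspace Cluster
cluster1::> network port broadcast-domain add-ports -ipspace Cluster -broadcast-domain Cluster -ports fas2240-01:e0c,fas2240-02:e0c

Whether such a mixed-speed layout is actually a good idea is a separate question, addressed below.)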
I've previously run a FAS2240 with 9.0 using only a single 1GbE link for cluster networking, but that was for testing only. So I can say it works, but it's not supported or recommended.
The major concern is that a 1Gb cluster link behind a 10Gb data uplink may have significant issues with indirect access (i.e., LIF on node 1, aggregate/data on node 2). I don't believe any link weighting is possible with cluster networking, so mixing speeds would not address this adequately; and switchless clusters expect only one or two links, so a third link would likely not work.
Our supported/recommended option for this platform is to run it with a single 10GbE link for Cluster traffic (e1b).
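As a minimal sketch of how to verify that single-link setup (again assuming hypothetical node names fas2240-01 and fas2240-02), the cluster LIFs, node health, and interconnect connectivity can be checked with something like:

cluster1::> network interface show -role cluster
cluster1::> cluster show
cluster1::> cluster ping-cluster -node fas2240-01

cluster ping-cluster exercises the cluster-network paths between the nodes, so it's a quick way to confirm the e1b-to-e1b link is healthy.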
Thank you for your insight,
If we decide to run with only a single 10Gb cluster interconnect link, what will happen to the cluster if that link fails?
It is important to create a SAN object such as a LUN, LIF, or portset when using a single cluster interconnect on a two-node switchless cluster of FAS22xx or FAS25xx storage systems, even if there is no intention of using the SAN object. The reason is that, with a SAN object present, the cluster will continue to serve data if the single cluster-interconnect path is broken.
The details:
I don't believe you need FCP or iSCSI licensed... CLAM (the Connectivity, Liveliness and Availability Monitor) itself runs regardless of licensing.
I think the easiest thing to do is just create an empty portset.
cluster1::> portset create -vserver vs1 -portset ps1 -protocol mixed
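As a quick check that the empty portset exists (assuming the vs1/ps1 names from the example above), it should show up with:

cluster1::> lun portset show -vserver vs1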