2017-01-12 01:34 AM
Adding to aborzenkov's comment,
The following are the standard clustered ONTAP configurations:
1) Single-node cluster
2) 2-node cluster (switched or switchless)
3) Multiple HA pairs: at present you can go up to 24 nodes (12 HA pairs) for NAS environments and 8 nodes (4 HA pairs) for SAN environments.
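As a hedged sketch, you can verify how many nodes a cluster has and whether they are HA-paired from the clustershell; node names and output columns here are illustrative and the exact layout varies by ONTAP release:

```
cluster1::> cluster show
Node        Health  Eligibility
----------- ------- ------------
node1       true    true
node2       true    true

cluster1::> storage failover show
Node    Partner  Possible  State Description
------- -------- --------- -----------------
node1   node2    true      Connected to node2
node2   node1    true      Connected to node1
```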
You should ONLY grow an existing cluster beyond 2 nodes by adding HA pairs, because an HA pair allows takeover and giveback of the storage if one of the controllers in the pair fails or goes down for some reason.
In your case, the existing 2 nodes will (generally) be in an HA pair. If node 1 (controller) fails, node 1's storage can still be accessed via node 2 (controller) through multipath HA (MPHA), and vice versa.
If you add a single 3rd node and that node (controller) goes down, node 3's storage will not be accessible because it has no HA partner, leading to an outage.
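For reference, takeover and giveback within an HA pair can also be driven manually (for example, for planned maintenance). A hedged sketch of the commands, with illustrative node names; check the documentation for your ONTAP release before running them:

```
# Partner (node2) takes over node1's storage; clients keep access via node2.
cluster1::> storage failover takeover -ofnode node1

# After node1 is healthy again, return its storage.
cluster1::> storage failover giveback -ofnode node1
```

A standalone 3rd node has no entry in `storage failover show` with a usable partner, so neither operation is possible for it.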
What if the config was reversed and you have a single-node cluster that is serving DR ops?
What is the best practice for getting the single node to be part of a new 2-node cluster in a different chassis?
Build the 2-node cluster on the new chassis and then add the single node to it?
Spin up one node of the new 2-node chassis to join the single node's cluster?
The end game is to retire the single-node chassis, allowing all the shelf/disk ownership, ifgrps, LIFs, and SnapMirror relationships to move to the 2-node chassis as an HA pair.
You can do a (disruptive) head upgrade from the single node to one of the new systems, configured as a single node. Then, once the cluster is running on the new controller, add the second node (HA partner) to it to make it an HA pair.
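A hedged sketch of the second half of that approach, after the head upgrade is done: join the new partner via the cluster setup wizard (or `cluster add-node` on recent ONTAP releases) and then enable failover. Node names are illustrative; exact syntax varies by release:

```
# On the existing (upgraded) cluster, join the second controller.
cluster1::> cluster add-node -cluster-ips <new-node-cluster-ip>

# Once both nodes are members, enable storage failover on the pair.
cluster1::> storage failover modify -node node1 -enabled true
cluster1::> storage failover show
```

Older releases use the `cluster setup` join workflow run on the new node instead of `cluster add-node`.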
Or, find a (temporary) second controller of the same model as the single node you have (you can borrow one from NetApp through your partner; it's called Swing Gear), temporarily make a 2-node cluster out of your single system, and then use the regular procedure to upgrade your cluster hardware and remove the old controllers.
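The "regular way" in the Swing Gear approach ends with evacuating and removing the old controllers. A hedged outline of the typical steps, with hypothetical vserver/volume/aggregate names; consult the hardware-upgrade documentation for your ONTAP release:

```
# Move volumes off the old node's aggregates to the new HA pair.
cluster1::> volume move start -vserver vs1 -volume vol1 -destination-aggregate aggr_new

# Rehome data LIFs to ports on the new nodes.
cluster1::> network interface modify -vserver vs1 -lif lif1 -home-node newnode1 -home-port e0c

# When the old node owns no data, remove it (advanced privilege).
cluster1::> set -privilege advanced
cluster1::*> cluster remove-node -node oldnode1
```

SnapMirror relationships follow the volumes, so DR replication continues once the moves complete.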