We are having an issue with the following setup:
- we have a 4-node cluster with different IPs, say X.X.X.1 to X.X.X.4
- we have 1 cluster management LIF, x.x.x.5
The issue is that when we unplug the management cable from X.X.X.1, the other nodes do not automatically take over the cluster management LIF (the cluster management IP becomes inaccessible), and we have to manually migrate it to another node. Please guide us on what we are missing in this situation.
From the screenshot: it looks perfect and standard. cluster_mgmt is available to fail over to ports on all nodes in the failover group (node mgmt & data ports). The failover group & policy are standard, as they should be.
In the screenshot, I see the failover targets presented in order, starting with Cluster_1:e0M. If Cluster_1:e0M is down, we can simulate that.
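As a sketch of how to simulate the failure without pulling the cable (assuming an ONTAP 9.x CLI, and using the node/port names from the screenshot; `Cluster_1` is taken from the thread), you can administratively down the port and watch where the LIF lands:

```
::> set -privilege advanced
::*> network port modify -node Cluster_1 -port e0M -up-admin false
::*> network interface show -lif cluster_mgmt -fields home-node,home-port,curr-node,curr-port
::*> network port modify -node Cluster_1 -port e0M -up-admin true
```

If the LIF moves to a port that is not on the management network, you will see exactly the symptom described: the LIF is "up" on its new port but the IP is unreachable.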
This is a standard setup issue. The ONLY ports that should be in that list (in other words, based on the output, in the Default broadcast domain) are connected ports on the same physical network.
Different customers have different setups. With that, at a minimum, the broadcast domain should include e0M from each node. *IF* you have e0c/e0d/e0e/e0f connected and they are on the same physical network as e0M (whatever network e0M is on, like 192.168.1.1 - 192.168.1.4), then it will work. If they are not, then it is entirely possible that when the port fails (or the plug is pulled) the LIF will move to another port and advertise there (via gratuitous ARP) that the IP address has moved, even though that port cannot reach the management network.
I have seen this event transpire before, and the cluster became unavailable through the cluster_mgmt port.
Please correct your Broadcast-domain(s) and try again.
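For illustration, correcting membership could look like the following (the specific ports removed/added here are placeholders, not taken from your output; check your own `broadcast-domain show` first):

```
::> network port broadcast-domain show -broadcast-domain Default
::> network port broadcast-domain remove-ports -broadcast-domain Default -ports Cluster_1:e0d
::> network port broadcast-domain add-ports -broadcast-domain Default -ports Cluster_2:e0M
```

After the change, re-check `net int show -failover` to confirm that every failover target listed for cluster_mgmt is actually on the management network.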
Typical broadcast domains separate things out. For example:
Default (MTU 1500):
NFS (MTU 9000):
CIFS (MTU 1500):
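As a sketch of how such a split might be created (the domain names, ports, and node names here are hypothetical examples, not from your setup):

```
::> network port broadcast-domain create -broadcast-domain NFS -mtu 9000 -ports Cluster_1:e0c,Cluster_2:e0c
::> network port broadcast-domain create -broadcast-domain CIFS -mtu 1500 -ports Cluster_1:e0d,Cluster_2:e0d
```

The point is that each broadcast domain contains only ports that share the same physical Layer 2 network, so a LIF can safely fail over to any port in its domain.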
Provide more details if this does not work:
"broadcast-domain show ; ifgrp show"
"net int show -failover"
(but please try to copy/paste if you can instead of a "picture". I know some places cannot, but if you can, it is easier!)