I'm in the process of upgrading my two-node cluster, running ONTAP 9.6P5. I'm doing this by adding another two nodes to it, moving all the data and LIFs to that node pair, then decomming the old nodes. I've moved all volumes and data LIFs and everything's been working great...until now.
I'm at the point where I need to migrate the cluster mgmt. LIF to the new node pair. However, it won't work. This is the behavior:
- I do "network interface migrate ..." to either of the new nodes
- Command completes OK
- I can still ping the LIF and telnet to it on ports 80 & 443
- I can no longer reach the System Manager website
If I migrate the LIF back to either of the old nodes, access to the website is restored.
The cluster mgmt. LIF is on the e0M ports, and if I do "network interface failover-groups show ...", all four nodes' e0M ports are listed as failover targets in the same broadcast domain.
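For reference, the commands I'm running are roughly the following (vserver, node, and failover-group names are placeholders for my environment):

```
::> network interface migrate -vserver <cluster-svm> -lif cluster_mgmt -destination-node <new-node-01> -destination-port e0M
::> network interface failover-groups show -vserver <cluster-svm>
```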
- Before migrating the LIF to one of the new nodes, I can access System Manager from both of the old nodes (using the node hostname or IP) but I cannot access it from either of the new nodes
- After migrating it to one of the new nodes, I can still access it from both of the old nodes (using the hostname or IP), but I can't access it from either of the new nodes, and I can't access it via the cluster VIP hostname or IP either
So the bottom line, I think, is that for whatever reason, the System Manager isn't accessible at all from the new nodes and I don't know why.
"Where is the host/client - on the same subnet as VLAN 198 or not?"
"If you can ping/telnet/ssh from that localhost, it would seem to point to a potential firewall issue."
There aren't any firewalls in the path between my computer and the NetApp.
"What exactly is the browser response? Not found - 404 or something else?"
Depends on the browser:
IE: "This page can’t be displayed"
Edge: "Hmm, we can't reach this page"
The output from Fiddler is more useful:
"HTTPS handshake to 10.25.1.80 (for #7) failed. System.IO.IOException Authentication failed because the remote party has closed the transport stream."
Note that this isn't solely to do with the floating cluster mgmt. interface. I can't connect to System Manager using the mgmt. IP addresses for the new nodes either, whereas I can using those of the old nodes.
So the overall problem seems to be that the new nodes simply aren't permitting any HTTPS connections.
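To rule out the web service and SSL config on the new nodes, these are the checks I'd run next (standard ONTAP 9.x commands; adjust scope as needed):

```
::> system services web show
::> security ssl show
::> system services firewall show
```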
network interface service-policy show - Display existing service policies
network interface service-policy add-service - Add an additional service entry to an existing service policy (advanced)
network interface service-policy clone - Clone an existing network service policy (advanced)
network interface service-policy create - Create a new service policy (advanced)
network interface service-policy delete - Delete an existing service policy (advanced)
network interface service-policy modify-service - Modify a service entry in an existing service policy (advanced)
network interface service-policy remove-service - Remove a service entry from an existing service policy (advanced)
network interface service-policy rename - Rename an existing network service policy (advanced)
network interface service-policy restore-defaults - Restore default settings to a service policy (advanced)
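Using those commands, the thing worth checking is which service policy the cluster mgmt. LIF is using and whether it includes the management-https service (policy name below assumes the ONTAP 9.6 default):

```
::> network interface show -lif cluster_mgmt -fields service-policy
::> network interface service-policy show -vserver <cluster-svm> -policy default-management
```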
All e0M ports are connected to the same switch, with no VLANs and no firewall in between. I've even connected my laptop directly to the same switch, but I still see the same issue. I tried rebooting the newly added nodes again, but nothing changed.
The same thing happens if I try to reach System Manager through the node mgmt. LIFs, so there must be a bug somewhere..