I have set up a 2-node cluster across two VMs. When I attempt to SSH into the mgmt interface via PuTTY, I receive "Error: Connection refused". I am able to ping the address assigned to the mgmt interface. I am using the admin account, and as far as I know SSH access is enabled by default. Is there something else I must do in order to SSH to this?
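For what it's worth, these are the checks I found for verifying SSH access (ONTAP 9.x syntax as best I can tell, so please correct me if I have it wrong):

    security ssh show
    system services firewall policy show mgmt
    security login show -user-or-group-name admin

My understanding is that the first confirms SSH is configured, the second should show ssh allowed in the mgmt firewall policy, and the third should list an ssh application entry for the admin account.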
I have set up a 2-node cluster across two VMs. When I attempt to SSH into the mgmt interface via PuTTY, I receive the error below. I am able to ping the address assigned to the mgmt interface, and I am also able to SSH to the node interface address, but not to the mgmt address. Is there something else I must do in order to SSH to this?
Every time it says:

    You are accessing ViPR. By using this system you consent to the owning organization's terms and conditions.
    Using keyboard-interactive authentication.
    Password:
    Access denied
    Using keyboard-interactive authentication.
    Password:
Configuration for the cluster:

                Logical            Status     Network            Current      Current Is
    Vserver     Interface          Admin/Oper Address/Mask       Node         Port    Home
    ----------- ------------------ ---------- ------------------ ------------ ------- ----
    Cluster
                cluster90-01_clus1 up/up      169.254.3.43/16    cluster90-01 e0a     true
                cluster90-01_clus2 up/up      169.254.3.53/16    cluster90-01 e0b     true
                cluster90-02_clus1 up/up      169.254.102.78/16  cluster90-02 e0a     true
                cluster90-02_clus2 up/up      169.254.102.88/16  cluster90-02 e0b     true
    cluster90
                cluster90-01_mgmt1 up/up      192.168.32.65/24   cluster90-01 e0c     true
                cluster90-02_mgmt1 up/up      192.168.32.66/24   cluster90-02 e0c     true
                cluster_mgmt       up/up      192.168.32.64/24   cluster90-01 e0d     true
    7 entries were displayed.
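In case it matters, I believe the firewall policy attached to the mgmt LIF can be checked with something like this (ONTAP 9.x syntax, names taken from my output above; adjust to your setup):

    network interface show -vserver cluster90 -lif cluster_mgmt -fields firewall-policy
    system services firewall policy show mgmt

If the LIF does not have the mgmt policy (or the policy does not allow ssh), that would presumably explain ping working while SSH is refused.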
I have a similar problem to the one described above. My setup is a two-node cluster. When the cluster mgmt IP is hosted on the e0M port of the first controller, I am able to ping the IP address but cannot log in via SSH; I get the error message "connection refused...". If I migrate the interface to the e0M port of the second controller, I am able to log in via SSH.
SSH to the node mgmt IP address on the first node doesn't work either. The strange thing is that I am able to log in via SSH to the service processor of the first node, which sits on the same physical port (e0M).
I have the same problem with a cluster (part of a 2-node MetroCluster) running ONTAP 9.3P5 as well. Firewall policies are the same on both MCC clusters, the mgmt LIFs have the correct firewall policy (mgmt) attached, and the 2nd cluster works as expected.
What I tried so far:
SSH connection from different clients -> SSH does not work from any client
ping to cluster_mgmt and node_mgmt -> works
https to System Manager via cluster_mgmt -> works
ssh to both interfaces -> doesn't work
migrating the LIFs away from e0M to a VLAN-tagged ifgrp -> doesn't help
bringing the interfaces down and up again -> doesn't help
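For reference, the LIF migration I tried was along these lines (ONTAP 9.3 syntax as I remember it; vserver, node, and port names here are just examples):

    network interface migrate -vserver cluster90 -lif cluster_mgmt -destination-node cluster90-02 -destination-port a0a-100
    network interface revert -vserver cluster90 -lif cluster_mgmt

The revert sends the LIF back to its home node/port afterwards.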
I was only able to track down this issue by going into the systemshell from diag mode and poking around in the log files. It turned out that the underlying user ID was wrong and differed between the nodes: one node had the user ID as X and the other node had it as Y. Once I corrected that, SSH worked properly.
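For anyone else hitting this: roughly what I did, from memory (the systemshell is unsupported territory, and on recent ONTAP versions the diag user may need to be unlocked first with security login unlock/password, so please involve NetApp support before changing anything):

    set -privilege diag
    systemshell -node <node-name>
    grep admin /etc/passwd
    exit

Running the grep on both nodes and comparing the numeric user IDs shows whether they differ.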