ONTAP Discussions

Multiple cluster-management interfaces

sta

Hi all,

 

On two of the cDOT 2-node clusters that I am supposed to administer, I discovered TWO cluster-mgmt interfaces.

Does that make any sense according to NetApp recommendations? I was pretty sure that only ONE cluster-mgmt interface per cluster was allowed.

Is there a reason to worry about side effects?

1 ACCEPTED SOLUTION

aborzenkov

Well, while it is certainly unusual, I do not see anything wrong with having multiple interfaces with the cluster-management role. Of course, there can be only one routing table for the admin SVM, so you can really reach only one of them from a different subnet.


9 REPLIES

aborzenkov

Please paste output of "network interface show".

JGPSHNTAP

^^
What he said

 

Also, what version are you running?

 

 

sta

And the exact version is NetApp Release 8.2.1P2 Cluster-Mode: Sat Jun 14 04:10:39 PDT 2014.

aborzenkov

Well, while it is certainly unusual, I do not see anything wrong with having multiple interfaces with the cluster-management role. Of course, there can be only one routing table for the admin SVM, so you can really reach only one of them from a different subnet.
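To verify that yourself on 8.2, the admin SVM's single routing configuration can be inspected with the routing-groups commands (these were replaced by "network route show" in later releases); "<cluster-name>" below is a placeholder for your admin SVM name, e.g. clf3220ftv01:

    ::> network routing-groups show -vserver <cluster-name>
    ::> network routing-groups route show -vserver <cluster-name>

If both cluster-mgmt LIFs sit in subnets whose gateways live in different routing groups, only the one matching the active route will be reachable from a remote subnet.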

JGPSHNTAP

Wow, so much for simplicity.....

 

I assume you are talking about the top items

sta

Yeah, network policies seem really simple. 😉

 

And yes, the cluster-mgmt interfaces are the top two items of the list.

JGPSHNTAP

Ya, you got a lot going on...

sta

Hi,

 

Below is the output of "network interface show" for one of the two clusters.

Among the side effects, I already see one with OnCommand Unified Manager: even though it is configured with the first cluster-mgmt address, after some time it picks up the second (unreachable) address instead.

 

            Logical    Status     Network            Current       Current Is
Vserver     Interface  Admin/Oper Address/Mask       Node          Port    Home
----------- ---------- ---------- ------------------ ------------- ------- ----
clf3220ftv01
            clf3220ftv01_mgmt up/up 172.22.34.40/27  ndf3220ftv01  e0a     true
            clf3220ftv01_mgmt2203 up/up 10.199.203.2/24 ndf3220ftv01 e1b-2203 true
ndf3220ftv01
            intcl_lif1   up/up    172.22.34.39/27    ndf3220ftv01  e0a     true
            ndf3220ftv01_clus1 up/up 169.254.123.87/16 ndf3220ftv01 e1a    true
            ndf3220ftv01_clus2 up/up 169.254.43.247/16 ndf3220ftv01 e2a    true
            ndf3220ftv01_mgt up/up 172.22.34.41/27   ndf3220ftv01  e0M     true
            ndf3220ftv01_snapmirror up/up 10.99.100.120/27 ndf3220ftv01 e2b-413 true
ndf3220ftv02
            intcl_lif2   up/up    172.22.34.49/27    ndf3220ftv02  e0a     true
            ndf3220ftv02_clus1 up/up 169.254.243.218/16 ndf3220ftv02 e1a   true
            ndf3220ftv02_clus2 up/up 169.254.219.118/16 ndf3220ftv02 e2a   true
            ndf3220ftv02_mgt up/up 172.22.34.42/27   ndf3220ftv02  e0M     true
            ndf3220ftv02_snapmirror up/up 10.99.100.121/27 ndf3220ftv02 e1b-413 true

...
<EDIT: removing entries containing sensitive information>

...
57 entries were displayed.
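To isolate just the two LIFs in question from a long listing like this, the output can be filtered by role; a minimal sketch using the standard show-command filtering:

    ::> network interface show -role cluster-mgmt

On this cluster, that should return only clf3220ftv01_mgmt and clf3220ftv01_mgmt2203.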

pavila

From an installation perspective, which ports are used for cluster-mgmt depends on the installation engineer, but on this particular controller the favored port is typically e0a rather than e0M. Port e0a also serves as the node-mgmt port, which is fine. I've also seen customers try to use an ifgrp (e0a,e0b) for cluster-mgmt, but there's no reason to do this either, because cluster-mgmt only carries System Manager and SSH traffic, and in the event of a port failure the LIF will move over to a different port anyway. Can you provide the output of the 'net port show' command if possible?
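The requested port listing, plus the failover behavior mentioned above, can be gathered in one go; the LIF name here is taken from the "network interface show" output earlier in the thread:

    ::> network port show
    ::> network interface show -lif clf3220ftv01_mgmt -failover

The -failover view shows which ports each management LIF is allowed to migrate to, which is why an ifgrp for cluster-mgmt adds little value.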
