FAS and V-Series Storage Systems Discussions

Any reason not to add additional ports for cluster interconnect?

We got a new FAS2620 and have it all happy and running, but one thing I notice in System Manager is that it whines that 4 ports are down. e0e and e0f on both nodes are unused, since I figure 20Gbps per node is more than plenty for a system running 7200rpm disks (not to mention all I have space for on our switches), and we don't do FC. While the error isn't problematic, I'd rather clear it up if I could. So my thought was to just get two more direct-attach cables and make those ports additional cluster interconnect ports. Cheap, easy, and the error will go away. Currently there are the two default interconnect ports, e0a and e0b.

 

Any reason that's a bad idea or would cause issues?

4 REPLIES

Re: Any reason not to add additional ports for cluster interconnect?

Hi there!

 

Per our Hardware Universe page (Platforms -> FAS/V-Series), only two cluster connections are supported on this platform, so the recommendation would be not to connect them as additional cluster ports.

 

I would suggest running this command to administratively disable the ports - "network port modify -node * -port e0e,e0f -up-admin false" - and then refreshing OnCommand to see if the error goes away.
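For reference, the full sequence might look something like this (a sketch based on the command above; the node and port names come from this thread, and the exact prompt and output formatting will vary by ONTAP release):

```
::> network port modify -node * -port e0e,e0f -up-admin false
::> network port show -node * -port e0e,e0f -fields up-admin
```

The second command just verifies that `up-admin` now shows `false` for all four ports, so the alert should reflect an intentional admin-down state rather than a link failure.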

 

Hope this helps!

Re: Any reason not to add additional ports for cluster interconnect?

Unfortunately that doesn't work. My research indicates this is a known ONTAP issue: https://kb.netapp.com/app/answers/answer_view/a_id/1070568. That's why I was thinking of just hooking the ports up to each other as additional cluster interconnects.

Re: Any reason not to add additional ports for cluster interconnect?

This is fixed in OnCommand System Manager 9.4.

https://mysupport.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=956352

Re: Any reason not to add additional ports for cluster interconnect?

Good deal. I'll look at upgrading to that once it reaches general availability.
