We have just configured a new cluster and are about to go to production. I ran Config Advisor and it shows that several interfaces have Flow Control turned on. When I run the network port show command against them, I see two settings: Flow Control Administrative, which is set to Off, and Flow Control Operational, which is set to On. From reading NetApp documentation, Administrative sets the preferred value and Operational simply reports what the actual value is. Does anyone have ideas regarding what this indicates? Why is flow control on when we set it to off?
I can only speak from my 7-mode experience.
We turn flow control off in the rc file at the physical interface level.
For 10GbE, it's recommended to set flowcontrol to none:
ifconfig e0a flowcontrol none
Thanks to both of you! Yeah, the rc file doesn't exist in cluster mode, but the same change can be made at the command line. The issue is that we made the change, and it shows as being turned off, but it also shows that FUNCTIONALLY flow control is still on. As for restarting ports, we have actually performed a full failover/giveback and the settings are staying the same.
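For anyone else reading along: in cluster mode the per-port equivalent of that ifconfig change is network port modify with the flowcontrol-admin parameter. A sketch (node and port names here are hypothetical placeholders):

```
::> network port modify -node node-01 -port e0a -flowcontrol-admin none
::> network port show -node node-01 -port e0a -fields flowcontrol-admin,flowcontrol-oper
```

The second command lets you compare the admin setting against what the port is actually doing operationally, which is exactly the mismatch being discussed here.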
We just checked with our network team and they indicate flow control is off on their end.
One possibility: the interface groups show "Flow Control Administrative" as on, and it can't be turned off while the interface groups are in use. However, I've been told that you can't change flow control settings for interface groups at all so it isn't relevant. The underlying physical ports are the ones that show "Flow Control Operational" as on, even though "Flow Control Administrative" shows it as off. Any other ideas?
We ran into the same issue with ports that were members of an ifgrp. The administrative flow control setting was "none", but operational was "full" (even after restarting the nodes). The solution was to remove one of the ports from the ifgrp, disable flow control, re-add it to the ifgrp, and then do the same on the other port(s) in the ifgrp. The entire process should be non-disruptive (assuming you have more than one functional port in your ifgrps). After that the ports reflected flow control as "none" for both administrative and operational.
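For reference, that sequence looks roughly like this in the cluster shell (node, ifgrp, and port names below are hypothetical; repeat for each member port, one at a time, so the ifgrp stays up):

```
::> network port ifgrp remove-port -node node-01 -ifgrp a0a -port e0b
::> network port modify -node node-01 -port e0b -flowcontrol-admin none
::> network port ifgrp add-port -node node-01 -ifgrp a0a -port e0b
```

Verify with network port show -fields flowcontrol-admin,flowcontrol-oper before moving on to the next member port.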
I perform cDOT consulting for NetApp. This is a great question; I hope I can clear it up for you.
Flowcontrol-admin is, as you mentioned, how flow control is configured on the STORAGE NODE. Flowcontrol-oper is the operational state of flow control on the port, as dictated by both the SWITCH PORT configuration and the storage node configuration.
Thus, if you have disabled flow control on the storage node but flowcontrol-oper still says FULL, that means the network switch needs to be updated to have flow control fully disabled, i.e. set to NONE.
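As an illustration of what to check on the switch side, here is what that looks like on a Cisco Nexus (an assumption based on the N5K switches in the device-discovery output below; the interface number is a placeholder, and other switch OSes have equivalent commands):

```
switch# show interface ethernet 1/31 flowcontrol
switch# configure terminal
switch(config)# interface ethernet 1/31
switch(config-if)# flowcontrol receive off
switch(config-if)# flowcontrol send off
```

Note that flow control is negotiated per direction (send/receive), so both sides of the link need to agree for flowcontrol-oper to show none.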
Command line examples:
psclus::> network port show -speed-oper 10000 -fields speed-oper,flowcontrol-admin,flowcontrol-oper
node      port speed-oper flowcontrol-admin flowcontrol-oper
--------- ---- ---------- ----------------- ----------------
psclus-01 e1a  10000      none              none
psclus-01 e1b  10000      none              none
psclus-02 e1a  10000      none              none
psclus-02 e1b  10000      none              none
4 entries were displayed.

psclus::> network port show -role cluster -fields flowcontrol-admin,flowcontrol-oper
node      port flowcontrol-admin flowcontrol-oper
--------- ---- ----------------- ----------------
psclus-01 e1a  none              none
psclus-01 e1b  none              none
psclus-02 e1a  none              none
psclus-02 e1b  none              none
4 entries were displayed.

psclus::> network device-discovery show -node psclus-01
            Local  Discovered
Node        Port   Device                    Interface        Platform
----------- ------ ------------------------- ---------------- ----------------
psclus-01
            e0M    psclus-02                 e0M              FAS3140
            e0a    phx-5k(SSI132908XB)       Ethernet120/1/31 N5K-C5010P-BF
            e0b    phx-5k(SSI132908XB)       Ethernet120/1/33 N5K-C5010P-BF
            e1a    psclus-sw1                0/1              CN1610
            e1b    psclus-sw2                0/1              CN1610
5 entries were displayed.

psclus::>
Hope this answers the question. If so - hit the Kudos button and mark it as Answered 😃
That's what I'd thought, but our network guys insist it is off on the network end of things. I reviewed settings with them and they appeared to be correct.
In our case, we had the same issue. Config Advisor flagged some ports (e0b and e0d) as having flow control enabled and we couldn't get the operational state to reflect the administrative state (none). We checked the Nexus switches they were connected to and there were no flow control settings configured on the interfaces (default is disabled). We then noticed that the individual ports having issues were members of an ifgrp. We did the steps above to remove each member port one at a time and reconfigure them. After that, e0b and e0d on the nodes showed the correct administrative and operational flowcontrol setting (none).
Are your ports that are having issues members of an ifgrp? As hadrian mentioned "Ignore the flowcontrol display for an ifgrp - it is all about the member ports."
Thanks Eric! Actually the ports in question are all members of interface groups. The admin setting for each port is set to none, but the operational setting for four of them is set to full. If I am hearing correctly, even though the admin setting is set to none for each port, in these few cases I need to pull the port out of the ifgrp, set it to flowcontrol=none again, and then re-add it. Is that correct?
Correct, that's the only way we could get the operational setting to reflect "none". As long as the ifgrp is set up correctly it should be non-disruptive. However, if it's a critical system you may just want to schedule it in a maintenance window.