We have just configured a new cluster and are about to go to production. I ran Config Advisor and it shows that several interfaces have Flow Control turned on. When I ran the network port show command against them, there are two settings: Flow Control Administrative, which is set to Off, and Flow Control Operational, which is set to On. From reading NetApp documentation, Administrative sets the preferred value and Operational simply reports what the actual value is. Does anyone have ideas regarding what this indicates? Why is flow control on when we set it to off?
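For reference, here is roughly how we are comparing the two settings from the clustershell (the field names are as shown by `network port show`; our actual node and port names are omitted):

```shell
# Show the administrative (preferred) vs. operational (actual)
# flow control setting for every port in the cluster
network port show -fields flowcontrol-admin,flowcontrol-oper
```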
Thanks to both of you! Yeah the RC file doesn't exist in cluster mode, but the same change can be made at the command line. The issue is - we made the change, and it shows as being turned off, but it also shows that FUNCTIONALLY flow control is still on. As for restarting ports, we have actually performed a full failover/giveback and the settings are staying the same.
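For anyone who finds this later, the command-line change was along these lines (node and port names here are placeholders, not our real ones):

```shell
# Set the administrative (preferred) flow control value to none on a port
network port modify -node node01 -port e0b -flowcontrol-admin none
```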
We just checked with our network team and they indicate flow control is off on their end.
One possibility: the interface groups show "Flow Control Administrative" as on, and it can't be turned off while the interface groups are in use. However, I've been told that you can't change flow control settings for interface groups at all so it isn't relevant. The underlying physical ports are the ones that show "Flow Control Operational" as on, even though "Flow Control Administrative" shows it as off. Any other ideas?
We ran into the same issue with ports that were members of an ifgrp. The administrative flow control setting was "none", but operational was "full" (even after restarting the nodes). The solution was to remove one of the ports from the ifgrp, disable flow control, re-add it to the ifgrp, and then do the same on the other port(s) in the ifgrp. The entire process should be non-disruptive (assuming you have more than one functional port in your ifgrps). After that the ports reflected flow control as "none" for both administrative and operational.
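The sequence looked roughly like this (node, ifgrp, and port names below are examples only; repeat for each member port, one at a time, so the ifgrp stays up throughout):

```shell
# 1. Remove one member port from the ifgrp
network port ifgrp remove-port -node node01 -ifgrp a0a -port e0b

# 2. Disable flow control on the now-standalone port
network port modify -node node01 -port e0b -flowcontrol-admin none

# 3. Re-add the port to the ifgrp
network port ifgrp add-port -node node01 -ifgrp a0a -port e0b

# 4. Verify that admin and operational now both show none
network port show -node node01 -port e0b -fields flowcontrol-admin,flowcontrol-oper
```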
I perform cDOT consulting for NetApp. This is a great question; I hope I can clear it up for you.
Flowcontrol-admin is, as you mentioned, how flow control is configured on the STORAGE NODE. Flowcontrol-oper is the operational state of flow control on the port, as dictated by both the SWITCH PORT configuration and the storage node configuration.
Thus, if you have disabled flow control on the storage node but flowcontrol-oper still says FULL, the network switch needs to be updated so that flow control is fully disabled on its side as well, i.e., NONE.
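As an illustration only, on a Cisco Nexus switch the per-interface setting would look something like the following (the interface name is a placeholder, and the exact syntax varies by switch vendor and OS):

```shell
# Disable flow control in both directions on the switch port
# facing the storage node (Cisco Nexus example)
configure terminal
interface Ethernet1/10
  flowcontrol receive off
  flowcontrol send off
```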
Always use Config Advisor, as this gentleman has done, to check your initial setup and whenever you make physical changes to the storage cluster
Always disable flowcontrol on 10G ports - data or cluster - for best performance. Flow control on 1G is also not really necessary anymore with modern switching gear
Config Advisor will alert on flowcontrol being enabled on Cluster Role ports - used for Cluster Interconnect switch connectivity
You can use the 'network device-discovery show' command to make conversations with your network switch team easier when discussing which switch ports need updated flowcontrol settings
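A quick sketch of what that gives you (no arguments needed for the basic view):

```shell
# Show what each node port sees via discovery protocols (CDP/LLDP):
# the neighboring switch, the remote port it is cabled to, and the platform.
# Handy for telling the network team exactly which switch ports to check.
network device-discovery show
```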
Ignore the flowcontrol display for an ifgrp - it is all about the member ports.
In our case, we had the same issue. Config Advisor flagged some ports (e0b and e0d) as having flow control enabled, and we couldn't get the operational state to reflect the administrative state (none). We checked the Nexus switches they were connected to, and there were no flow control settings configured on the interfaces (the default is disabled). We then noticed that the individual ports having issues were members of an ifgrp. We did the steps above to remove each member port one at a time and reconfigure them; after that, e0b and e0d on the nodes showed the correct administrative and operational flowcontrol setting (none).
Are your ports that are having issues members of an ifgrp? As hadrian mentioned "Ignore the flowcontrol display for an ifgrp - it is all about the member ports."
Thanks Eric! Actually the ports in question are all members of interface groups. The admin setting for each port is set to none, but the operational setting for four of them is set to full. If I am hearing correctly, even though the admin setting is set to none for each port, in these few cases I need to pull the port out of the ifgrp, set it to flowcontrol=none again, and then re-add it. Is that correct?
Correct, that's the only way we could get the operational setting to reflect "none". As long as the ifgrp is set up correctly it should be non-disruptive. However, if it's a critical system you may just want to schedule it in a maintenance window.