Could someone shed some light on the flow control settings for UTA ports? Below is the scenario.
I am working on the configuration of a 4-node 3220 cluster. We are using this 4-node cluster for FC and NFS traffic, and each UTA port on the nodes serves both FCoE and NFS traffic (the UTA port logically acts as an FC port and a 10GbE port). Currently, flow control is set to FULL on the 10GbE logical port that serves the NFS traffic. My understanding is that the flow control setting at the 10GbE logical port level doesn't affect the FC traffic flow of the UTA port, and that it is effective on the 10GbE logical port only. Please confirm this.
NetApp's best practice is to set flow control to NONE and let the upper-layer protocols take care of flow control. If I change the flow control setting of the 10GbE logical port, does it affect the data flow of the logical FCoE port as well?
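For context, this is the change I would be making from the clustershell (node and port names below are placeholders for our actual nodes and UTA ports):

::> network port show -fields flowcontrol-admin
::> network port modify -node node1 -port e1a -flowcontrol-admin none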
Now, the actual issue: we use the controllers mostly to serve FC traffic, and we are experiencing performance issues on all the nodes. The current ONTAP version on the controllers is 8.2P5.
We are hitting latencies of 50-80ms. I have been trying to track the IO on the controllers; IO was not really high when we had the performance issues, but at times disk utilization and CPU utilization hit 80%.
Performance Manager doesn't have the granularity at the aggregate level to track which LUNs/volumes are causing the high utilization.
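The per-volume view I am after is something like what the qos statistics commands give (available on 8.2, as I understand it):

::> qos statistics volume latency show -iterations 5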
I am mainly using the sysstat -M, sysstat -x, netstat, and statit commands. However, I am not sure whether any other internal processes are causing the high disk utilization.
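For reference, this is how I have been collecting those from the nodeshell on each node (node1 is a placeholder; statit needs advanced privilege):

::> system node run -node node1
node1> priv set advanced
node1*> statit -b
(wait through a busy interval)
node1*> statit -e
node1*> sysstat -x 1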
Please share your ideas on this. One more point: these controllers are part of a FlexPod configuration.