2014-03-12 06:23 AM
I have a 3240 with a large number of CIFS shares. We occupy three suites within an office building; two of them can access the CIFS shares at normal gigabit speeds (~80 MB/s transfer rates), but our newest suite only gets 100 KB/s–10 MB/s transfers to the same files and shares. I tested access to Windows servers from that suite, and those all run at the normal 80 MB/s. The network is the same in all suites. Suite 1 houses the data center; local users access files from a Cisco 2960S switch connected via a 10 Gb link to a Nexus 7010, then 10 Gb into the NetApp, and speeds here are normal. Suite 2 has a Cisco 3560G switch connected via a 1 Gb link to the Nexus 7010, then 10 Gb to the NetApp, and speeds here are also normal. Suite 3 is the new suite; it has a Cisco 2960S switch connected via a 10 Gb link to the Nexus 7010. Speeds here are slow, yet from this suite I can access a Windows file share at 80 MB/s but not the NetApp CIFS shares.
I have upgraded and downgraded the code on the new suite 3 switch, and I have enabled and disabled SMB2 on the NetApp; neither helped. I have verified the QoS settings, switch uplink configs, and switch port configs, and all match the other switches on the network.
Can anyone offer other troubleshooting steps I could try to resolve this issue?
Solved!
2014-03-18 02:46 AM
We are experiencing similar issues with a Cisco Nexus 5020 and Fabric Extenders: copies from our NetApp clusters are very slow, but the other direction runs at wire speed.
I wrote a simple PowerShell script to time a copy job, just to pinpoint where the issue was: http://vikingtown.se/storage-scripting/?p=32
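For anyone who wants to reproduce this kind of measurement without PowerShell, here is a minimal Python sketch of the same idea: time a file copy and report the throughput. The paths in the comment are placeholders, not the actual shares from this thread.

```python
import os
import shutil
import tempfile
import time

def timed_copy(src: str, dst: str) -> float:
    """Copy src to dst and return the observed throughput in MB/s."""
    start = time.perf_counter()
    shutil.copyfile(src, dst)
    elapsed = time.perf_counter() - start
    size_mb = os.path.getsize(src) / (1024 * 1024)
    # Guard against a sub-resolution timer on very small files.
    return size_mb / elapsed if elapsed > 0 else float("inf")

# Point src at a local file and dst at a UNC path on the filer
# (e.g. r"\\filer\share\testfile" -- hypothetical path) to measure
# CIFS write speed; swap the arguments to measure read speed.
```

Running it against a Windows share and then against the NetApp share from the same client is a quick way to confirm whether the slowdown is specific to the filer, as the original poster observed.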
We don't have the same switch setup as you, but it seems we're running out of buffers in our Fabric Extenders. We saw packet drops on the interface connected to the server, but not on the uplink to the Nexus.
If you have drops on the uplinks the solution would be to add more bandwidth.
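For reference, this is roughly how we spotted the drops. A sketch using standard NX-OS show commands; the interface numbers are examples and need to be adjusted to your own topology:

```
! Hypothetical interface numbers -- substitute your FEX host port and uplink.
! Per-interface error counters (look for output discards):
switch# show interface ethernet 100/1/1 counters errors
! Queuing and drop statistics on the FEX host interface:
switch# show queuing interface ethernet 100/1/1
! Compare against the uplink toward the Nexus:
switch# show queuing interface port-channel 100
```

Drops on the host-facing interface with a clean uplink point at buffering on the FEX; drops on the uplink itself point at insufficient bandwidth, as noted above.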
To solve our problem, the fix seems to be configuring the FEX to share its buffers ("no hardware N2148 queue-limit"), but Cisco has not verified this yet.
2014-03-18 04:57 AM
Whenever I see different speeds between a Windows server and a NetApp filer, I first check the TCP window size for CIFS on the NetApp. The NetApp default is only 17520, while Windows servers and Windows 7 clients default to 2096560. Note that changing this value requires a disruptive restart of CIFS.
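As a sketch of that check on a 7-Mode system (the option name is taken from 7-Mode documentation, and 64240 is only an example value; pick the window size per NetApp's guidance for your environment). The cifs terminate/restart sequence is the disruptive part mentioned above:

```
filer> options cifs.tcp_window_size         # show the current value (default 17520)
filer> cifs terminate                       # disruptive: drops all CIFS sessions
filer> options cifs.tcp_window_size 64240   # example larger window size
filer> cifs restart
```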
2014-03-18 08:14 AM
Since no one was posting on this, I figured it out myself. It turns out the QoS settings on the 10 Gb uplink of the new 2960 were the problem. I turned QoS off and everything started running the way it is supposed to. The weird thing is that QoS is enabled on the other switch, which also works fine. Either way, my users are happy and running at 10 Gb; it was the QoS.
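For anyone else hitting this, the change amounts to disabling QoS globally on the Catalyst 2960. A sketch of the commands involved; note that "no mls qos" affects marking and queuing for all traffic on the switch, so do this in a maintenance window if you rely on QoS elsewhere:

```
Switch# show mls qos                ! check whether QoS is currently enabled
Switch# configure terminal
Switch(config)# no mls qos          ! disable QoS globally
Switch(config)# end
Switch# copy running-config startup-config
```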
Thanks to those that posted.
2014-04-04 08:46 AM
I am facing this exact same issue, always with Cisco 2960 switches and NetApp filers.
What if we need QoS? Did you manage to understand why this only happens with CIFS traffic from the NetApp? Have you tried any customized QoS policy that worked?
2015-05-28 09:26 PM
Hi Pedro, did you ever work out what was causing the performance issue with CIFS shares between your client PCs, your Cisco 2960 switches, and your NetApp?
I have just installed a FAS2554 with 2960 access-layer switches that have QoS configured for VoIP, and what do you know, CIFS traffic to the NetApp is slower than a snail.