expected performance from CIFS (throughput)?

I have a question about CIFS on NetApp filers. When I enable CIFS on my vFilers, I see strangely low throughput from clients. For example, copying a folder containing 20k small files (for a website) from a CIFS share to local disk (on a dedicated Windows 2008 R2 server) runs at about 1.5 MB/s or even less (it drops to 400 KB/s). At first I thought it was because of the small files, but I ran the same test with the same folder between two physical servers (Windows 2008 R2) over UNC shares, and throughput held constant at 15 MB/s (so 10x more).

OK, the disks are shared with other workloads as well (14x SATA disks in the aggregate, also used for NFS), but their load averages about 60% per disk, so it should still perform much better than this...

I am confused about this speed and about what should be expected from this setup. Someone could say "switch to FC disks", but those two physical servers also have SATA disks inside, and copying between them performs normally (as it should).

Can anyone point me in the right direction here, or even better, tell me what should be expected from CIFS on a NetApp? Maybe 1.5 MB/s with small files is the maximum a NetApp can push?

BTW, I also did a test with a bigger file (2 GB): the copy ran at 20 MB/s (better, but still nowhere near maxing out the 1 Gbit connection). The same 2 GB file copied between the two servers at 100 MB/s.
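If anyone wants to reproduce the test, a copy along these lines gives a Bytes/sec figure in the robocopy summary at the end; the share and folder names below are just placeholders, not my real paths:

    robocopy \\filer-cifs\web D:\webcopy /E /NFL /NDL /NP /R:0 /W:0

The /NFL /NDL /NP switches suppress per-file output so the run isn't slowed down by console logging, and the summary line reports the average speed for the whole folder.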

I hope someone can help with this.

Re: expected performance from CIFS (throughput)?

Hello there,

I have seen some horrendous performance with CIFS on filers, and I am currently still trying to nail down an issue where an LACP vif gets 15 Mb/s to some servers while a multimode vif works fine; it doesn't help load balancing at all. One thing to check (if you can) is changing the vif mode if you are using LACP. SMB2 is disabled by default on filers, and turning it on will give you a boost with 2008 R2. Increasing the CIFS TCP window size can also improve performance in some circumstances.
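Assuming you are on 7-Mode, the relevant knobs look roughly like this on the filer console; option and command names can differ between ONTAP releases, so treat this as a sketch and check your release's documentation first:

    options cifs.smb2.enable            (shows the current setting)
    options cifs.smb2.enable on         (enables SMB2; existing sessions may need to reconnect to negotiate it)
    options cifs.tcp_window_size        (shows the current window size in bytes)
    options cifs.tcp_window_size 64240  (example value; see the window-size discussion further down)
    vif status                          (shows the vif mode; on 8.x 7-Mode it's ifgrp status)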

NetApp have told me in previous support calls about CIFS that NetApp's implementation of CIFS doesn't match Microsoft's and not to expect the same speed, but with SMB2 and a chunky R710 with a decent Intel card I can pull 100 MB/s (on a good day without LACP turned on, or with it if I'm very lucky).

Re: expected performance from CIFS (throughput)?

I just read this post now. Are you still having issues with CIFS performance? If so, please collect a perfstat. 1.5 MB/s seems far too low.
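For reference, a typical 7-Mode perfstat capture from an admin host looks something like the sketch below. The exact tool name (perfstat7, perfstat.sh) and flags vary between perfstat versions, so check the tool's own help; the filer name is just a placeholder:

    perfstat7 -f filer01 -t 2 -i 5 > perfstat_filer01.out

That runs 5 iterations of 2 minutes each against filer01 and writes everything to one file. Ideally run it while the slow CIFS copy is in progress so the problem is captured in the data.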



Re: expected performance from CIFS (throughput)?

Have you done any testing to verify that your NIC is configured properly? Depending on what type of NIC you are using (PCI vs. onboard), that could be your issue.
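A quick sanity check on both ends might look like this (7-Mode console commands plus a Windows prompt; just a sketch, and the exact output differs by release and driver):

    ifconfig -a                       (on the filer: confirm the interface negotiated the expected speed and flowcontrol)
    ifstat -a                         (on the filer: look for growing error or collision counters)
    netsh interface tcp show global   (on the 2008 R2 client: check receive window auto-tuning and RSS)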

Re: expected performance from CIFS (throughput)?

We are having similar issues. The CIFS latency is low, but even when the system isn't heavily used we still can't seem to pass the 30 MB/s mark.

I sent a perfstat in the past and they said everything looked good.
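If anyone wants to watch the controller while a copy runs, the usual 7-Mode console commands are roughly these (column layout varies a little between releases):

    sysstat -x 1    (per-second CIFS ops, network kB/s and disk utilisation)
    cifs stat       (cumulative counts of the individual CIFS operations)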

Re: expected performance from CIFS (throughput)?

Which version of ONTAP are you using? I think a perfstat would still be useful. Thanks,   -Wei

Re: expected performance from CIFS (throughput)?


I'm discussing with my colleagues about enabling SMB2 (we have a mixed Windows environment) and increasing the TCP window size (still reading up on this).
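Before changing anything we'll record the current values; on 7-Mode that should just be a matter of listing the options (a sketch, assuming the standard option names):

    options cifs.smb2               (lists the SMB2-related options and their current values)
    options cifs.tcp_window_size    (shows the current window size in bytes)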

I'm waiting for a colleague to come back from vacation before I do the next perfstat, as he is likely to do a lot of data moves/copies.

Re: expected performance from CIFS (throughput)?

Great, these are good steps. There are some parameters on the controller you can tune. Thanks,   -Wei

Re: expected performance from CIFS (throughput)?

I'm doing a bit of digging regarding the TCP window size. What I've found tells me to set cifs.tcp_window_size to 2096560.

But I can't find anything else on the net about that particular size; I do find lots of people using 64240.
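This is just my own arithmetic, not anything from NetApp documentation, but both numbers look like multiples of the standard 1460-byte Ethernet MSS:

    64240   = 44   x 1460
    2096560 = 1436 x 1460

so 2096560 appears to be a flat 2 MB window (2097152 = 2 x 1024 x 1024) rounded down to a whole number of full-size segments.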

Re: expected performance from CIFS (throughput)?

My colleague Chowdary answered the following questions:

  • What's the recommendation for tuning cifs.tcp_window_size?

[Chowdary] With SMB2, the size is 2 MB (2097152).

  • What are the best practices to get good CIFS throughput?

[Chowdary] 1. Enabling SMB2 on the controller and using SMB2-capable clients gives better performance than SMBv1.

           2. Make sure there is not much latency between the domain controllers and the controller.

           3. Make sure that no stale DCs are listed under the preferred DCs (see the commands sketched below).
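For points 2 and 3, the usual 7-Mode console commands to check DC connectivity and the preferred-DC list are roughly the following (a sketch; availability can vary by ONTAP release, and the DC address is a placeholder):

    cifs domaininfo      (shows the domain, the DCs the filer knows about and which one it is currently using)
    cifs prefdc print    (lists the preferred DCs configured per domain)
    ping 10.0.0.10       (rough latency check from the filer to a DC)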