ONTAP Discussions

expected performance from CIFS (throughput)?

I have a question about CIFS performance on NetApp filers. When I enable CIFS on my vfilers, I see strangely low throughput from clients. For instance: copying a folder containing 20k small files (for a website) from CIFS to local disk (on a dedicated Windows 2008 R2 server) runs at ~1.5 MB/s or even less (it drops to 400 KB/s). At first I thought it was because of the small files, but I ran the same test with the same folder between 2 physical servers (Windows 2008 R2) over UNC shares, and throughput was a constant 15 MB/s (so 10x more).

OK, the disks are shared with other workloads as well (14x SATA disks in the aggregate, also used for NFS), but their load averages 60% (per disk), so even so it should give much better performance than this...

I am confused by this speed and by what should be expected from this setup. Someone could say: change to FC disks, but those 2 physical servers also have SATA disks inside, and copying between them performs normally (as it should).

Can anyone point me in the right direction here, or even better, tell me what should be expected from CIFS on a NetApp? Maybe 1.5 MB/s with small files is the maximum a NetApp can push?

BTW, I also did a test with a bigger file (2 GB): the copy ran at 20 MB/s (better, but still nowhere near maxing out the 1 Gbit connection). And the same 2 GB file went between the 2 servers at 100 MB/s.
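
In case the measurement method matters: all the numbers above are from plain Explorer drag-and-drop copies. For a more repeatable number, something like robocopy can be used, since it prints average throughput in its summary (the share and path below are made up for the example):

    robocopy \\filer\webshare C:\temp\copytest /E /NFL /NDL /NP

/E copies subfolders, and /NFL /NDL /NP suppress the per-file console output so logging doesn't skew the small-file numbers; the "Speed" lines at the end of the summary give bytes/sec.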

I hope someone can help with this...


Re: expected performance from CIFS (throughput)?

Hello there,

I have seen some horrendous performance with CIFS and filers, and I am currently still trying to nail down an issue where an LACP vif gets 15 Mb/s to some servers while a multimode vif works fine; it doesn't help load balancing at all. One thing to check (if you can) is changing the vif mode if you are using LACP. Also, SMB2 is disabled by default on filers; turning it on will give you a boost with 2008 R2 clients. Increasing the CIFS TCP window size can improve performance in some circumstances as well.
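
From memory, the relevant 7-mode commands look like this (verify against the docs for your ONTAP version before changing anything; the window size value is just a commonly seen example):

    vif status                           (shows the mode of each vif)
    options cifs.smb2.enable on          (SMB2 is off by default)
    options cifs.tcp_window_size 64240   (larger CIFS TCP window)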

NetApp have told me in previous support calls about CIFS that NetApp's implementation of CIFS doesn't match Microsoft's, and not to expect the same speed. But with SMB2 and a chunky R710 with a decent Intel card I can pull 100 MB/s (on a good day with LACP turned off, or with it on if I'm very lucky).

Re: expected performance from CIFS (throughput)?

I just read this post now. Are you still having issues with CIFS performance? If so, please collect a perfstat. 1.5 MB/s seems far too low.

Thanks,

Wei

Re: expected performance from CIFS (throughput)?

Have you done any testing to verify that your NIC is configured properly? Depending on what type of NIC you are using (PCI vs. onboard), that could be your issue.

Re: expected performance from CIFS (throughput)?

We are having similar issues. The CIFS latency is low, but even when the system isn't heavily used we still can't seem to pass the 30 MB/s mark.

I sent a perfstat in the past and they said everything looked good.

Re: expected performance from CIFS (throughput)?

Which version of ONTAP are you using? I think a perfstat would still be useful. Thanks,   -Wei

Re: expected performance from CIFS (throughput)?

We're using 7.3.5.1.

I'm discussing with my colleagues whether to enable SMB2 (we have a mixed Windows environment) and increase the TCP window size (still reading up on this).

I'm waiting for a colleague to come back from vacation before I do the next perfstat, as he is likely to do a lot of data moves/copies.

Re: expected performance from CIFS (throughput)?

Great. These are good steps. There are also some parameters on the controller you can tune. Thanks,   -Wei

Re: expected performance from CIFS (throughput)?

I'm doing a bit of digging regarding the TCP window size.

http://media.netapp.com/documents/tr-3869.pdf tells me to set cifs.tcp_window_size to 2096560,

but I can't find anything else on the net about that size. However, I find lots of people using 64240.
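
For what it's worth, 64240 is exactly 44 full-size Ethernet segments (44 x 1460 bytes), which is probably why it turns up so often. The option itself is easy to inspect and change from the filer console (7-mode syntax; treat the value as a starting point to test, not a recommendation):

    options cifs.tcp_window_size         (run without a value to print the current setting)
    options cifs.tcp_window_size 64240   (set the new window size)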

Re: expected performance from CIFS (throughput)?

My colleague Chowdary answered the following questions:

  • What's the recommendation for tuning cifs.tcp_window_size?

[Chowdary] With SMB2, the size is 2 MB (2097152).

  • What are the best practices to get good CIFS throughput?

[Chowdary] 1. Enabling SMB2 on the controller and using SMB2-enabled clients gives better performance than SMBv1.

2. Make sure there is not much latency between the domain controllers and the storage controller.

3. Make sure that no stale DCs are listed under the preferred DCs (commands for checking this are below).
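
If it helps, both DC checks can be done from the filer console (7-mode commands):

    cifs domaininfo      (shows the DCs the filer has discovered and which one it is using)
    cifs prefdc print    (lists the preferred DCs; remove stale entries with "cifs prefdc delete")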

Re: expected performance from CIFS (throughput)?

So far we've enabled SMB2. On XP nothing has changed (expected), but on Windows 7 desktops I've noticed a read performance increase, almost double in some cases.

The next step is to change the TCP window size.

Re: expected performance from CIFS (throughput)?

That's great! Thanks,   -Wei

Re: expected performance from CIFS (throughput)?

This might be of some help. I found it over here:

Checklist for troubleshooting CIFS issues

• Use "sysstat –x 1" to determine how many CIFS ops/s and how much CPU is being utilized

• Check /etc/messages for any abnormal messages, especially for oplock break timeouts

• Use "perfstat" to gather data and analyze (note information from "ifstat", "statit", "cifs stat", and "smb_hist", messages, general cifs info)

• "pktt" may be necessary to determine what is being sent/received over the network

• "sio" should / could be used to determine how fast data can be written/read from the filer

• Client troubleshooting may include review of event logs, ping of filer, test using a different filer or Windows server

• If it is a network issue, check "ifstat -a" and "netstat -in" for any I/O errors or collisions

• If it is a gigabit issue, check whether flow control is set to FULL on the filer and the switch

• On the filer, if one volume is having an issue, do "df" to see if the volume is full

• Do "df –i" to see if the filer is running out of inodes

• From "statit" output, if it is one volume that is having an issue check for disk fragmentation

• Try the "netdiag –dv" command to test filer side duplex mismatch. It is important to find out what the benchmark is and if it’s a reasonable one

• If the problem is poor performance, try a simple file copy using Explorer and compare it with the application's performance. If both are the same, the issue is probably not the application. Rule out client problems and make sure it is tested on multiple clients. If it is an application performance issue, get all the details about:

  ◦ The version of the application
  ◦ What specifics of the application are slow, if any
  ◦ How the application works
  ◦ Is this equally slow while using another Windows server over the network?
  ◦ The recipe for reproducing the problem in a NetApp lab

• If the slowness only happens at certain times of the day, check whether those times coincide with other heavy activity on the filer, like SnapMirror, Snapshots, dumps, etc. If normal file reads/writes are slow:

  ◦ Check for duplex mismatch (both client side and filer side)
  ◦ Check if oplocks are being used (assuming they have been turned off)
  ◦ Check if there is an anti-virus application running on the client. This can cause performance issues, especially when copying multiple small files
  ◦ Check "cifs stat" to see if the Max Multiplex value is near the cifs.max_mpx option value. Common situations where this may need to be increased are when the filer is being used by a Windows Terminal Server or any other kind of server that might have many users opening new connections to the filer
  ◦ Check the value of OpLkBkNoBreakAck in "cifs stat". Non-zero numbers indicate oplock break timeouts, which cause performance problems
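
To make the checklist concrete, a first-pass triage session from the filer console usually looks something like this (all commands are from the list above):

    sysstat -x 1     (CIFS ops/s, CPU, disk utilization; Ctrl-C to stop)
    cifs stat        (Max Multiplex and OpLkBkNoBreakAck counters)
    ifstat -a        (per-interface errors)
    netstat -in      (I/O errors and collisions per interface)
    df               (is the volume full?)
    df -i            (is the filer running out of inodes?)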


Re: expected performance from CIFS (throughput)?

That's a good checklist, but based on what I have seen, CIFS in general just isn't a fast protocol.

Re: expected performance from CIFS (throughput)?

So I just set up a V3140 with ONTAP 8.1P1.

config:

root vol is on its own 2+1 (RAID 5) RG of SAS drives

aggr1 has 5 LUNs from a 20+2 (RAID 6) SAS RG; each LUN is 2 TB (1952 GB on the HDS, based on the NetApp doc)

results:

Client   default   MTU 9000   TCP 64240
XP       1m19s     1m5s       1m10s
XP       1m6s      1m4s       1m14s
XP       1m8s      1m4s       1m5s
W7       48s       42s        40s
W7       42s       41s        40s
W7       48s       40s        42s
W2k8     27s       27s        31s
W2k8     29s       30s        31s
W2k8     28s       26s        27s

Notes from the table: SMB2 enabled; mpx 1124; buf 64k; the test set was 472 files in 20 folders, size on disk 2.39 GB (2,571,423,744 bytes); values are copy times and not exact (+/- 2s); desktops are limited to a 1 Gbit link.

Is this a normal performance expectation?

On a Windows 7 machine with an SSD I was able to get 100 MB/s (so basically maxing out the 1 Gbit/s link of the switch).

On W2k8 with a local RAID 5 array (not sure of the # of disks) I would get about the same.

The peak it ever reached was 130 MB/s, and that didn't last long (this was tested on a single 8 GB file).

Re: expected performance from CIFS (throughput)?

CIFS has a number of problems, but chattiness has to be the biggest issue by far. By using read-aheads and write-behinds along with metadata caching, CIFS performance can be more than quintupled. Here's a performance graph from one network I just happened to have: you'll see NetApp/CIFS sees the highest reductions of any application, and these numbers are probably low, actually. I've seen CIFS reduction of over 96%.
