FAS2040: Anyone with more than 80MB/s CIFS performance?

RAJA_SUBRAMANIAN

I have a FAS2040 filer which is only delivering a maximum of 80 MBytes/s of CIFS throughput per controller.  I used 3 WinXP/Win7 clients simultaneously and benchmarked by using TeraCopy to read/write large files.

The maximum CIFS bandwidth this FAS2040 delivers is 80 MB/s total per controller.  We have Win2k3 file servers in our environment which deliver better performance.  I've gone through the tuning docs and tried nearly every recommended setting; I don't know what else I can tweak.

What is the CIFS performance I can expect from my filer?  Are there any tweaks I can try to deliver higher performance?

Filer config:

  • Dual controller FAS2040 with 24 x 450GB SAS disks.
  • Each controller has 9D+2P+1S with a single 3TB aggr0, vol0 (150GB) for Data ONTAP 8.0.1 7-Mode, and vol1 (2.7TB) for data.
  • There is 90% free space on all volumes, and there are no snapshots, no dedupe, no CIFS auditing, etc.
  • For each controller I have a single VIF with 4 x 1GigE network interfaces; inbound and outbound CIFS traffic is flowing across all interfaces (config attached).
  • Increased CIFS TCP window size and buffers (config attached; a rough sketch of the relevant lines follows this list).
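
For illustration only, the relevant pieces look roughly like this (interface names, VIF name, address and option values are placeholders, not the exact attached config):

    # /etc/rc excerpt (hypothetical names and values)
    ifgrp create multi vif1 -b ip e0a e0b e0c e0d
    ifconfig vif1 192.168.1.10 netmask 255.255.255.0 mtusize 1500
    # larger CIFS TCP window, example value only
    options cifs.tcp_window_size 64240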

Thanks for your suggestions!

shaunjurr

Hi,

Try enabling smb2 via 'options cifs.smb2.enable on' on the filer CLI.  This should help on the win7 clients.
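
For example, on the filer console (the second command just prints the option back so you can confirm it took):

    options cifs.smb2.enable on
    options cifs.smb2.enable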

How far away are your clients?  The window sizes might have to be increased a bit still, depending on distance and expected bandwidth.
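
The option to look at is cifs.tcp_window_size.  For example (the value here is only an illustration; size it to your bandwidth-delay product):

    options cifs.tcp_window_size
    options cifs.tcp_window_size 64240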

You can only get a theoretical maximum of about 125 MB/s over a 1 Gbit/s link (1 Gbit/s ÷ 8 = 125 MB/s, and less in practice once Ethernet/IP/TCP overhead is taken off).  If your clients don't get distributed evenly among the 4 interfaces in your ifgrp, they will end up sharing the bandwidth of a single interface.

Normally, (and as recommended probably hundreds of times in the archives), try running 'sysstat -x 1' on the command line to see if you are pushing your disks to the maximum during operations.
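
For example, while one of the TeraCopy runs is going:

    sysstat -x 1

Watch the Net kB/s and Disk util columns; if Disk util sits near 100%, the spindles are the bottleneck rather than CIFS or the network.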

Did you create the aggregate with all of the disks before you added data, or did you add data before you added the rest of the disks?  You might need to reallocate your data across all disks if the latter is true.
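
If the data did land before all the disks were in the aggregate, something along these lines should smooth it out (if I recall the flags correctly, -o runs the measurement once, -f forces a full pass, and -p does a physical reallocation):

    reallocate on
    reallocate measure -o /vol/vol1
    reallocate start -f -p /vol/vol1
    reallocate status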

RAJA_SUBRAMANIAN

Hi,

Thank you for your response!

shaunjurr wrote:

Try enabling smb2 via 'options cifs.smb2.enable on' on the filer CLI.  This should help on the win7 clients.

Most of my clients are WinXP.  I tried enabling SMB2 on Win7 but I can't see any appreciable performance improvement for single or multiple clients.

shaunjurr wrote:

How far away are your clients?  The window sizes might have to be increased a bit still, depending on distance and expected bandwidth.

For benchmarking, I connected all Windows clients to the same core switch as the filer.  I have a Foundry/Brocade core switch which is wire speed.

shaunjurr wrote:

Normally, (and as recommended probably hundreds of times in the archives), try running 'sysstat -x 1' on the command line to see if you are pushing your disks to the maximum during operations.

Will try this during our maintenance window this weekend and report back 🙂  Thanks.

shaunjurr wrote:

Did you create the aggregate with all of the disks before you added data, or did you add data before you added the rest of the disks?  You might need to reallocate your data across all disks if the latter is true.

Aggregates were created on an empty filer.  No disk config changes have been made post-deployment.

Thanks again for your help.

Darkstar

80 MB/s is already quite a lot for CIFS; you'll have a hard time increasing it much further. NFS and iSCSI can hit 100-110 MB/s on a single link, but I have never seen CIFS come close to that number (even with SMB2).

-Michael

peter_lehmann

Yes, CIFS is not a performance protocol by design, and it is quite possible that you will get better numbers with an MS server.

Creating a multi-mode VIF with 4 interfaces doesn't necessarily mean that you automagically get 4 times the performance of a single 1-gigabit interface.
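
With IP-based load balancing (the default for a multi-mode ifgrp), each client is pinned to one member link by an address hash, so a single client tops out at roughly 1 Gbit/s no matter how many links are in the group.  You can check how evenly your test clients landed with something like this (vif1 being whatever your ifgrp is called):

    ifgrp stat vif1 1

and then compare the per-link In/Out counters.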

Hope this helps,

Peter

tomasz_golebiewski

Well, we have a FAS2020 and a FAS2040.  On the FAS2020 we never reached more than 30-40 MB/s.

On the FAS2040, yes, we were able to write 70 MB/s to it and read 92 MB/s from it over CIFS.
