
NFS tuning for 1G/sec load


Hi All,

We are running a FAS3040 with ONTAP (NFS license only).

One of our volumes has a write throughput of 90-100 MB per second. We are also using a multimode VIF for 2 Gbit/s of network bandwidth, plus a PAM card.

This volume has the volume option no_atime_update set to off, and the global option nfs.tcp.recvwindowsize is set to 262144.
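For reference, these settings can be checked from the ONTAP CLI. A sketch assuming 7-mode syntax; "vol1" is a hypothetical volume name:

```shell
# Show per-volume options, including no_atime_update ("vol1" is illustrative)
vol options vol1

# Show the global NFS TCP receive window size
options nfs.tcp.recvwindowsize
```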

The NFS clients run Solaris 10 and mount using NFSv3 (TCP).

Is there anything else we can do to tune NFS performance? Also, does ONTAP support the Nagle algorithm in its TCP stack?





A few points to check and settings to adjust. Some of them are general in nature, but I thought I'd throw them out as I feel they are applicable:

  • Look into the rsize and wsize options on the NFS client and set them to match the workload closely; maybe 32k?
  • Look into using the NFS client option actimeo to cache file attributes. Note: this is NOT identical to noac. The "noac" option will not only disable client-side file attribute caching but will also disable write caching, which will tank your performance, not increase it.
  • Increase the number of NFS client threads on the Solaris box. By default, I believe it is 8. (It's an /etc/system setting, which requires a reboot.)
  • Has the aggregate been sized appropriately, with an adequate number of drives? You may want to engage NetApp or your partner SE to make sure that it is.
  • For those who are unaware, the Nagle algorithm is applied to outgoing packets and is a method of "pooling" small packets and sending them when appropriate, instead of emitting lots of packets with too little payload each. I believe ONTAP does implement the Nagle algorithm for TCP small-packet optimization.
  • If all the components in your data path support it, look into turning jumbo frames on.
  • Make changes ONE at a time and baseline the performance.
  • Check the network (and then check it again). Nothing against my esteemed colleagues from the networking world, but settings like flow control, portfast, etc. all matter when one is trying to tune a system for performance.
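To make the rsize/wsize and actimeo points above concrete, here is a hedged Solaris 10 mount example; the filer, volume, and mount-point names are hypothetical, and the values are starting points to tune against your own workload:

```shell
# NFSv3 over TCP with 32 KB transfer sizes and a 60-second
# attribute-cache timeout (all names and values are illustrative)
mount -F nfs -o vers=3,proto=tcp,rsize=32768,wsize=32768,actimeo=60 \
    filer01:/vol/vol1 /mnt/vol1
```

The same options go in /etc/vfstab if you want the mount to persist across reboots.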
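On the client-thread point, the Solaris NFSv3 client's async thread limit is set in /etc/system. A sketch; 32 is an arbitrary example value, and a reboot is required for it to take effect:

```shell
# Append to /etc/system, then reboot.
# nfs3_max_threads controls async threads per NFSv3 mount (default 8).
echo 'set nfs:nfs3_max_threads = 32' >> /etc/system
```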
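On the Nagle point, the Solaris client side can be inspected (and effectively tuned) through the tcp_naglim_def tunable: it is the payload size below which writes are coalesced, so setting it to 1 means every write is "large enough" to send immediately. A sketch, assuming stock Solaris 10 ndd:

```shell
# Current Nagle limit (default 4095: coalesce writes smaller than this)
ndd -get /dev/tcp tcp_naglim_def

# Effectively disable Nagle by sending as soon as 1 byte is queued
# (not persistent across reboots; baseline before and after changing it)
ndd -set /dev/tcp tcp_naglim_def 1
```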
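And for jumbo frames, both endpoints plus every switch port in between must carry the larger MTU, or you will see fragmentation or black-holed packets. A sketch; the interface and vif names are hypothetical:

```shell
# Solaris client side (e1000g0 is illustrative)
ifconfig e1000g0 mtu 9000

# ONTAP side, on the vif (7-mode syntax; vif1 is illustrative)
ifconfig vif1 mtusize 9000
```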


HTH - and I'd be interested in the progress!