My read/write numbers don't make sense to me. When I write a 5 GB file to the NFS share I am averaging 270 MB/s. To write the file I run: dd if=/dev/zero of=filename1 bs=1024k count=5120
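One common reason dd reports rates above line speed is that it times the write into the client page cache rather than onto the wire. A variant that forces the data to be flushed before dd reports its rate (the /tmp path here is just a stand-in; point of= at a file on the NFS mount, and scale count back up for a real test):

```shell
# conv=fdatasync makes dd call fdatasync() before exiting, so the reported
# rate includes flushing the client page cache instead of measuring memory
# bandwidth. (oflag=direct is another option: it bypasses the cache entirely.)
dd if=/dev/zero of=/tmp/nfs_write_test bs=1M count=64 conv=fdatasync
```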
This is where I am confused. The client connects through a FEX at 1 gigabit Ethernet, so dd is telling me I am writing faster than line speed: gigabit Ethernet is 1,000,000,000 bits/s, yet 270 MB/s converts to over 2,100,000,000 bits/s.
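A quick sanity check of the unit conversion, covering both readings of "MB" (decimal megabytes or binary mebibytes):

```shell
# 270 "MB/s" converted to bits per second.
echo $((270 * 1000 * 1000 * 8))   # decimal MB: 2160000000 bits/s
echo $((270 * 1024 * 1024 * 8))   # binary MiB: 2264924160 bits/s
# Either way it is more than double the ~1,000,000,000 bits/s that gigabit
# Ethernet can carry, so the extra has to come from somewhere other than
# the wire (client page cache, or a faster NIC than you think you have).
```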
What am I missing here? How do I get an accurate number for what I am actually writing at?
I was unaware that someone had put a 10G card in my machine, which is why my numbers were higher than they should have been for the transfer rate. I am still confused as to why my reads are around 740 MB/s while writes are at 240 MB/s, but we are working with the NetApp COE to figure that out.
You might want to experiment with smaller write sizes tuned to your Ethernet frame size. I would hope you are using jumbo frames. I'd have to look at some earlier work of mine to see how such tests went in an "earlier life".
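A quick way to check whether jumbo frames are actually in effect on the client side (eth0 is a hypothetical interface name; substitute the NIC that faces the filer):

```shell
# An MTU of 9000 means jumbo frames are configured; 1500 is the standard
# Ethernet default. IFACE defaults to eth0 but can be overridden.
IFACE=${IFACE:-eth0}
cat /sys/class/net/"$IFACE"/mtu
# To verify jumbo frames survive the whole path to the filer, ping with
# don't-fragment set and a payload just under the jumbo MTU
# (8972 = 9000 minus 28 bytes of IP + ICMP headers):
#   ping -M do -s 8972 <filer-ip>
```

Note that every hop between client and filer (switch ports, FEX, filer NIC) must carry the larger MTU, or frames get dropped silently.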
I guess I would also watch sysstat running on the filer during your test, and see if you can get some useful stats out of 'nfsstat -h'. Zero the stats between runs with '-z'.
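A possible run sequence, sketched against the 7-Mode ONTAP CLI (exact flags vary by release, so check your version's man pages):

```
filer> nfsstat -z                   # zero the NFS counters before the run
client$ dd if=/dev/zero of=/mnt/nfs/testfile bs=1024k count=5120
filer> nfsstat -h                   # per-client NFS stats since the zero
filer> sysstat -x 1                 # CPU, disk, and network utilization at 1s intervals
```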
The only other thing would be to try a number of hosts. If sysstat shows you aren't maxing out your disks, it would be good to see how many clients and how much traffic you can push before you saturate the filer. That would eliminate the filer as a problem area and leave you to tune the Linux NFS and TCP subsystems.
Turning on 'no_atime_update' on the filer volume would probably get you a bit more performance, if you haven't done so already.
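For reference, in 7-Mode that's a volume option (the volume name here is a placeholder); it stops the filer updating access times on every read:

```
filer> vol options <volname> no_atime_update on
```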
Is this ONTAP 8.x or 7.x?
I know I'm not much help, but tuning interests me a bit.