Network and Storage Protocols

Understanding transfer rates.

ANDREW_GALLANT

Here is my setup. Using NFS, I have a 3170 connected to a Nexus 5020 via 10G. My client is connected via Ethernet to a FEX, which is connected to the 5020 via fiber.

3170 ---10G--- 5020 ---10G--- FEX ---1G Ethernet--- client

My read/write numbers don’t make sense to me. When I write a 5G file to the NFS share, I am getting an average of 270MB/s. To write the file I am running ‘dd if=/dev/zero of=filename1 bs=1024k count=5120’.

This is where I am confused. Since the client connects to the FEX at 1-gigabit Ethernet, that tells me I am writing faster than line speed. Gigabit Ethernet is 1,000,000,000 bits/s, and when I convert the 270MB/s to bits I am getting 2,122,317,824 bits/s.
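Working it through myself (I'm not sure whether dd is reporting decimal MB or binary MiB, but either reading is more than double the 1 Gbit/s line rate):

# 270 MiB/s expressed in bits per second (binary interpretation)
echo $(( 270 * 1024 * 1024 * 8 ))    # 2264924160
# 270 MB/s expressed in bits per second (decimal interpretation)
echo $(( 270 * 1000 * 1000 * 8 ))    # 2160000000
# Gigabit Ethernet line rate for comparison: 1000000000 bits/s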

What am I missing here? How do I get an accurate number for what I am actually writing at?

6 REPLIES

shaunjurr

Hi,

What is telling you that you are writing at 270MB/s?  How do you arrive at that number?

'dd' is probably not the best tool in the world for testing I/O, either. IIRC, you are using Linux, and it might just be lying to you.

You would probably get more for your time by reading up on recommended NFS mount settings for Linux.

aborzenkov

I believe I asked already: how much memory does your client have, and what are the NFS mount options?

ANDREW_GALLANT
root@server:/root > free -m
             total       used       free     shared    buffers     cached
Mem:         24102       5944      18158          0        176       5462
-/+ buffers/cache:        305      23797
Swap:         8197          0       8197
root@server:/root >
From /etc/fstab:
10.130.36.239:/vol/fit_ENG_perftest1 /NFS_to_phys               nfs     rw,bg,hard,intr,rsize=65536,wsize=65536,tcp,vers=3,timeo=600    0 0

aborzenkov

Well … with 24G of memory and no direct I/O, you are basically writing to the server's cache.
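A quick way to prove that to yourself (this is a sketch assuming GNU dd on a Linux client):

# oflag=direct bypasses the client page cache, so the rate reflects the wire;
# conv=fsync instead flushes everything before dd reports its figure.
dd if=/dev/zero of=filename1 bs=1024k count=5120 oflag=direct
dd if=/dev/zero of=filename1 bs=1024k count=5120 conv=fsync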

ANDREW_GALLANT

I was unaware that someone had put a 10G card in my machine; that is why my numbers were higher than they should have been for the transfer rate. I am still confused as to why my reads are around 740MB/s and writes are at 240MB/s, but we are working with the NetApp COE to figure that out.
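One thing I plan to rule out on the read side (assuming the standard Linux drop_caches knob on my client): a re-read can be served from my own 24G of RAM rather than from the filer.

# Flush dirty pages and drop the client's caches so the read hits the network
sync
echo 3 > /proc/sys/vm/drop_caches
dd if=filename1 of=/dev/null bs=1024k              # cold-cache buffered read
dd if=filename1 of=/dev/null bs=1024k iflag=direct # or bypass the cache entirely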

shaunjurr

Hi,

You might want to experiment with smaller write sizes more tuned to your Ethernet frame sizes. I would hope you are using jumbo frames. I'd have to look at some earlier work of mine to see how such tests went in an "earlier life".
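Something like this on the client to check (eth0 is a placeholder interface name, and jumbo frames have to be enabled end to end on the FEX, the 5020, and the filer port as well):

# Show the current MTU on the client NIC
ip link show eth0
# Bump it to 9000-byte jumbo frames; the whole path must match
ip link set eth0 mtu 9000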

I guess I would also watch sysstat running during your test and see if you can get some useful stats out of 'nfsstat -h'. Zero the stats between runs with '-z'.
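Roughly this on the filer console (a sketch; the exact flags vary a little between ONTAP releases, and per-client stats may need the nfs.per_client_stats.enable option turned on):

# Zero the NFS counters before a run
nfsstat -z
# Watch CPU, network, and disk utilization while dd is running
sysstat -x 1
# Pull the per-client breakdown afterwards
nfsstat -h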

The only other thing would be to try a number of hosts. If sysstat is showing that you aren't maxing out your disks, it would be good to see how many clients and how much traffic you can push before you saturate the filer. That would eliminate the filer as a problem area and leave you to tune the Linux NFS and TCP subsystems.
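A crude version of that test (hypothetical hostnames, and it assumes passwordless ssh to each client; the mount point is the one from your fstab):

# Fire the same write test from several clients at once
for host in client1 client2 client3; do
    ssh "$host" "dd if=/dev/zero of=/NFS_to_phys/test_\$HOSTNAME bs=1024k count=5120 oflag=direct" &
done
wait    # let every client finish before reading sysstat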

Turning on 'no_atime_update' on the filer volume would probably get you a bit more performance already, if you haven't done so. 
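On a 7-mode filer that's a one-liner (volume name taken from your fstab entry above):

# Stop atime updates on every read of the test volume
vol options fit_ENG_perftest1 no_atime_update on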

Is this ONTAP 8.x or 7.x?

I know I'm not much help, but tuning interests me a bit.
