ONTAP Discussions

Reads faster than writes?

ANDREW_GALLANT
4,223 Views

We have been doing an NFS throughput experiment at work and noticed a trend: our reads are faster than our writes. We are using NFS over 10GbE through a Cisco 5020.

Here is the script that creates five 5GB files.

server:/NFS_to_virt # cat mkfile.sh
for i in 1 2 3 4 5
do
    dd if=/dev/zero of=filename$i bs=1024k count=5120
done

Here is the output of creating the five 5GB files.

server:/NFS_to_virt # ./mkfile.sh
5120+0 records in
5120+0 records out
5368709120 bytes (5.4 GB) copied, 26.7419 seconds, 201 MB/s
5120+0 records in
5120+0 records out
5368709120 bytes (5.4 GB) copied, 24.8181 seconds, 216 MB/s
5120+0 records in
5120+0 records out
5368709120 bytes (5.4 GB) copied, 25.3625 seconds, 212 MB/s
5120+0 records in
5120+0 records out
5368709120 bytes (5.4 GB) copied, 25.2985 seconds, 212 MB/s
5120+0 records in
5120+0 records out
5368709120 bytes (5.4 GB) copied, 25.2001 seconds, 213 MB/s
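One thing worth noting about these dd numbers: without a sync or direct I/O flag, some of each write may still be sitting in the client's page cache when dd reports its rate. A variant of the script that forces each file to be flushed before the rate is printed (conv=fdatasync is a standard GNU dd option; this is just a sketch, not what we actually ran) would look like:

for i in 1 2 3 4 5
do
    # conv=fdatasync makes dd flush the file data to the NFS server before
    # printing its throughput, so the rate reflects the storage path rather
    # than the client page cache
    dd if=/dev/zero of=filename$i bs=1024k count=5120 conv=fdatasync
done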

When I perform the read tests, I get much faster reads than I was able to write. The strange thing is that the higher the read percentage, the faster it goes.

server:/NFS_to_virt # sio_ntap_linux 20 20 64k 1g 60 4 /NFS_to_virt/filename1 /NFS_to_virt/filename2 /NFS_to_virt/filename3 /NFS_to_virt/filename4 /NFS_to_virt/filename5
Version: 3.00
SIO_NTAP:
Inputs
Read %:         20
Random %:       20
Block Size:     65536
File Size:      1073741824
Secs:           60
Threads:        4
File(s):        /NFS_to_virt/filename1 /NFS_to_virt/filename2 /NFS_to_virt/filename3 /NFS_to_virt/filename4 /NFS_to_virt/filename5
Outputs
IOPS:           2012
KB/s:           128755 = 125.7373MB
IOs:            120716
Terminating threads ...Killed

server:/NFS_to_virt # sio_ntap_linux 50 20 64k 1g 60 4 /NFS_to_virt/filename1 /NFS_to_virt/filename2 /NFS_to_virt/filename3 /NFS_to_virt/filename4 /NFS_to_virt/filename5
Version: 3.00
SIO_NTAP:
Inputs
Read %:         50
Random %:       20
Block Size:     65536
File Size:      1073741824
Secs:           60
Threads:        4
File(s):        /NFS_to_virt/filename1 /NFS_to_virt/filename2 /NFS_to_virt/filename3 /NFS_to_virt/filename4 /NFS_to_virt/filename5
Outputs
IOPS:           2532
KB/s:           162019 = 158.22168MB
IOs:            151900
Terminating threads ...Killed

server:/NFS_to_virt # sio_ntap_linux 90 20 64k 1g 60 4 /NFS_to_virt/filename1 /NFS_to_virt/filename2 /NFS_to_virt/filename3 /NFS_to_virt/filename4 /NFS_to_virt/filename5
Version: 3.00
SIO_NTAP:
Inputs
Read %:         90
Random %:       20
Block Size:     65536
File Size:      1073741824
Secs:           60
Threads:        4
File(s):        /NFS_to_virt/filename1 /NFS_to_virt/filename2 /NFS_to_virt/filename3 /NFS_to_virt/filename4 /NFS_to_virt/filename5
Outputs
IOPS:           5731
KB/s:           366752 = 358.15625MB
IOs:            343853
Terminating threads ...Killed

If you look at the last run posted above, you will see that at 90% reads the throughput is far higher than anything we saw on the writes.
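For anyone wanting to repeat the sequential read comparison with plain dd, a minimal sketch that first drops the Linux client's page cache (requires root; filenames taken from the write script above) so the reads actually have to come over the wire:

sync
echo 3 > /proc/sys/vm/drop_caches   # clear the client-side page cache (root only)
for i in 1 2 3 4 5
do
    # read each 5GB file back sequentially and discard the data
    dd if=filename$i of=/dev/null bs=1024k
done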


4 REPLIES

aborzenkov
4,223 Views

For a start, compare a single-threaded test with another single-threaded test. Your sio_ntap run uses 4 concurrent threads, so any comparison with single-threaded dd is pointless.

Also, raw numbers are meaningless without information about your environment (RAM size, mount options, whether you remount the file systems between tests, etc.).
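As a sketch of the kind of environment detail and between-run hygiene being suggested here (the server name and export path in the mount command are placeholders, not taken from the post):

# Record client RAM and the NFS mount options actually in effect
free -m
nfsstat -m

# Start each run with a cold client cache by remounting the export
# ("filer:/vol/NFS_to_virt" is a placeholder for the real export)
umount /NFS_to_virt
mount -t nfs filer:/vol/NFS_to_virt /NFS_to_virt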

chriskranz
4,224 Views

I'd also strongly recommend (pretty much insist) that you compare system stats on the NetApp system at the same time as running any performance tests against it. "sysstat -x 1" or "sysstat -u 1" will give you utilisation stats.

I had a customer who saw 2 completely different performance results on 2 cluster nodes. By comparing the NetApp stats, we realised it was because one node wasn't the active node, so it had more free memory for filesystem cache and was caching the read tests, producing very different results between the 2 systems. This is difficult to identify from the host side, but it is easily seen from the NetApp side, as the disk read and network transmit activity will be very telling! Also compare network transmit with disk read: you'll be able to see whether you are hitting the NetApp read cache or FlashCache if you are repeating any tests or running on deduped data.

But yeah, generic bulk tests are difficult to use as comparison points, as the NetApp system isn't really designed for that sort of traffic. Nor should it be, since that's not what your actual application traffic will look like.
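A simple way to capture those controller-side stats for the duration of a test run (the hostname "filer" is a placeholder, and this assumes ssh access to the ONTAP CLI) is to log sysstat from a second terminal and compare the disk read and net columns afterwards:

# In a second terminal, while sio_ntap_linux is running on the client:
ssh root@filer "sysstat -x 1" | tee sysstat_during_test.log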

shaunjurr
4,223 Views

Hi,

Basically, your file sizes are too small, and it's all just zeros you are reading anyway. The system has read far enough ahead in each file that it is probably caching almost all of your files. If you really want a meaningful test, your data set has to be more random (actual random data in the files) and a good deal larger, and you should reboot the server and the filer between runs. I'm guessing a sysstat -x will show you very little disk activity.

Reads will almost always be faster than writes anyway.
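A sketch of a less cache-friendly data set along these lines: bigger files filled from /dev/urandom rather than /dev/zero (the 20GB size is only an example, and /dev/urandom is slow, so creating the files will take a while):

for i in 1 2 3 4 5
do
    # 20GB of random data per file: too big to sit in cache, and not
    # compressible or dedupable the way a file of zeros is
    dd if=/dev/urandom of=randfile$i bs=1024k count=20480
done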

ANDREW_GALLANT
4,223 Views

I made 5 new files using /dev/urandom, then ran the sio_ntap_linux tests against them, and I am getting the same results: at 90% reads / 10% writes I get 740 MB/s, and at 20% reads / 80% writes I get 240 MB/s.
