Network and Storage Protocols

Write and Read Latency Issues

j_garrido
20,453 Views

Hi,

I need to understand an issue with latency. For us, it is very important to reduce read latency. We have an ESXi host (NFS datastores).

Iometer is running 66% reads and 34% writes. The aggregate has 11 SAS disks (9D+2P). Why is my read latency higher than my write latency? Only one ESXi host and one VM are running, just to run a test with Iometer; there is no other activity on the filer.
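For context on raw spindle capability (my own rough numbers, not from this thread): a 9D+2P aggregate has 9 data spindles, and a 10k/15k RPM SAS disk sustains very roughly 175 random read IOPS, so queueing drives read latency up quickly as a random-read test pushes the disks toward saturation. A minimal sketch, assuming those figures and a simple M/M/1 queue:

```python
# Back-of-the-envelope model of random-read latency on a small SAS
# aggregate.  All numbers are assumptions for illustration, not
# measurements from this thread.

DATA_DISKS = 9          # 9D+2P aggregate -> 9 data spindles
IOPS_PER_DISK = 175     # rough figure for a 10k/15k RPM SAS disk
SERVICE_MS = 1000.0 / IOPS_PER_DISK   # ms per random read

def read_latency_ms(offered_iops: float) -> float:
    """Approximate read latency with a simple M/M/1 queueing model:
    latency = service_time / (1 - utilization)."""
    capacity = DATA_DISKS * IOPS_PER_DISK        # ~1575 IOPS total
    utilization = offered_iops / capacity
    if utilization >= 1.0:
        return float("inf")                      # disks saturated
    return SERVICE_MS / (1.0 - utilization)

for iops in (400, 800, 1200, 1500):
    print(f"{iops:>5} read IOPS -> ~{read_latency_ms(iops):.1f} ms")
```

This ignores caching, striping, and read-ahead; it only illustrates why random reads on a handful of spindles get slow well before the disks are 100% busy.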

CPU   NFS  CIFS  HTTP  Total   Net kB/s      Disk kB/s     Tape kB/s   Cache  Cache  CP    CP  Disk   OTHER  FCP  iSCSI   FCP kB/s   iSCSI kB/s
                                in    out     read  write   read write   age    hit  time  ty  util                        in   out    in   out
26%   1131  01601   16222  23740   16924  69632   0 3 96%   71%   :   90%   10  0460   02118760
14%   1618  01732   18706  37584   13196 24   0 3 81%    0%   -   74%   0114   01014   1353
15%   1634  01833   19552  37102   14588  0   0 3 82%    0%   -   78%   0199   0 593   1652
14%   1711  01815   20223  38218   12676  0   0 3 83%    0%   -   72%   0103   0 716   1209
15%   1736  01945   20600  38134   13184 24   0 3 83%    0%   -   73%   0209   01033   1986
28%   1160  01357   14460  25949   13420  50676   0 3 94%   76%   Hf  88%   0194   0 552   1924
18%   1550  01807   18867  35195   14636  45808   0 3 91%  100%   :f  86%   0257   0 774   2389
17%   1455  01571   17128  32754   11800  44828   0 3 93%  100%   :f  80%   0116   0 115   1549
15%   1640  01750   19400  36122   14008  25108   0 3 85%   64%   :   83%   0110   0 558   1044
16%   1747  01936   22271  36834   11900  0   0 3 84%    0%   -   78%   0189   0 955605
15%   1728  01856   20421  37963   12592 32   0 3 83%    0%   -   77%   0125   0 585594
34%    892  01092   12045  19955   12572 163888   0 4 97%   86%   Hf  91%   0200   0 980   1263
18%   1551  01691   18931  33572   12568   5404   0 4 88%   21%   :   80%   0140   0 501566
14%   1613  01746   18875  37702   13732 24   0 4 82%    0%   -   79%   0133   0 504   2666

Thanks in advance.


9 REPLIES

bsti

Is your question simply "why is my read latency higher than my write latency"?  In that case, that's pretty normal for WAFL.  WAFL is write-optimized, meaning your application gets an ack back from WAFL once it sends the write to the controller and it's written to memory (and journaled to NVRAM).  That's pretty fast, and doesn't yet involve disks.  On your next CP, the data is flushed to disk.

Reads may be resident in memory (hence your cache hit rate), but often they have to come from disk.
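That split can be put into a one-line model: average read latency is a weighted mix of fast cache hits and slow disk reads. The latency figures below are my own illustrative assumptions, not measurements:

```python
# Illustrative model: average read latency as a mix of cache hits
# (served from controller memory) and misses (served from disk).
# Both latency figures are assumptions, not measurements.

CACHE_HIT_MS = 0.2   # assumed latency for a read served from memory
DISK_MS = 10.0       # assumed latency for a random read from SAS disk

def avg_read_ms(hit_rate: float) -> float:
    """Weighted average of cache-hit and disk-read latency."""
    return hit_rate * CACHE_HIT_MS + (1.0 - hit_rate) * DISK_MS

for hit in (0.80, 0.85, 0.95, 0.99):
    print(f"hit rate {hit:.0%} -> ~{avg_read_ms(hit):.2f} ms")
```

Even at a high hit rate, the misses dominate the average: in this model, going from 80% to 95% hits cuts the average latency by more than half, which is why the cache-hit column in sysstat matters so much for reads.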

This document, though a bit old, does explain this in a bit more detail:

http://www.netapp.com/templates/mediaView?m=wp_3002.pdf&cc=us&wid=15141511&mid=15141511

HENRYPAN2

Cool bsti :>)

My VDI users need fast reads...

Where could I find more recent docs that could help make my VDI users happier on the slow FAS3250?

Thanks & Happy Holiday

Henry

bsti

Is 8ms read on NFS too slow?  What performance are you expecting for that workload?

HENRYPAN2

Ha-ha bsti,

My Christmas wish is to reduce the NFS read latency down to 1 ms :>)

What action could be taken to make my wish come true?

Cheers

Henry

bsti

I'd definitely have a conversation with your NetApp Technical account team and see what they recommend.  A few things to consider:

I believe IOMeter tends to read/write randomized data, so controller caching is less effective.  This means the 8 ms response time you are seeing during your IOMeter tests may be worse than what you will see in production.

I'd ask about Flash Cache if read performance is a big issue for you.  Flash Cache added to your controllers will give you a second, very fast layer of read-only cache.  It should improve overall read performance in most cases.
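To see why a second cache tier helps, here is a sketch of a two-tier read-latency model (tier latencies and hit rates are assumptions for illustration, not NetApp specs):

```python
# Two-tier read cache model: controller RAM first, then Flash Cache,
# then disk.  All hit rates and latencies are illustrative assumptions.

RAM_MS, FLASH_MS, DISK_MS = 0.2, 1.0, 10.0

def avg_read_ms(ram_hit: float, flash_hit: float) -> float:
    """flash_hit is the fraction of RAM misses absorbed by Flash Cache."""
    miss = 1.0 - ram_hit
    return (ram_hit * ram_hit * 0 + ram_hit * RAM_MS
            + miss * flash_hit * FLASH_MS
            + miss * (1.0 - flash_hit) * DISK_MS)

print(f"no Flash Cache:   ~{avg_read_ms(0.80, 0.0):.2f} ms")
print(f"with Flash Cache: ~{avg_read_ms(0.80, 0.7):.2f} ms")
```

Flash Cache doesn't speed up the reads that already hit RAM; it converts a fraction of the expensive disk misses into roughly millisecond flash hits, which is where the average improves.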

Less than 1 ms?!?  SSD... maybe.  Either pure SSD aggregates (expensive!) or Flash Pool aggregates with SSD on the front end and SAS on the back end.  This is very high-end stuff.

FAS3250s aren't slow machines!  They have lots of processing power.  The larger units have more memory and expansion, and perhaps more processing power, but not much.  You won't gain performance from a larger head unless you are stressing the 3250 performance-wise in the first place.

Hope that helps!

j_garrido

Thank you for the extensive help.

HENRYPAN2

Good Idea bsti,

I'll chase my NetApp Technical account team & see what they recommend/offer :>)

Happy Friday

Henry

j_garrido

I just want to understand the high read latency, as opposed to the low write latency.

bsti

Sorry, I got you confused with the other poster. 

I think the document on WAFL will explain why write latency is so low.  On a read, you are hitting disk, which takes time to serve up to the client.  Writes don't usually have to wait for disk (unless you are in a back-to-back CP situation, which is very bad).  Once it hits memory, your client gets an acknowledgement from the controller.

I've also always found it strange that NetApp tends to write faster than it reads, which is not what one traditionally sees in storage hardware.
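The write path described above can be sketched as a toy model: a write is acknowledged as soon as it is journaled to NVRAM, and only stalls if the journal fills before the previous CP has finished flushing (a back-to-back CP). The journal size and timings here are made-up assumptions:

```python
# Toy model of WAFL-style write acknowledgement: a write is acked once
# journaled to NVRAM; a consistency point (CP) flushes the journal to
# disk in the background.  Sizes and timings are made-up assumptions.

NVRAM_SLOTS = 100        # assumed journal capacity, in writes
ACK_MS = 0.3             # assumed ack latency when NVRAM has room
FLUSH_MS = 50.0          # assumed extra wait when the journal is full

class ToyWafl:
    def __init__(self) -> None:
        self.journaled = 0

    def write(self) -> float:
        """Return the latency (ms) the client sees for one write."""
        if self.journaled < NVRAM_SLOTS:
            self.journaled += 1
            return ACK_MS                 # acked from memory: fast
        # Back-to-back CP: journal full, this write waits for the flush.
        self.journaled = 0                # the CP empties the journal
        return FLUSH_MS + ACK_MS

wafl = ToyWafl()
latencies = [wafl.write() for _ in range(NVRAM_SLOTS + 1)]
print(f"typical write: {latencies[0]} ms, worst write: {max(latencies)} ms")
```

In this model almost every write returns at memory speed, and only the rare write that lands on a full journal pays disk-flush latency; that is the asymmetry with reads, which pay the disk penalty on every cache miss.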
