Network and Storage Protocols
Hi,
I need to understand an issue with latency. Reducing read latency is very important for us. We have an ESXi host using NFS datastores.
I am running Iometer with 66% reads and 34% writes against an aggregate of 11 SAS disks (9D+2P). Why is my read latency higher than my write latency? Only one ESXi host and one VM are running, purely for this Iometer test; there is no other activity on the filer. Here is the sysstat output from the filer during the test:
 CPU    NFS  CIFS  HTTP  Total    Net kB/s      Disk kB/s    Tape kB/s  Cache Cache    CP  CP Disk OTHER  FCP iSCSI  FCP kB/s  iSCSI kB/s
                                   in    out    read   write read write   age   hit  time  ty util                    in  out    in   out
 26%   1131     0     0   1601  16222  23740  16924   69632    0     0     3   96%   71%   :  90%    10    0   460    0    0  2118   760
 14%   1618     0     0   1732  18706  37584  13196      24    0     0     3   81%    0%   -  74%     0    0   114    0    0  1014  1353
 15%   1634     0     0   1833  19552  37102  14588       0    0     0     3   82%    0%   -  78%     0    0   199    0    0   593  1652
 14%   1711     0     0   1815  20223  38218  12676       0    0     0     3   83%    0%   -  72%     1    0   103    0    0   716  1209
 15%   1736     0     0   1945  20600  38134  13184      24    0     0     3   83%    0%   -  73%     0    0   209    0    0  1033  1986
 28%   1160     0     0   1357  14460  25949  13420   50676    0     0     3   94%   76%  Hf  88%     3    0   194    0    0   552  1924
 18%   1550     0     0   1807  18867  35195  14636   45808    0     0     3   91%  100%  :f  86%     0    0   257    0    0   774  2389
 17%   1455     0     0   1571  17128  32754  11800   44828    0     0     3   93%  100%  :f  80%     0    0   116    0    0   115  1549
 15%   1640     0     0   1750  19400  36122  14008   25108    0     0     3   85%   64%   :  83%     0    0   110    0    0   558  1044
 16%   1747     0     0   1936  22271  36834  11900       0    0     0     3   84%    0%   -  78%     0    0   189    0    0   955   605
 15%   1728     0     0   1856  20421  37963  12592      32    0     0     3   83%    0%   -  77%     3    0   125    0    0   585   594
 34%    892     0     0   1092  12045  19955  12572  163888    0     0     4   97%   86%  Hf  91%     0    0   200    0    0   980  1263
 18%   1551     0     0   1691  18931  33572  12568    5404    0     0     4   88%   21%   :  80%     0    0   140    0    0   501   566
 14%   1613     0     0   1746  18875  37702  13732      24    0     0     4   82%    0%   -  79%     0    0   133    0    0   504  2666
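(As a sanity check on the mix: averaging the Net kB/s columns above gives roughly 33,600 kB/s out versus 18,400 kB/s in, i.e. about 65% of the traffic is reads by byte count, which matches the 66/34 Iometer profile.)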
Thanks in advance.
Is your question simply "why is my read latency higher than my write latency"? If so, that's pretty normal for WAFL. WAFL is write-optimized: your application gets an ack back as soon as the write reaches the controller and is committed to memory (and journaled to NVRAM). That's fast, and doesn't yet involve the disks. On the next consistency point (CP), the data is flushed to disk.
Reads may be resident in memory (hence your cache hit rate), but often they have to come from disk.
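As a back-of-the-envelope model (the numbers below are purely illustrative, not measurements from your system):

  read_latency  ≈ hit_rate × t_memory + (1 − hit_rate) × t_disk
                ≈ 0.85 × 0.2 ms + 0.15 × 8 ms ≈ 1.4 ms   (single random read, no queuing)
  write_latency ≈ t_nvram ≈ 0.5 ms                       (ack as soon as NVRAM has journaled it)

Even a modest miss rate ties reads to disk seek time, and queuing on a busy 9D+2P raid group pushes the effective t_disk well beyond a single seek. That's how an average read can climb to several milliseconds while writes stay low.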
This document, though a bit old, explains this in more detail:
http://www.netapp.com/templates/mediaView?m=wp_3002.pdf&cc=us&wid=15141511&mid=15141511
Cool bsti :>)
My VDI users need fast reads...
Where could I find more recent docs that could help make my VDI users happier on the slow FAS3250?
Thanks & Happy Holiday
Henry
Is 8ms read on NFS too slow? What performance are you expecting for that workload?
Ha-ha bsti,
My Christmas wish is to reduce the NFS read latency down to 1 ms :>)
What action could be taken to make my wish come true?
Cheers
Henry
I'd definitely have a conversation with your NetApp technical account team and see what they recommend. A few things to consider:
I believe IOMeter tends to read and write randomized data, so controller caching is less effective. This means the 8 ms response time you are seeing during your IOMeter tests may be a worst case compared to what you will see in production.
I'd ask about Flash Cache if read performance is a big issue for you. Flash Cache added to your controllers gives you a second, very fast layer of read-only cache. It should improve overall read performance in most cases.
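If you want to estimate the benefit before buying the hardware, Data ONTAP can emulate a Flash Cache in software with Predictive Cache Statistics and report the hit rate a real card would have seen. From memory (the option names below are how I recall them, so verify against the PCS documentation for your release):

filer> options flexscale.enable pcs        # emulate a Flash Cache in software, no card installed
filer> options flexscale.pcs_size 256GB    # size of the emulated cache
  ... run your normal workload for a while ...
filer> stats show -p flexscale-access      # hit/miss counters for the emulated cache

If the emulated hit rate comes back high, a real Flash Cache should take a good share of those reads off the spindles.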
Less than 1 ms?!? SSD... maybe. Either pure SSD aggregates (expensive!) or Flash Pool aggregates with SSD on the front end and SAS on the back end. This is very high-end stuff.
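For what it's worth, on 7-Mode a Flash Pool is built by marking an existing aggregate as hybrid and then adding an SSD raid group to it. Roughly like this (syntax from memory and release-dependent, so treat it as a sketch and check the Flash Pool guide before running anything):

filer> aggr options aggr1 hybrid_enabled on   # allow SSDs to be added to this SAS aggregate
filer> aggr add aggr1 -T SSD 4                # add 4 SSDs as the caching raid group

The SSDs then serve as a cache for hot random reads (and overwrites) in front of the SAS disks, rather than adding usable capacity.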
FAS3250s aren't slow machines! They have lots of processing power. The larger units have more memory and expansion slots, and perhaps more processing power, but not by much. You won't gain performance from a larger head unless you are already stressing the 3250 in the first place.
Hope that helps!
Thank you for the extensive help.
Good idea, bsti,
I'll chase my NetApp technical account team and see what they recommend/offer :>)
Happy Friday
Henry
I just want to understand why the read latency is so high while the write latency is so low.
Sorry, I got you confused with the other poster.
I think the document on WAFL will explain why write latency is so low. On a read, you are often hitting disk, which takes time to serve up to the client. Writes don't usually have to wait for disk (unless you are in a back-to-back CP situation, which is very bad). Once the write hits memory, your client gets an acknowledgement from the controller.
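As a side note, you can check for back-to-back CPs in the CP ty column of the sysstat output you posted. From memory (double-check the sysstat man page for your release), the common codes are:

  -    no CP active during the interval
  T    CP triggered by the timer
  F    CP triggered by the NVLog filling up
  H    CP triggered by the high water mark (the Hf rows in your output)
  B    back-to-back CPs - the bad one
  :    CP still running from the previous interval

Your capture shows high-water-mark CPs and continuations but no B, so your write path looks healthy.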
I've always found it strange to get used to as well: NetApp tends to write faster than it reads, which is not what one traditionally sees in storage hardware.