I need to understand an issue with latency. For us, it is very important to reduce read latency. We have an ESXi host (NFS datastores).
Iometer is configured for 66% reads and 34% writes. The aggregate has 11 SAS disks (9D+2P). Why is my read latency higher than my write latency? Only one ESXi host and one VM are running, just to run a test with Iometer; there is no other activity on the filer.
Is your question simply "why is my read latency higher than my write latency"? In that case, that's pretty normal for WAFL. WAFL is write-optimized, meaning your application gets an ack back from WAFL once it sends the write to the controller and it's written to memory (and journaled to NVRAM). That's pretty fast, and doesn't yet involve disks. On your next consistency point (CP), the data is flushed to disk.
Reads may be resident in memory (hence your cache hit rate), but they often have to come from disk.
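That asymmetry can be put into rough numbers with a toy model. All of the latency figures below (a sub-millisecond NVRAM ack, a fast cache read, an 8 ms spindle read) are illustrative assumptions, not measured or NetApp-specified values:

```python
# Toy model of the WAFL latency asymmetry: writes are acknowledged once
# journaled to memory/NVRAM, while reads that miss the cache go to disk.
# All latency figures are illustrative assumptions.

T_NVRAM_ACK = 0.3   # ms: write acked after journaling to memory/NVRAM
T_MEM_READ = 0.2    # ms: read served from controller cache
T_DISK_READ = 8.0   # ms: read served from a SAS spindle

def avg_read_latency(cache_hit_rate: float) -> float:
    """Expected read latency for a given cache hit rate."""
    return cache_hit_rate * T_MEM_READ + (1 - cache_hit_rate) * T_DISK_READ

def avg_write_latency() -> float:
    """Writes ack from NVRAM regardless of disk speed (outside back-to-back CPs)."""
    return T_NVRAM_ACK

if __name__ == "__main__":
    for hit in (0.2, 0.5, 0.9):
        print(f"hit rate {hit:.0%}: read {avg_read_latency(hit):.2f} ms, "
              f"write {avg_write_latency():.2f} ms")
```

Even at a 90% cache hit rate, the expected read latency in this sketch stays above the write ack latency, because the disk term dominates the average.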
This document, though a bit old, does explain this in a bit more detail:
I'd definitely have a conversation with your NetApp technical account team and see what they recommend. A few things to consider:
I believe Iometer tends to read and write randomized data, so controller caching is less effective. This means the 8 ms response time you are seeing during your Iometer tests may be a worst case compared to what you will see in production.
I'd ask about Flash Cache if read performance is a large issue for you. Flash Cache added to your controllers will give you a second, very fast layer of read-only cache. It should improve overall read performance in most cases.
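The effect of a second read-cache tier can be sketched by extending the same kind of toy model: reads that miss RAM are checked against flash before falling through to disk. The hit rates and the 1 ms flash latency here are assumptions for illustration, not Flash Cache specifications:

```python
# Toy model: effective read latency with and without a second cache tier.
# Reads that miss RAM are checked against flash before going to disk.
# All latencies and hit rates are illustrative assumptions.

T_MEM, T_FLASH, T_DISK = 0.2, 1.0, 8.0  # ms

def read_latency(ram_hit: float, flash_hit: float = 0.0) -> float:
    """flash_hit is the hit rate among reads that already missed RAM."""
    miss_ram = 1 - ram_hit
    return (ram_hit * T_MEM
            + miss_ram * flash_hit * T_FLASH
            + miss_ram * (1 - flash_hit) * T_DISK)

if __name__ == "__main__":
    print(f"RAM only:    {read_latency(0.3):.2f} ms")
    print(f"RAM + flash: {read_latency(0.3, flash_hit=0.7):.2f} ms")
```

The flash tier helps precisely because it converts a fraction of slow disk reads into fast-ish flash reads; how big that fraction is depends on your working set, which is why a random-data Iometer run understates the benefit.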
Less than 1 ms?!? SSD... maybe. Either pure SSD aggregates (expensive!!) or Flash Pool aggregates with SSD on the front end and SAS on the back end. This is very high-end stuff.
FAS 3250s aren't slow machines! They have lots of processing power. The larger units have more memory and expansion, and perhaps more processing power, but not much. You won't gain performance from a larger head unless you are stressing the 3250 performance-wise in the first place.
I think the document on WAFL will explain why write latency is so low. On a read, you are hitting disk, which takes time to serve up to the client. Writes don't usually have to wait for disk (unless you are in a back-to-back CP situation, which is very bad). Once it hits memory, your client gets an acknowledgement from the controller.
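The back-to-back CP case can be sketched too. Roughly: while writes arrive more slowly than a CP can flush them to disk, acks come at memory speed; once incoming writes outpace the flush, acks start waiting on the flush. This is a crude steady-state sketch with made-up rates, not NetApp's actual CP logic:

```python
# Simplified sketch of NVRAM double-buffering under write load.
# One half journals incoming writes while the other is flushed to disk
# at a consistency point (CP). If the incoming half fills before the
# flush finishes, writes must wait: a "back-to-back CP" situation.
# All rates and latencies are illustrative assumptions.

def write_ack_latency(write_mb_s: float, flush_mb_s: float,
                      fast_ack_ms: float = 0.3) -> float:
    """Per-write ack latency under a crude steady-state model.

    While writes arrive slower than the flush drains, acks come at
    memory speed; beyond that, acks are throttled by the flush rate.
    """
    if write_mb_s <= flush_mb_s:
        return fast_ack_ms
    # Back-to-back CPs: latency scales with how far demand exceeds drain.
    return fast_ack_ms * (write_mb_s / flush_mb_s)

if __name__ == "__main__":
    print(write_ack_latency(100, 400))  # healthy: acks at memory speed
    print(write_ack_latency(800, 400))  # back-to-back: acks slow down
```

This is also why back-to-back CPs are "very bad": the moment you cross the flush rate, write latency stops being decoupled from disk speed, which is the whole advantage WAFL's write path normally gives you.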
I've also always found it strange to get used to: NetApp tends to write faster than it reads, which is not what one traditionally sees in storage hardware.