
Storage Latency - Shifting Bottleneck

NickSousa

Enterprise SSDs deliver around 500+ MB/s, NVMe SSDs are roughly 10x faster, and 3D XPoint is claimed to be up to 1000x faster still.
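
For context, those "10x" and "1000x" factors mix two different dimensions, bandwidth and latency. Here is a rough sketch of the orders of magnitude involved; all figures are ballpark assumptions for illustration, not vendor specs:

```python
# Rough orders of magnitude for device bandwidth and read latency.
# All figures are ballpark assumptions for illustration, not vendor specs.
DEVICES = {
    # name: (sequential bandwidth in MB/s, typical read latency in microseconds)
    "SATA enterprise SSD": (500, 100),
    "NVMe SSD": (5000, 80),
    "3D XPoint SSD": (2500, 10),
}

base_bw, base_lat = DEVICES["SATA enterprise SSD"]
for name, (bw, lat) in DEVICES.items():
    print(f"{name}: ~{bw} MB/s ({bw / base_bw:.0f}x bandwidth), "
          f"~{lat} us ({base_lat / lat:.0f}x lower latency)")
# The headline "1000x" figure refers to the XPoint media itself; a finished
# drive sits behind a controller and a PCIe link, so the gap is smaller.
```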

 

Originally, SANs were designed with hard disk drives in mind, which meant the bottleneck was the spinning disks. With SSDs becoming exponentially faster, where does the bottleneck shift to? The controller's CPU?

 

Also, is the diagram below accurate? How much latency is introduced at each layer of this stack?

 

 

[Diagram: Storage Stack]


AlexDawson

A very detailed question, Nick, and I think you'll find there's no "right" answer to where the bottleneck is now. As a data management vendor that sells SAN/NAS products, we generally wouldn't want to describe any of our products as a bottleneck - it's more a question of where the cost/benefit trade-off puts the throughput ceiling for an implemented configuration.

 

For example, our new AFF A800 system has 100Gbit connectivity, and if you scale it out and have an optimized workload, 300Gbit of data throughput at sub-millisecond latency is within reach. At that point, it reinforces the usual response to questions of "how fast is your system?" with another question - "just how fast do you want it?", or more usually, "how much is making this storage faster worth to your business?"
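
To put 300Gbit into IOPS terms, here's a quick back-of-envelope conversion - the block size and protocol efficiency below are assumptions for illustration, not AFF A800 specifications:

```python
# Rough conversion from line rate to IOPS at a given block size.
# The 300Gbit figure comes from the post above; block size and protocol
# efficiency are assumptions for illustration only.
line_rate_gbit = 300
protocol_efficiency = 0.9   # assumed overhead for framing/headers
block_size_kib = 32         # assumed I/O size

usable_bytes_per_s = line_rate_gbit * 1e9 / 8 * protocol_efficiency
iops = usable_bytes_per_s / (block_size_kib * 1024)
print(f"~{iops:,.0f} IOPS at {block_size_kib} KiB blocks")  # ~1,029,968 IOPS
```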

 

So obviously, with any config, there is a point where no more throughput is possible, and it might be the servers, or it might be the storage, or it might even be the application. As a storage architect, you need to understand roughly where that point will be and, as with your diagram below, where the latency might come from. Access patterns and data volume also relate to latency - for example, you might get 100,000 IOPS at 1ms, or 150,000 IOPS at 2ms, from a generic system.
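
Those IOPS/latency figures are tied together by Little's Law (average outstanding I/Os = IOPS x latency). A minimal sketch using the generic example numbers above:

```python
# Little's Law: average outstanding I/Os = throughput (IOPS) * latency (s).
# The numbers below are the generic examples from the reply, not a specific array.
def outstanding_ios(iops: int, latency_ms: float) -> float:
    return iops * latency_ms / 1000

print(outstanding_ios(100_000, 1.0))  # 100.0 I/Os in flight at 1 ms
print(outstanding_ios(150_000, 2.0))  # 300.0 I/Os in flight at 2 ms
# More concurrency buys throughput, but queueing pushes the latency up.
```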

 

Specifically on your diagram, I'd usually fold 11/12 together, and 4, 5 and 6 too. Realistically, you'd fold 7-10 together for most purposes as well, leaving you needing to understand latency at the VMware host level, the SAN/NAS controller level, and the SAN/NAS backend storage level.

 

Inside VMware, you should be talking microseconds;

between VMware and the SAN/NAS, you're talking milliseconds for most protocols (although our new NVMe-oF protocol has microsecond latency, it isn't meant for VMware);

and then to get data from the SAN/NAS device backend or cache, you're also talking a very small number of milliseconds for flash devices.
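
Treating those three levels as a simple additive latency budget looks roughly like the sketch below - the per-layer numbers are illustrative orders of magnitude, not measurements of any particular system:

```python
# Illustrative end-to-end latency budget for a read that misses host-side caches.
# Per-layer values are rough orders of magnitude, not measured figures.
LATENCY_US = {
    "guest + hypervisor I/O stack": 50,         # microseconds inside the host
    "fabric / protocol (FC, iSCSI, NFS)": 200,  # transport and protocol handling
    "controller + flash backend": 500,          # cache miss served from SSD
}

total_us = sum(LATENCY_US.values())
for layer, us in LATENCY_US.items():
    print(f"{layer:<38} ~{us:>4} us")
print(f"{'end-to-end':<38} ~{total_us:>4} us (~{total_us / 1000:.2f} ms)")
```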

 

Our blog post on the AFF A800 details some very very fast speeds and feeds, for your interest - https://www.netapp.com/us/products/storage-systems/all-flash-array/nvme.aspx

 

Hope this helps!

NickSousa

There are different workloads, but broadly speaking we can probably group them into "latency-focused" (real-time transaction processing) or "throughput-focused" (big data) I/O.

I'm trying to understand where the bottlenecks will be for FC SANs and hyperconverged infrastructure for each of these workloads, now that SSDs are becoming more popular.

FC SAN
For real-time transaction processing using all-flash arrays, will the bottleneck be the controller CPU?
For big data processing, will the bottleneck be the FC HBAs? (At roughly 500 MB/s per drive, it takes only two or three drives to saturate a 10GbE Ethernet link, or around four to saturate a 16Gb/s FC link - see the rough math sketched below.)

Hyperconverged
For real-time transaction processing using all-flash arrays, will the bottleneck be the VSA/in-kernel controller CPU?
For big data processing, will the bottleneck be the 10/40/100 GbE uplinks?
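
One way to sanity-check the link-saturation question for both cases is a rough drives-per-link calculation - the drive and link throughput figures below are assumptions for illustration, ignoring protocol overhead differences:

```python
# Back-of-envelope link saturation: how many SSDs fill a given fabric link?
# Drive and link throughput values are rough assumptions for illustration.
import math

DRIVE_MB_S = {"SATA SSD": 500, "NVMe SSD": 3000}
# Approximate usable payload bandwidth per link, ignoring protocol overheads.
LINK_MB_S = {"10GbE": 1250, "16G FC": 1600, "40GbE": 5000, "100GbE": 12500}

for link, link_bw in LINK_MB_S.items():
    for drive, drive_bw in DRIVE_MB_S.items():
        drives = math.ceil(link_bw / drive_bw)
        print(f"{link}: ~{drives} x {drive} to saturate")
```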
