By Keith Aasen, Solutions Architect, NetApp
This fall, NetApp completely refreshed its FAS hardware line. Although some might argue that proprietary hardware is a thing of the past, I would argue that plenty of innovation remains in how hardware systems are assembled from industry-standard components. A prime example is the addition of Non-Volatile Memory Express (NVMe) devices on the newly released storage controllers. I thought I would take a techie deep dive into the NVMe world: how we are using it today, and where we are going with it in the future.
To fully appreciate NVMe devices, we need to go back in time. Although there were multiple CPU-to-storage protocols in the early days of computing, one that gained popularity and therefore standardization was the Small Computer System Interface (SCSI). This standard allowed efficient communication between a computer’s CPU and the locally attached disk drives.
When storage arrays came along, we needed a means to attach large numbers of drives to the CPUs while maintaining use of the SCSI standard. From this need, Serial Attached SCSI (SAS) was born. This development allowed continued use of the SCSI command set while attaching numerous drives to a shared bus.
And so it has been for the past couple of decades. The actual drive technology could change (SATA or SAS drives), but the bus and command set remained the same. The arrival of flash solid-state drives (SSDs), however, has prompted change. You can place an SSD on a SAS interface and bus (which most vendors, NetApp included, do today). But the 550MBps-to-600MBps limit of a SAS interface is a bottleneck for the SSD, which can deliver much more.
NVMe removes this bottleneck. By using a new command set, a massive number of I/O queues (the standard allows up to 64K queues, each up to 64K commands deep), and a direct connection to the CPU through the PCIe bus, an NVMe-connected SSD can push upward of 4000MBps. That's quite a boost over the 550MBps on a SAS interface.
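To put that bandwidth gap in concrete terms, here is a quick back-of-the-envelope sketch in Python. The throughput figures are the nominal ceilings quoted above, not measured results, and the 100GB dataset is just an illustrative workload:

```python
# Back-of-the-envelope comparison of sequential-read ceilings,
# using the nominal per-device figures cited in the article.

SAS_MBPS = 600    # practical ceiling of a 6Gbps SAS lane, in MB/s
NVME_MBPS = 4000  # cited ceiling for an NVMe-attached SSD, in MB/s

def seconds_to_read(size_gb: float, throughput_mbps: float) -> float:
    """Time in seconds to stream size_gb gigabytes at throughput_mbps MB/s."""
    return size_gb * 1000 / throughput_mbps

# Streaming a hypothetical 100GB dataset:
sas_time = seconds_to_read(100, SAS_MBPS)    # ~166.7 s
nvme_time = seconds_to_read(100, NVME_MBPS)  # 25.0 s
print(f"SAS:  {sas_time:.1f} s")
print(f"NVMe: {nvme_time:.1f} s")
print(f"Speedup: {sas_time / nvme_time:.1f}x")
```

The raw interface speed alone is worth a roughly 6–7x reduction in transfer time, before the deeper queues and shorter code path even come into play.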
Such an improvement almost always results in a massive technology shift, and NVMe will be no exception. But there is one final hurdle. Today the NVMe standard requires a direct connection between the drive and the system's PCIe bus. That's fine for a laptop, but it is not ideal when designing a large storage system. We need a means to place these drives on a shared fabric without losing any of the massive bandwidth and incredibly low latency (6µs on average). Several NVMe over Fabrics transports are in the works: RDMA-based options such as RoCE and iWARP over converged Ethernet, and FC-NVMe over Fibre Channel. NetApp is an NVMe promoter (oddly, one of the few storage companies on the project), and as such we have direct access to the protocols as they are ratified. As soon as they are ready for prime time, we will be there.
So, is NVMe only a promise of a future technology? Hardly. As I mentioned, NVMe is available today on system-connected drives, and so all the new NetApp® systems (FAS2600, FAS8200, and FAS9000) come with an NVMe-connected drive.
We use this drive as a NetApp Flash Cache™ device, accelerating the entire storage system. The massive I/O capability and low latency allow the device to accelerate storage workloads well beyond what the underlying media could deliver.
NVMe is also the protocol that will open up usage of the next-generation storage medium known as storage class memory. To realize the benefits of this improved medium, an ultralow-latency protocol is required. Having these ports on the new storage controllers opens up the possibility of accelerating even all-flash arrays into new realms of performance without a complete change of the underlying media.
While other vendors are making noise about NVMe, NetApp is deploying it today and is working with the industry to make it a standard for tomorrow.