Part 2 of a 4-part series on choosing media for NetApp FAS storage
By Tushar Routh, Sr. Manager, Storage Products, NetApp
In the previous post in this series, I looked at some of the available HDD options. This time I want to focus on what’s happening with solid-state drives (SSDs) and how you can deploy them either for persistent storage or as part of a Flash Pool. Future posts will dig into deployment guidelines and take a look at NetApp Storage Encryption for the security-conscious.
SSD capacities have grown rapidly in recent years. While this growth will continue, the flash memory used in SSDs is running up against the same lithography limits as other types of semiconductors. The path to higher flash capacity has been to shrink the feature size on each chip so that more bits fit per chip. Current NAND flash devices use a 2Xnm-class process (20-29nm feature size) and are rapidly moving to a 1Xnm process (10-19nm feature size). At the same time, enterprise SSDs have transitioned from single-level cell (SLC) flash components to multi-level cell (MLC) devices.
As with HDDs, NetApp is introducing new SSDs on an aggressive schedule. We currently offer 200GB and 800GB capacities and will be adding more options in coming months to provide a broader range of choices for greater storage optimization.
Today, you can use SSDs either as persistent storage—like any other type of drive—or as part of a Flash Pool that combines HDDs with SSDs to accelerate random reads and writes. Here are some things to keep in mind for each type of deployment:
NetApp likes to say that Flash Pool combines the capacity of HDD with the performance of flash, but it’s really more than that. SSDs provide the most benefit for random, transactional workloads, while HDDs are actually quite good at sequential workloads, especially on a cost basis. Combining the two types of media in a single aggregate lets you get transactional performance from the SSDs and sequential throughput from the HDDs without having to know the exact I/O behavior of every workload when you are architecting storage.
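The idea of steering each I/O to the medium that handles it best can be sketched in a few lines. This is a toy model, not how Flash Pool actually makes caching decisions in Data ONTAP; the class name, the block-count threshold, and the contiguity check are all illustrative assumptions.

```python
from dataclasses import dataclass


@dataclass
class IORequest:
    offset: int  # starting logical block address
    length: int  # length in blocks


class HybridAggregate:
    """Toy model of a hybrid SSD + HDD aggregate: small random I/O is
    routed to the SSD tier, large or contiguous (sequential) I/O to
    the HDD tier. Purely illustrative, not ONTAP's real policy."""

    def __init__(self, sequential_threshold: int = 64):
        self.sequential_threshold = sequential_threshold  # blocks
        self.last_end = None  # end LBA of the previous request

    def route(self, req: IORequest) -> str:
        # Treat a request as sequential if it continues the previous
        # request or is large enough on its own.
        sequential = (self.last_end == req.offset
                      or req.length >= self.sequential_threshold)
        self.last_end = req.offset + req.length
        return "hdd" if sequential else "ssd"


agg = HybridAggregate()
print(agg.route(IORequest(0, 128)))     # large sequential -> hdd
print(agg.route(IORequest(1000, 8)))    # small random -> ssd
print(agg.route(IORequest(1008, 8)))    # continues previous -> hdd
```

The point of the sketch is that the aggregate, not the administrator, makes the per-I/O placement decision, which is why you don’t need to profile every workload up front.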
While there’s no question that SSDs are changing the game when it comes to storage subsystem design, flash memory has some limitations. Flash cells can only endure a limited number of write cycles before they wear out. Newer technologies such as phase-change memory and resistive RAM (RRAM) are being discussed as ways to overcome these limitations, but it’s too early to tell which, if any, of these will emerge as a clear leader. In any case, it’s clear that solid-state storage in some form will continue to have an expanding role in the overall storage market, especially as the economics improve.
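The write-endurance limit mentioned above is why SSD controllers perform wear leveling: spreading erase cycles evenly across physical blocks so no single block wears out early. A minimal sketch of the idea, assuming a hypothetical controller that simply directs each write to the least-worn block:

```python
class WearLeveler:
    """Minimal sketch of dynamic wear leveling: each new write goes to
    the physical block with the fewest erase cycles, so wear spreads
    evenly instead of exhausting a few frequently written blocks.
    Real SSD firmware is far more sophisticated (garbage collection,
    static wear leveling, overprovisioning); this only shows the core
    balancing idea."""

    def __init__(self, num_blocks: int):
        self.erase_counts = [0] * num_blocks

    def write(self) -> int:
        # Pick the least-worn physical block for the next write.
        block = min(range(len(self.erase_counts)),
                    key=self.erase_counts.__getitem__)
        self.erase_counts[block] += 1
        return block


wl = WearLeveler(num_blocks=4)
for _ in range(8):
    wl.write()
print(wl.erase_counts)  # wear is spread evenly: [2, 2, 2, 2]
```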
The next post will provide a few guidelines for deploying an effective storage subsystem, including shelf and interconnect details.