By Bruce Van **bleep**, STEC, Inc.
In today’s typical high-performance business environment, large numbers of users concurrently accessing applications place a significant demand on the storage subsystems serving those virtual machines. This translates into more expensive storage to deliver the increased input/output operations per second (IOPS) that application performance requires.
Can Your Storage Throughput Scale to Meet Business Demands?
Historically, as application demand has grown, additional virtual machines (VMs) or ESX hosts have been installed. As these hosts increase the IOPS demand on the Filer’s hard disk drive (HDD) storage subsystem, additional HDD drive trays are added to scale up to the desired throughput. The NetApp controller hardware architecture has a maximum throughput of 2GB/s, while Romley-class servers can sustain 5GB/s, so the storage subsystem becomes a bottleneck no matter how many drive trays are added. If an application requires 10,000 IOPS, a minimum of 50 HDDs is required; in contrast, a single SSD could deliver this level of throughput with relative ease. Clearly, the HDD scaling model rapidly breaks down when cost, space, power and the continuous requirement for greater throughput are evaluated. A recent ESG report highlights more than 2X growth in VMs per ESX host, from 10 (2008) to 24 (2012), and that rate is expected to continue over the next four years, compounding the mismatch between the data rate a host server can request and what a Filer can deliver.
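The drive-count arithmetic above can be sketched quickly. The per-device figures below are illustrative assumptions (roughly 200 random IOPS for a 15K RPM HDD, 40,000 for a modest SSD), not vendor specifications:

```python
import math

def drives_needed(target_iops: int, iops_per_drive: int) -> int:
    """Minimum number of drives to sustain a target random-IOPS rate."""
    return math.ceil(target_iops / iops_per_drive)

# Assumed figures for illustration only:
# ~200 random IOPS per 15K RPM HDD, ~40,000 per SSD.
print(drives_needed(10_000, 200))     # → 50 HDDs
print(drives_needed(10_000, 40_000))  # → 1 SSD
```

The same 10,000-IOPS target that demands a 50-spindle array fits comfortably on a single SSD, which is the cost, space and power mismatch the article describes.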
This performance gap is further exacerbated when the read/write mix through the ESX host is highly random. Random workloads heavily tax the storage subsystem because each HDD access incurs milliseconds of latency while the head seeks to the proper location on the disk. SSD latency is governed by flash memory access time, which is up to 1,000 times faster than an HDD seek.
Maximize Existing NetApp Storage Investments with Caching Software
A simple, effective and inexpensive way to extend the useful life of an existing NetApp investment is to add SSD read caching to the NetApp storage architecture. Two components are required: an SSD, and a caching software application that resides in the host. The cache is implemented with PCI Express SSD technology, which eliminates the costly HDD latency penalty.
SSDs are capable of 40,000 to 100,000 random IOPS, making them ideally suited to the highly randomized workloads generated by the ESX host. Complementing the SSD, a software caching application in the guest VM or ESX host provides the intelligence to automatically migrate copies of frequently accessed data from the Filer’s HDD storage to the SSD. STEC’s EnhanceIO SSD Cache Software then serves requested data to the application from the low-latency, high-throughput SSD rather than from the Filer’s slower HDDs.
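Conceptually, this kind of read cache can be modeled as a promotion policy: blocks read from the Filer often enough get copied into the faster SSD tier and served from there on subsequent reads. The sketch below is a toy illustration of that idea only; it does not represent EnhanceIO’s actual algorithm, and all names and thresholds are assumptions:

```python
class ReadCache:
    """Toy model of host-side SSD read caching: blocks read from the
    backing store often enough are promoted into a faster cache tier.
    Illustrative only; not STEC EnhanceIO's actual design."""

    def __init__(self, backing_store, promote_after=2):
        self.backing = backing_store   # models the Filer's HDD storage
        self.ssd = {}                  # models the PCIe SSD cache
        self.hits = {}                 # per-block access counts
        self.promote_after = promote_after

    def read(self, block_id):
        if block_id in self.ssd:       # cache hit: low-latency SSD path
            return self.ssd[block_id]
        data = self.backing[block_id]  # cache miss: read from the Filer
        self.hits[block_id] = self.hits.get(block_id, 0) + 1
        if self.hits[block_id] >= self.promote_after:
            self.ssd[block_id] = data  # promote the hot block to SSD
        return data
```

A block read once stays on the Filer; read it again and the copy lands in the SSD tier, so the application’s hot working set migrates to flash automatically while cold data never consumes cache space.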
Learn more about joint solutions from STEC and NetApp that can help you accelerate applications and improve performance.
Bruce Van **bleep** is director of sales for content services and social media at STEC, Inc., a leading global provider of solid-state storage solutions. Previously, he served in sales and marketing positions at Adaptec and Brocade. He holds a BSEE degree from Cal Poly San Luis Obispo.