The FIX is in

If the VIX is the Stock Market Volatility Index, then I declare that FIX = Flash (Industry Volatility) Index. And judging by the latest flurry of activity, the FIX is very much in.  Expect much more noise and activity in the Enterprise Flash market throughout 2012 as business leaders in our industry digest the scope of the NAND Flash disruption.  Hint: it won't be over this year, and by the time we're done a few years from now, we won't recognize the current Enterprise Storage hierarchy.


When I served as Vice-Chair of SNIA's Solid-State Storage Initiative almost four years ago, our cross-vendor team of storage specialists realized the disruption upon us would last at least a decade.  The early focus was on SSDs and their related FTL firmware, but it rapidly expanded to new form factors like PCIe cards, new densities incorporating MLC, and sustainable performance specifications.


As the industry rapidly evolved, DRAM Storage Arrays quickly gave way to NAND Flash-based successors and two distinctive camps ultimately emerged: (Updated lists below May 18th thanks to my commenters!)


  • Performance Camp – Focused on consistently low microsecond response times and multiple GB/sec of throughput.  Avere, CacheIQ, Dataram, GridIron, Kaminario, Texas Memory Systems, Violin & WhipTail deliver hard-core performance in a storage array form factor, while Fusion-io and Virident do the same in a PCIe card form factor inside the server.  InfiniBand-attached all-SSD NetApp E-Series arrays also fall into this category, although that is a relatively recent capability of our HPC product line.


  • Value Camp – This group of vendors offers a rich software layer on top of NAND Flash, sacrificing ultimate performance in favor of very high performance combined with advanced storage efficiency and data management features – sometimes including hybrid configs of SSDs alongside spinning HDDs.  Astute Networks, NexGen, Nimble Storage, Nimbus Data Systems, Pure Storage, SolidFire, Starboard Storage, Tintri and the formerly independent XtremIO fall into this camp.  So do FlashCache-enabled NetApp FAS storage arrays.


(BTW – The lists above are not meant to be exhaustive, just the Flash-focused companies I'm aware of. Feel free to comment below with others I haven't listed but should investigate.)



Fad or Trend?

In fact, these two camps are indicative of where we at NetApp believe the storage industry as a whole is going.  We predict the emergence of a:


  • Performance (aka IOPS) Layer – emerging in the storage stack to store an increasingly large working set of hot data, often very close to the application itself running on the server.  This will be complemented by a:
  • Capacity Layer – which will seamlessly integrate with the Performance Layer above, while directly addressing the economics of storing, protecting and managing orders of magnitude more (colder) Big Data than can be cost-effectively stored on solid-state media.



VST is Future Ready

NetApp's Virtual Storage Tier architecture was born out of our unique position in the middle ground between these two camps.  Leveraging Data ONTAP's well-established lead in primary storage efficiency, NetApp has drastically reduced the effective cost of NAND Flash-accelerated storage: deduplicated and thinly cloned FlashCache blocks deliver high-performance FAS arrays to our customers.  After tens of thousands of deployments, NetApp boasts the highest Flash attach rate in the industry.


But we recognized long ago that the Flash disruption doesn't stop at the array.  Application performance demands will continue to pull Flash up the storage stack and into the server.  Over time, even storage semantics themselves will give way to persistent (non-volatile) memory semantics, enabling simpler and faster high-performance, real-time applications.  But Storage Class Memory [PDF] and NetApp's future as a memory vendor will have to wait for another blog post.


In the meantime, Big Data capacity trends and Enterprise-level data management, availability, protection and efficiency requirements will further entrench spinning disk (and resurgent tape) media in a scalable Capacity Layer.  That layer will serve as the foundation of this new storage stack, managing an organization's “single source of truth”, regardless of performance.



Playing Offense or Defense?

EMC will doubtless cast its acquisition of XtremIO as a bold move ushering in yet another new tier of storage (0.5?) and another primary storage option in a crowded portfolio.  However, there is widespread speculation that EMC felt pressure to bolster that fragmented portfolio against NetApp's highly anticipated Goldilocks Scale-Out Virtual Storage Tier.



Collateral Damage

As I alluded to in my introduction above, when the Flash disruption is complete over the next few years (before the expected Storage Class Memory reverberation), the once-lucrative Tier1 Frame Array market will have all but disintegrated.  ESCON/FICON-attached arrays will continue to leech off the relatively moribund mainframe market.  Fault-tolerant data persistence functionality will move from Tier1 Frame Arrays up the stack, all the way to the application layer.  As we've already established, performance capability will move to the server / host layer, leaving data management, protection and efficiency to a shared storage architecture.  A capacity-optimized layer will dominate here, leaving precious little room for archaic Tier1 Frame Arrays.



Playing the Long Game by Looking Ahead

Given all the existing and anticipated Flash-based storage industry disruption, it's shortsighted to map these upcoming Solid-State Storage layers and form factors into categories derived from the HDD era.  Despite any shiny new razzle-dazzle, don't expect the storage arrays of the future to be purchased for performance or availability characteristics.  And don't expect high-performance solid-state storage (Flash or otherwise) to provide cost-effective end-to-end data management, protection or efficiency capabilities. (Object / Archive Storage, anyone?)  Pulled apart from both the Performance and Capacity ends, the era of the all-encompassing Tier1 Storage Frame Array is rapidly sunsetting.


Only comprehensive Unified and Extensible Storage architectures like NetApp’s VST provide a future-ready framework for planning, building and running an infrastructure flexible enough to cost-effectively incorporate new storage media technical trends in a consistent, predictable and sustainable manner.


Starboard Storage is one you did not mention in the value camp. A hybrid SSD and HDD array for mixed workloads - SAN and NAS.

Can you please elaborate on how HDD-based Eseries arrays and Filers can be grouped in with SSD-based arrays? I thought they were fundamentally different things.

Which camp would Cache IQ, Dataram and GridIron Systems fit into?


@Anon - thx for the Starboard Storage tip - I'll check them out and update the blog.

@John - Great catch!  I forgot about CacheIQ and GridIron (shame on me, given some NetApp expats there). I'll also check out Dataram and update above accordingly.

@Nils - Very good question. Let me devote a separate comment to the answer.


@Nils - The Performance group of all-Flash arrays listed in my blog above tend to share an architecture with a streamlined control path containing the proprietary Flash Translation Layer and other media endurance / RAID / failover logic.  That control path is also optimized for fast metadata-only lookups (i.e. lots of volatile DRAM) and usually executes on a general-purpose CPU.  The data path, OTOH, is based on a set of custom ASICs and/or FPGAs in order to rapidly and efficiently move the encapsulated data blocks back and forth from the fast underlying Flash media, in SSD or other form factors.  Coincidentally, due to the HPC target market, NetApp's E-Series arrays have a similar architecture for both the control and data paths, hence their association with that group.

The Value group of all-Flash and hybrid Flash/disk arrays has less commonality around architecture but more around functionality.  The overriding feature set they boast is a level of storage efficiency (often based on dedupe, thin provisioning & cloning) not available from HDD-based primary storage, but easier to implement with more latency- and IOPS-forgiving SSDs / Flash media.  This storage efficiency brings their effective cost down to the ballpark of Enterprise HDD arrays, with one exception.  NetApp is, as usual, the technical outlier among the Enterprise SAN/NAS array camp, since we pioneered the concept of fast dedupe, thin provisioning / cloning (and of course Snapshots) on HDD-based primary storage.  We then extended that directly to Flash, in a hybrid configuration with HDDs.  More details available on Vaughn's blog here:

You can expect us to further extend this architecture down to the SSD layer as well as up to the server / host PCI Flash layer, with some announcements later this year.

The bottom line is that NetApp's primary storage efficiency leadership translates directly to the Value Flash camp today, with the added safety of a multi-decade, battle-hardened software pedigree that none of those start-ups will catch up to or match going forward - because of course we're not standing still.
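For readers curious how block-level dedupe delivers the storage efficiency discussed above, here is a minimal, hypothetical sketch of a content-addressed block store in Python. It is a toy illustration under my own assumptions (fixed-size blocks, SHA-256 fingerprints, everything in memory), not NetApp's or any vendor's actual implementation; all names are illustrative:

```python
import hashlib


class DedupeStore:
    """Toy content-addressed block store: identical blocks are stored once."""

    def __init__(self, block_size=4096):
        self.block_size = block_size
        self.blocks = {}        # fingerprint -> block bytes (one physical copy)
        self.logical_bytes = 0  # total bytes written by clients

    def write(self, data):
        """Split data into fixed-size blocks; store only unseen fingerprints."""
        refs = []
        for i in range(0, len(data), self.block_size):
            block = data[i:i + self.block_size]
            fp = hashlib.sha256(block).hexdigest()
            self.blocks.setdefault(fp, block)  # no-op if block already stored
            refs.append(fp)
        self.logical_bytes += len(data)
        return refs  # the "file" is just a list of fingerprints

    def read(self, refs):
        """Reassemble data from its fingerprint list."""
        return b"".join(self.blocks[fp] for fp in refs)

    def dedupe_ratio(self):
        """Logical bytes written divided by physical bytes actually stored."""
        physical = sum(len(b) for b in self.blocks.values())
        return self.logical_bytes / physical if physical else 0.0
```

Writing ten identical 4 KiB blocks stores one physical block and yields a 10:1 dedupe ratio - which is why highly redundant data (VM images, clones) shrinks the effective cost-per-GB of Flash so dramatically.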

Nice writeup, Val! It'll be interesting to watch the EMC tap dance this week as they introduce a product not yet shipping which has the potential to cut off their sacred cash cow.

In addition to the nice link to Nigel's blog above, it seems Greg Schulz agrees with the two of you.

Great post. Just discovered it from The Register article. Note the latest NTAP vs EMC growth comparisons

Well articulated vision. It makes perfect sense. It will be interesting to see what capabilities you provide with Fusion-io PCI cards for our FAS3270/Dell R810 environment.  It would be a welcome boost for our BI environment.

Curious about your perspective on the capacity layer. Will that be the domain of NFS/CIFS going forward, given that the performance/latency envelope will be taken care of at the server / application-assisted flash memory layer?


Hi Paul - thanks for the comment.  I can't disclose our server-side Flash plans at this stage but if you have access to one of our NDA'd reps, they will have relevant details to share.

OTOH - Your Capacity Layer question is a bit easier.  Networked storage is an ideal configuration for that.  While block protocols will be supported, I predict file protocols will be preferred due to simplicity of configuration management and granularity of data management.  Over time, Object interfaces like CDMI will dominate due to namespace scalability as well as metadata richness. Regardless, Infinite and Immortal storage properties will be hallmarks of the Capacity Layer.

First virtualisation changed everything, now it's flash. What's next that will "change everything" yet again? Big Data? In-memory computing? Regardless, it seems as if Frame Arrays truly are becoming relics of the past!

Has anyone actually looked at the performance difference between flash and disk for application response times? I'm surprised it's taken this long for people to realize big monolithic SANs are doomed.

When will NetApp Mercury ship?

Where do FlashPools fit into the VST architecture?

Great stuff here. Interesting breakdown of both flash's disruption of the enterprise storage market and NetApp's VST. This thread inspired a blog post over at CacheIQ, in fact. We thought we'd continue the discussion and point to our strengths in the value as well as the performance camp.

Click here to read our blog post about the Flash Industry Volatility Index