
Flash: Acknowledge the Present to Be Successful in the Future

By Jay Kidd, Chief Technology Officer

 

One of my favorite quips in IT is that “God created the world in seven days because He didn’t have to worry about the installed base.” It is a useful lens for understanding the difference between disruptive and transformative technologies.

 

There is no question that flash is a disruptive technology in enterprise storage. Read latency on SSDs is compellingly lower than on HDDs. Power consumption per gigabyte is lower. And so on. Flash is showing up in every enterprise storage architecture and is driving the creation of several new ones. And there are great proof points about how it lowers the cost per desktop for VDI or speeds up SQL databases.

 

As with most disruptive technologies, there is also a lot of hyperbolic nonsense out there. Flash replacing disks across the board? The all-flash data center? Not until the cost of flash comes way down (the most optimistic projections have TLC NAND still costing 8 to 10 times as much as the lowest-cost SATA disks by 2020). But won’t deduplication and compression close the gap? No. These technologies have been applied to disks for years and work just as well there, and a data-reduction ratio that applies equally to both media leaves the relative cost gap unchanged.
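To make that last point concrete, here is a minimal sketch of the arithmetic. The prices and reduction ratio are purely illustrative assumptions of mine, not figures from the projections cited above or from any vendor price list:

```python
# Minimal sketch with assumed, illustrative prices and reduction ratios;
# none of these figures come from the post or from any vendor pricing.

def effective_cost_per_gb(raw_cost_per_gb: float, reduction_ratio: float) -> float:
    """Cost per usable gigabyte after data reduction (dedupe + compression)."""
    return raw_cost_per_gb / reduction_ratio

# Hypothetical raw media costs ($/GB), chosen only to mirror a ~10x gap.
flash_raw = 0.50
disk_raw = 0.05

# Apply the same assumed 4:1 reduction ratio to both media.
ratio = 4.0
flash_eff = effective_cost_per_gb(flash_raw, ratio)
disk_eff = effective_cost_per_gb(disk_raw, ratio)

print(f"Effective flash $/GB: {flash_eff:.3f}")
print(f"Effective disk  $/GB: {disk_eff:.3f}")
print(f"Cost gap before vs. after reduction: "
      f"{flash_raw / disk_raw:.0f}x vs {flash_eff / disk_eff:.0f}x")
# The gap is identical: an equal reduction ratio cancels out of the comparison.
```

Whatever ratio you assume, it divides out of both sides, which is why data reduction alone does not close the raw-media cost gap.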

 

Disruption is based on the speeds and feeds of a new technology. Transformation is based on how that technology can be applied to do meaningful work.

 

To make the journey from disruptive to transformative, a new technology must evolve through three phases of buyers:

  1. Those who buy it to see whether it works
  2. Those who buy it to see whether it can be put to useful work in specific places
  3. Those who buy it to apply it to a broad range of work they need to get done

This follows Geoffrey Moore’s Crossing the Chasm model, but there are important factors in moving from each level to the next. The greatest challenge in becoming truly transformative, and being applied to a broad range of work, is being able to gracefully complement existing solutions in a way that is economically viable, operationally simple, and easy to integrate into what is already in place. If the cost of adoption is a complete change in operations workflow or a complete inability to fit into existing management frameworks, then a product will rarely make it out of the niche category.

 

The irony is that for disruptive technologies to be adopted broadly and quickly, they really need to be nondisruptive to operations and budgets.

Flash is showing up in the enterprise storage array market in two main forms: purpose-built all-flash arrays (AFAs) and hybrid storage systems that combine flash with HDDs. The AFAs deliver higher IOPS and lower latency than the hybrid systems do, but they require adopting yet another storage product for what is currently a pretty small set of use cases. Meanwhile, enterprise storage systems that are well established in companies’ operational workflows are evolving to fully utilize the performance of SSDs, both as caching and tiering layers and as primary storage. The line between the two is blurring.

 

There will always be products to serve the use cases in which performance matters above all else, but we are entering the phase of the market where you can get 90% of the performance of a specialized architecture and all the operational compatibility and data management features that you know and love.  

That is transformative. 

 

Stay tuned for my next post, where I share my thoughts on NetApp’s recent flash announcements.