Installing Flash memory at the server level has become a hot topic, but the big question is implementation. There’s no doubt it makes sense to move Flash closer to your latency-sensitive applications, but how do you do that without disrupting the data management paradigm your shared infrastructure depends on? If all you do is plug Flash into your servers, you’ve got really fast Direct-Attached Storage (DAS). So the question remains: how do you get the performance boost you want without giving up the benefits of shared storage (sharing across servers, high availability, data protection, disaster recovery, and so on)? Not surprisingly, the answer lies in a coherent end-to-end approach we have been building since 2009: the NetApp Virtual Storage Tier, or as many of us like to call it, VST.
With the introduction of our Server Caching program, we are extending VST to the enterprise server level. You now have a complementary set of intelligent caching technologies that coexist across a wide mix of use cases and workloads. With Flash Cache and Flash Pool as key performance and efficiency enablers at the storage array level, server caching adds the flexibility to speed up specific applications without compromising the reliability, availability, and manageability you expect.
The beauty of this is that it is an AND, not an OR, approach: storage-level cache provides the base VST value across all of your shared storage, and adding server cache delivers lower latency, more IOPS, and optimized cost.
Our Server Cache announcement details our open, partner-centric approach to this important new dimension of VST, including new NetApp software, Flash Accel, and the industry’s most open partner alliance for server caching.
With this announcement, we’ve taken VST to a new level and given you new ways to deploy Flash in your shared IT infrastructure.