Earlier this year we released Flash Accel 1.1, which extended Data ONTAP capabilities to the server by creating a caching space that complements the NetApp Virtual Storage Tier (VST). This let us use flash devices more effectively and eliminated potential data protection problems without creating isolated silos of data. In addition, Flash Accel provides:
What’s new with Flash Accel 1.2?
Now let me brief you on how Flash Accel works and some of the technical details.
To use Flash Accel, a VIB needs to be installed on the ESXi host. The flash device can then be carved up into multiple cache spaces, which are presented to the Windows guest OS (Linux support is coming in the future). You can have only one cache space per VM, but you can enable caching for up to 32 VMs. The guest OS must have an agent installed to leverage the flash-based cache space. All of the Flash Accel configuration work is done via the new Flash Accel Management Console (FMC), which is used for installation, provisioning, and assigning cache to VMs. The console is also available as a plug-in to VSC 4.2, which runs in VMware vCenter. If you can't decide which management console to use, there is a great knowledgebase article that describes the benefits of VSC and FMC for Flash Accel-enabled VMs.
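As a rough sketch of the host-side step, a VIB is typically installed from the ESXi shell with esxcli. The package path and name below are placeholders (the actual Flash Accel bundle name will differ), and the cache spaces themselves are provisioned afterward from the Flash Accel Management Console, not from the command line:

```shell
# Assumption: the VIB has already been copied to a datastore on the host;
# the path and file name below are placeholders, not the real package name.

# Put the host into maintenance mode before installing host-level software.
esxcli system maintenanceMode set --enable true

# Install the VIB (generic esxcli syntax).
esxcli software vib install -v /vmfs/volumes/datastore1/flash-accel.vib

# Confirm the VIB is registered, then leave maintenance mode
# (reboot first if the install output requires it).
esxcli software vib list | grep -i flash
esxcli system maintenanceMode set --enable false
```

From there, the per-VM work (carving cache spaces, installing the guest agent, assigning cache) is done through the FMC or the VSC 4.2 plug-in as described above.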
Flash Accel consists of three components:
Flash Accel Architecture and Internals
Flash Accel and vMotion
vMotion is fully supported with Flash Accel; this support requires that cache space for a VM be reserved on all applicable hosts in the datacenter. A few migration policies need to be set before you enable vMotion for Flash Accel-enabled VMs; these are configured in the FMC console under Console Settings > Migration. The migration scope can be Cluster, Data Center, or Host. When choosing the default migration scope, remember that cache space must be reserved on each host a VM may migrate to, so choosing Cluster instead of Datacenter results in less overall flash consumed per VM.
Flash Accel Performance
When tested with an OLTP workload, Flash Accel was able to offload 80% of the I/Os from the storage array to the server. When deployed together with Flash Cache, it reduced storage disk utilization by 50%, which also resulted in a 60% reduction in array CPU utilization compared to using Flash Cache alone.
Flash Accel Integration
Now that you understand the internals of Flash Accel and how to deploy it, you may be wondering how to identify the VMs that are good candidates for Flash Accel. The OnCommand Insight team has built a plugin for Flash Accel that provides visibility from the VM through the ESX host and into the storage. Insight monitors the configuration and performance of all the elements in the infrastructure and gives you planning capability. You can find more details about it here.
With the release of VSC 4.2, you can manage Flash Accel-enabled VMs within the same console; all you need is the updated plugin, which you can download and install from our support site. Flash Accel is a released product and is a free download for Data ONTAP customers from the NetApp support site. The demo for Flash Accel 1.2 is available on NetApp Communities.