
VDI Storage Protocol

timsmith16

What is the best storage protocol for VDI deployments, specifically with VMware: Fibre Channel, iSCSI, or NFS? And how does FCoE fit into VDI once organizations begin to deploy it?



7 REPLIES

treyl

NetApp's position on storage protocols is that we support all of them through our native multiprotocol architecture. Native means that within our storage processors, or controllers as we typically refer to them, the protocols are enabled through the native operating system (Data ONTAP); no additional hardware is needed beyond the interfaces required to connect to the medium for each protocol.
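To illustrate, here is a rough sketch of what enabling all three protocols looks like on a Data ONTAP 7G controller. The license codes are placeholders, and the exact steps vary by version:

    license add XXXXXXX    # NFS license key (placeholder)
    license add XXXXXXX    # iSCSI license key (placeholder)
    license add XXXXXXX    # FCP license key (placeholder)
    nfs on                 # begin serving NFS
    iscsi start            # start the software iSCSI target
    fcp start              # start the FC target service (requires FC target ports)

No protocol-specific hardware is involved beyond the NICs or FC ports the traffic rides on.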

NetApp published a technical report (TR-3697) on the performance differences among the protocols with VMware ESX 3.5.

http://media.netapp.com/documents/tr-3697.pdf                       

TR-3697 focused on performance and showed that under an average workload there is only about a 9% difference across the protocols. However, performance is often only one piece of the equation when choosing a protocol. I find that protocol experience, protocol management requirements, and organizational comfort also factor into the choice.

To address your question directly: all of the protocols have a role in a virtual desktop infrastructure, and the protocol you use depends on the requirements of the specific environment.

The following are a few customer examples, their circumstances and the protocols they chose.

1.) Small customer deploying 50 virtual desktops to support office and administrative staff at home and in the office.

Protocol Choice: This customer chose iSCSI because they didn't have an existing SAN and were sensitive to the size of the investment in the project. They had just deployed a new network to support IP telephony, so their switching infrastructure was already modernized. The best option for them was to carve out a VLAN in the new switching infrastructure for iSCSI connections. Our FAS2050 with the iSCSI bundle provided a low-cost storage infrastructure to support the deployment.
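As a rough sketch of the ESX 3.x side of that kind of setup (the vSwitch name, VLAN ID, and addressing here are hypothetical):

    # Create a VMkernel port group for iSCSI on an existing vSwitch,
    # tagged with the VLAN carved out for storage traffic
    esxcfg-vswitch -A iSCSI vSwitch1
    esxcfg-vswitch -v 100 -p iSCSI vSwitch1
    # Give the VMkernel an address on the iSCSI VLAN
    esxcfg-vmknic -a -i 192.168.100.11 -n 255.255.255.0 iSCSI
    # Enable the ESX software iSCSI initiator
    esxcfg-swiscsi -e

Keep in mind that the ESX 3.x software initiator also needs Service Console connectivity to the iSCSI network for session establishment.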

2.) Medium customer deploying hundreds of virtual desktops of two types. The first type was for office staff who typically had a desktop workstation and used it for office-related document publishing and email. The second type was for engineering staff who worked with large modeling applications hosted on the organization's SAN.

Protocol Choice: The customer chose Fibre Channel for the engineering virtual desktops because the modeling application was hosted on the SAN. They felt that carving out a LUN to host each engineer's modeling repository gave the user high-performance, direct access to the files they used on a daily basis, and the virtual desktop infrastructure let the engineer work remotely or in the office with equally fast access to that repository. The office staff virtual desktops were hosted on NFS volumes because of the large quantity of these desktops; hosting them on block storage would have meant dedicating several LUNs (~4-6) to the deployment. NFS has proven to scale to larger numbers of virtual machines per datastore without impacting virtual machine performance. Their particular requirement was satisfied by deploying a 1TB NFS volume and enabling NetApp's A-SIS deduplication for that volume.
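On the controller side, the two layouts might look something like the following (Data ONTAP 7G syntax; the aggregate, volume names, sizes, and WWPN are assumptions for illustration):

    # FC: a dedicated LUN for one engineer's modeling repository
    vol create eng_repo aggr1 250g
    lun create -s 200g -t vmware /vol/eng_repo/engineer1
    igroup create -f -t vmware esx_engineering 21:00:00:e0:8b:01:02:03
    lun map /vol/eng_repo/engineer1 esx_engineering

    # NFS: one 1TB volume for all the office desktops, deduplicated with A-SIS
    vol create vdi_office aggr1 1t
    exportfs -p rw=192.168.1.0/24,root=192.168.1.0/24 /vol/vdi_office
    sis on /vol/vdi_office
    sis start -s /vol/vdi_office    # scan and deduplicate existing data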

3.) Large customer deploying thousands of virtual desktops to support numerous call center agents. Each desktop was standardized for the project an agent was assigned to.

Protocol Choice: This customer ultimately chose NFS because they had no Fibre Channel infrastructure in place and NFS provided the right mix of scalability, performance, and manageability. iSCSI was an initial consideration. Of primary importance was the need to support a high density of VMs on each ESX host, so the storage protocol had to support a large quantity of storage with minimal impact on the host. Additionally, they were already planning to centralize the management of thousands of virtual desktops, and the supporting infrastructure needed to be easy to manage.

iSCSI proved to require more CPU from the host ESX server (similar results are detailed in TR-3697, referenced above), and it was expected they would need to deploy hundreds of LUNs for the VMs. Fibre Channel performed well, but they didn't have an existing fabric, and they noticed performance dropping as larger quantities of virtual machines were placed in a datastore; the expectation was that they would need a large number of LUNs for Fibre Channel to scale.

NFS proved to support very high numbers of virtual machines in each datastore (~500 or more) and had far less impact than iSCSI on the CPU cycles of the ESX host. That mixture of high density, solid performance, and a small quantity of storage volumes to manage meant NFS most closely met their storage protocol requirements.
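To give a feel for the manageability point: with NFS, every host in the cluster mounts the same large datastore with a single command apiece. The controller and volume names below are hypothetical; note that ESX 3.5 allows only 8 NFS datastores per host by default, a limit you can raise:

    # Run on each ESX host: mount the shared call-center volume as a datastore
    esxcfg-nas -a -o filer1 -s /vol/vdi_callcenter vdi_callcenter
    # Raise the default cap of 8 NFS datastores per host if more are needed
    esxcfg-advcfg -s 32 /NFS/MaxVolumes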

NFS, iSCSI & FC Protocol Summary: There is no hard and fast rule about one protocol being better than another for VDI. There are circumstances within an organization that can make one protocol better suited to the VDI workload being deployed. The great thing about NetApp is that we support each of these protocols natively, giving each of our customers the flexibility to make the choice that best supports their VDI workload.

FCoE - What does it mean for VDI?

The second part of your question was about the role FCoE will play in VDI. FCoE is literally Fibre Channel over Ethernet. Many people assume that the word Ethernet somehow means IP is involved; FCoE does not involve IP. Fibre Channel over Ethernet takes the traditional zoning and forwarding logic of Fibre Channel and overlays it onto the Ethernet wire. To accommodate this and provide a high level of service, a new type of Ethernet is deployed:

DCE (Data Center Ethernet) or Converged Enhanced Ethernet (CEE). The names are typically tied to a manufacturer:

DCE: Cisco

CEE: Brocade and IBM.

This new type of Ethernet provides, among other features, a lossless transport service, which is required to carry Fibre Channel. FCoE will initially follow the same use-case guidelines as Fibre Channel: if you have been using, or were planning to use, Fibre Channel for a certain VDI workload, you would probably use FCoE for that workload as well. As the protocol evolves and we gain more experience with it, I believe the use cases will evolve too.
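To make the "Fibre Channel logic over an Ethernet wire" idea concrete, here is a hypothetical sketch in the style of Cisco NX-OS for a converged switch, where a virtual FC interface is bound to a physical 10G Ethernet port (all names, VLAN/VSAN IDs, and port numbers are assumptions):

    feature fcoe                    ! enable FCoE on the switch
    vlan 200
      fcoe vsan 10                  ! map an Ethernet VLAN to an FC VSAN
    interface vfc1                  ! a virtual Fibre Channel interface...
      bind interface Ethernet1/1    ! ...carried over this Ethernet port
      no shutdown
    vsan database
      vsan 10 interface vfc1        ! normal FC zoning and forwarding apply

The point is that no IP stack appears anywhere; the FC frames simply ride inside Ethernet frames on a lossless class of service.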

Hope this helps,

Trey

keitha

Trey summed that up beautifully. One other item I would mention is a product NetApp is about to release called RCU 2.0 (Rapid Cloning Utility). This tool is a plugin for Virtual Center that allows the VC admin to clone out massive numbers of virtual machines that are fully customized and pre-deduplicated! The upcoming 2.0 release will support only NFS, so that may sway a few customers toward NFS for their VDI deployments. I'm working on a full blog post about RCU 2.0 which will be posted on the NetApp Virtualization Team Blog shortly. While you are there, be sure to read through the excellent posts Abhinav and Chris made on VDI best practices. Top-notch stuff!

The actual deployments I have worked on have been a mix of FC and NFS (I haven't done one on iSCSI yet). The customers that chose FC did so for a couple of reasons. In some cases it was to minimize risk: a VDI deployment introduces a lot of new technology into most customers' environments (the connection broker, potentially new endpoint devices, and many new processes), and because these customers knew FC and didn't know NFS, selecting FC gave them one less new technology to worry about. Another client simply didn't feel their network was up to the task of supporting a VDI environment. One of my current clients is still making the decision; they are leaning toward FC for the reasons above but are keeping the NFS option open, as they want to see the RCU 2.0 utility.

As Trey mentioned, it is not so much the release of FCoE that will change this, but the ability to run multiple protocols across the same wire that will transform how we design and deploy VDI environments. Since both FCoE and NFS would run over the same 10G connections, performance and infrastructure considerations largely fall out of the picture; what you are left with is choosing what works best for the service you are trying to provide. I can see FCoE for application data and applications that need block-level access, and NFS for the applications and services that don't. Ultimately the decision may come down to what protocol a customer knows and trusts, but I suspect that as VDI deployments grow in number and size, NFS may become the more popular choice.

Keith

s5049431A

I am very interested in the RCU and have been looking it over for a little while now. I have seen some things about the upcoming RCU 2.0 and it looks great. Now I just want to know whether a first version of RCU 2.0 will be available soon. I am trying to set up a VDI with VMware and ESX in the next 3 months, and it would be fantastic if I could use this software in my setup. Does somebody have an idea when it will be available?

greetz

gracely

Look for an announcement about RCU in the first week of April. 

keitha

I completed that blog, so look here for a sneak peek and the YouTube video showing the tool.

Keith

angelage

RCU 2.0 will be officially launched on April 7th. It can be downloaded from the NOW product site as early as April 2nd. It will require Data ONTAP 7.3.1P2.

Of all the VDI deployments and pilots that I track (roughly 60 cases), approximately 75% are NFS. The majority of the rest are FC.

amiller_1

So... complete agreement about presenting to customers that all protocols are possible (and if there is a strong preference or comfort level, there's no reason not to respect it).

However, so far I'm mostly finding that it comes down to iSCSI or NFS (I'm primarily in the SMB space personally). When we start talking benefits/features/etc., most people want to do NFS (both for VDI and for VMware in general). But the breaking point usually comes down to the cost of NFS, unfortunately. If NFS and iSCSI were priced identically, or even close, I think we'd see 90%+ on NFS; as it is, it's something of a mix.

For the NFS side, see this post for general NFS benefits...

http://viroptics.pancamo.com/2007/11/why-vmware-over-netapp-nfs.html

and this one for details on the soon-to-be-released RCU 2.0; very cool stuff, but NFS only.

http://blogs.netapp.com/virtualization/2009/03/sneak-peek-netapp-rcu-20.html

Edit: just realized the link above was already posted... my mistake.
