NetApp’s position on storage protocols is that we support all of them through our native multi-protocol architecture. Native means that within our Storage Processors, or Controllers as we typically refer to them, the protocols we support are enabled through the native operating system (Data ONTAP), and no additional hardware is needed to support those protocols beyond the interfaces required to connect to the medium for the specific protocol.
NetApp published a technical report (TR-3697) on the performance differences among the protocols when used with VMware ESX 3.5.
http://media.netapp.com/documents/tr-3697.pdf
TR-3697 was focused on performance and showed that under an average workload there is only a 9% difference among the protocols. However, performance is often only one piece of the equation when choosing a protocol. I find that existing protocol experience, protocol management requirements, and organizational comfort also factor into the choice.
To address your question directly: all of the protocols have a role in Virtual Desktop Infrastructure, and the protocol you use depends on the requirements of your specific environment.
The following are a few customer examples, their circumstances and the protocols they chose.
1.) Small customer deploying 50 virtual desktops to support office and administrative staff at home and in the office.
Protocol Choice: This customer chose iSCSI because they didn’t have an existing SAN and were sensitive to the size of the investment in the project. The customer had just deployed a new network to support IP telephony, so their switching infrastructure had recently been modernized. The best option for them was to carve out a VLAN in the new switching infrastructure for iSCSI connections. Our FAS2050 with iSCSI bundled provided them a low-cost storage infrastructure to support the deployment.
2.) Medium customer deploying hundreds of virtual desktops of two types. The first type of virtual desktop was for office staff who typically had a desktop workstation and used it for office-related document publishing and email. The second type was for engineering staff who worked with large modeling applications hosted on the organization's SAN.
Protocol Choice: The customer chose Fibre Channel for the engineering virtual desktops because the modeling application was hosted on the SAN. They felt that carving out LUNs to host each engineer's modeling repository gave the user high-performance, direct access to the files they used on a daily basis. The virtual desktop infrastructure allowed the engineers to work remotely or in the office with high-performance access to their repositories. The office staff virtual desktops were hosted on NFS volumes because there was a large quantity of these virtual desktops. Hosting the office staff desktops on block storage would have required dedicating several LUNs (~4-6) for the quantity of virtual desktops being deployed, while NFS has proven to scale to larger numbers of virtual machines per datastore without impacting virtual machine performance. Their particular requirement was satisfied by deploying a 1TB NFS volume and enabling NetApp’s ASIS de-duplication for that volume.
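To give a rough sense of why a single deduplicated 1TB NFS volume can host that many office desktops, here is a minimal back-of-the-envelope sketch in Python. The 1TB volume and ASIS dedup come from the example above; the per-desktop disk size and the dedup savings rate are illustrative assumptions, not measurements from this customer.

# Back-of-the-envelope sizing for a deduplicated NFS datastore.
# The 1 TB volume and ASIS dedup come from the example above; the
# per-desktop disk size and the dedup savings rate are illustrative
# assumptions, not figures from that customer.

VOLUME_TB = 1.0          # NFS volume size from the example
DESKTOP_GB = 10.0        # assumed system drive per office desktop
DEDUP_SAVINGS = 0.70     # assumed 70% savings on near-identical OS images

volume_gb = VOLUME_TB * 1024
effective_gb_per_desktop = DESKTOP_GB * (1 - DEDUP_SAVINGS)
desktops = int(volume_gb / effective_gb_per_desktop)

print(f"Logical space per desktop : {DESKTOP_GB:.0f} GB")
print(f"Physical space after dedup: {effective_gb_per_desktop:.0f} GB")
print(f"Desktops per 1 TB volume  : ~{desktops}")

With those assumed numbers the volume comfortably holds a few hundred desktops; the real savings depend on how similar the desktop images actually are.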
3.) Large customer deploying thousands of virtual desktops to support numerous call center agents. Each desktop was standardized for the project an agent was assigned to.
Protocol Choice: This customer ultimately chose NFS because they had no Fibre Channel infrastructure in place and NFS provided the right mix of scalability, performance, and manageability. iSCSI was an initial consideration. Of primary importance was the need to support a high density of VMs on each ESX host; the storage protocol needed to support a large quantity of storage with minimal impact on the host ESX server. Additionally, they were already planning to centralize the management of thousands of virtual desktops, and the infrastructure supporting them needed to be easy to manage. iSCSI proved to require more CPU from the host ESX server (similar results are detailed in TR-3697, referenced above), and it was expected that they would need to deploy hundreds of LUNs for the VMs. Fibre Channel performed well, but they didn't have an existing fabric, and they noticed performance dropping as larger quantities of virtual machines were placed in a datastore, so the expectation was that they would need a large number of LUNs for Fibre Channel to scale. NFS proved to support very high numbers of virtual machines in each datastore (~500 or more) and had far less impact than iSCSI on the CPU cycles of the ESX host. The mixture of high density, solid performance, and a small number of storage volumes to manage meant that NFS most closely met their storage protocol requirements.
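To make the datastore-count argument concrete, here is a small illustrative calculation in Python. The ~500 VMs per NFS datastore figure comes from the example above; the total desktop count and the VMFS (LUN) density are assumed rule-of-thumb numbers, not figures from this customer.

# Rough comparison of how many datastores a few thousand desktops imply
# under different per-datastore density assumptions.

TOTAL_DESKTOPS = 3000   # "thousands" of call center desktops (illustrative)

density = {
    "VMFS datastore on a LUN (FC/iSCSI), assumed": 30,   # conservative rule of thumb, not a hard limit
    "NFS datastore (figure from the example above)": 500,
}

for datastore_type, vms_per_datastore in density.items():
    # Round up: a partially filled datastore still has to be created and managed.
    count = -(-TOTAL_DESKTOPS // vms_per_datastore)
    print(f"{datastore_type}: ~{count} datastores for {TOTAL_DESKTOPS} desktops")

Under those assumptions the block-based design lands in the range of a hundred or more LUNs, while NFS needs only a handful of volumes, which is the manageability difference this customer was reacting to.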
NFS, iSCSI & FC Protocol Summary: There is no hard-and-fast rule that one protocol is better for VDI than another. There are circumstances within an organization that can make one protocol better suited to the VDI workload being deployed. The great thing about NetApp is that we support each of these protocols natively, giving each of our customers the flexibility to make the choices that are most beneficial for supporting their VDI workload.
FCoE: What does it mean for VDI?
The second part of your question asked what role FCoE will play in VDI. FCoE is literally Fibre Channel on Ethernet. Many people assume that when the word Ethernet is used, IP must somehow be involved; FCoE does not involve IP (a small sketch of the framing follows this section). Fibre Channel over Ethernet takes the traditional zoning and forwarding logic of Fibre Channel and overlays it onto the Ethernet wire. To accommodate this and provide a high level of service, a new type of Ethernet is deployed.
This new Ethernet is known as Data Center Ethernet (DCE) or Converged Enhanced Ethernet (CEE); the two names are typically tied to a manufacturer.
DCE: Cisco
CEE: Brocade and IBM.
This new type of Ethernet provides a means to transport data with a lossless service, among other features, and that lossless service is required to transport Fibre Channel. FCoE will initially follow the same use case guidelines as Fibre Channel, so if you have, or were planning to use, Fibre Channel for a certain VDI workload, you would probably use FCoE for that workload as well. As the protocol evolves and we gain more experience with it, I believe its use cases will evolve as well.
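To make the "no IP" point above concrete, here is a minimal Python sketch of the framing, assuming the standard EtherType values (0x8906 for FCoE, 0x0800 for IPv4); the MAC addresses and payload placeholders are purely illustrative.

# An FCoE frame carries an encapsulated FC frame directly inside an
# Ethernet frame identified by its own EtherType; there is no IP or TCP
# header in the stack. Contrast with iSCSI, which rides on IP and TCP.
import struct

ETHERTYPE_IPV4 = 0x0800   # an iSCSI packet's Ethernet frame carries IP, then TCP, then the iSCSI PDU
ETHERTYPE_FCOE = 0x8906   # an FCoE frame carries an encapsulated FC frame, no IP at all

def ethernet_header(dst_mac: bytes, src_mac: bytes, ethertype: int) -> bytes:
    # Basic Ethernet II header: destination MAC, source MAC, EtherType.
    return struct.pack("!6s6sH", dst_mac, src_mac, ethertype)

dst = bytes.fromhex("0efc00010203")   # placeholder MAC addresses
src = bytes.fromhex("001122334455")

fcoe_frame  = ethernet_header(dst, src, ETHERTYPE_FCOE) + b"<encapsulated FC frame>"
iscsi_frame = ethernet_header(dst, src, ETHERTYPE_IPV4) + b"<IP header><TCP header><iSCSI PDU>"

print("FCoE :", fcoe_frame[:14].hex(" "), "...")
print("iSCSI:", iscsi_frame[:14].hex(" "), "...")

The practical consequence is that FCoE keeps Fibre Channel's zoning and forwarding model and simply changes the wire it runs on, which is why its early use cases mirror Fibre Channel's.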
Hope this helps,
Trey