Performance Characteristics of Datastores Accessed via FCP and NFS on VI3.5

by Frequent Contributor on 2008-11-03 06:31 PM

This paper documents the performance characteristics of Virtual Infrastructure 3.5 datastores accessed via the FCP and NFS protocols. Testing methodology, configuration, and results are presented for connectivity via 4Gbps FC, 1GbE, and 10GbE. The results in this paper show that FCP and NFS are both viable options for high-performance connectivity to shared storage in a Virtual Infrastructure 3.5 environment.

Comments

Really cool, but you place only 2 VMs in one VMFS datastore. It becomes more interesting when you place 10 or 15 VMs in one VMFS datastore (the real world, and like you do in the NFS store).

You will then see some difference, to the advantage of NFS.

Greetings,

Reinoud

Frequent Contributor

Reinoud,

That's a great observation; I didn't even notice that each FCP datastore only had 2 VMs.

I'm going to check with Mike Arndt (the engineer who ran the benchmark) to see why this was done, and whether he has any test results comparing 10 VMs on NFS vs. 10 VMs on FCP.

Are you running NFS in your environment? Have you done any benchmarks?

Cheers, Tony

Greetings,

Are the bandwidth and overhead of the three protocols reasonably comparable? For example, I have ESX hosts that are using dual-pathed 2Gb FCP ports, but according to the FC switch, no single port reaches more than 40 Mbps peak utilization.

If the bandwidth numbers are comparable, and assuming we put the proper infrastructure in place with regard to reliability, could we in theory consolidate these ESX servers onto just a handful of 1GbE ports without incurring a throughput penalty?
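As a rough back-of-the-envelope check (assuming the ~40 Mbps peak figure above and a hypothetical host count, and ignoring protocol overhead), the arithmetic looks something like this:

```python
# Rough headroom sketch -- illustrative only, not a benchmark.
# Assumes the ~40 Mbps peak per FC port observed on the switch counters,
# a hypothetical count of 5 hosts, and ignores protocol overhead,
# microbursts, and the extra ports you would keep for redundancy.

observed_peak_mbps = 40      # per 2Gb FC port, from the switch counters
hosts = 5                    # hypothetical host count
ports_per_host = 2           # dual-pathed FCP

total_peak_mbps = observed_peak_mbps * ports_per_host * hosts
gige_line_rate_mbps = 1000   # 1GbE, before Ethernet/TCP/NFS overhead

print(f"Aggregate observed peak: {total_peak_mbps} Mbps")
print(f"1GbE ports needed at line rate: {total_peak_mbps / gige_line_rate_mbps:.1f}")
# -> 400 Mbps aggregate, i.e. under a single 1GbE link at line rate,
#    though switch peak counters are often averaged over a polling
#    interval, so real bursts could be higher than they report.
```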

Thanks!

Frequent Contributor

Sorry, but I just noticed this document out here and the questions posted.  There are not only 2 VMs in each VMFS datastore.  There are 2 VMs *per server*, on each of the 5 servers, per VMFS datastore.  This means there are 10 VMs per VMFS datastore in total, as each datastore is connected to all 5 servers in the ESX cluster.  Hopefully this helps clear things up if people are still confused on this point!
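Restating that layout as arithmetic, just to make the per-datastore load explicit:

```python
# Layout described above: 2 VMs per server, per VMFS datastore,
# across a 5-node ESX cluster that shares every datastore.
vms_per_server_per_datastore = 2
servers_in_cluster = 5

vms_per_vmfs_datastore = vms_per_server_per_datastore * servers_in_cluster
print(vms_per_vmfs_datastore)  # -> 10 VMs driving I/O against each VMFS datastore
```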
