Data Backup and Recovery
I just came across the attached "performance study" published by VMware (pulled it out of a thread on the toasters list). I am appalled at the limited scope of the test; I can't believe VMware would release such a useless piece of garbage and pass it off as a legitimate study of the protocol characteristics in a VMware environment. How many customer environments consist of a single physical host and two guest VMs?
I'm sure none of us believe that NFS is the appropriate choice in all cases, but I have a number of customers who have migrated their large guest OS datastores from iSCSI or FC to NFS and have experienced noticeable performance gains. So far I don't have any customers that have complained about the datastores running over NFS; I do, however, have a number of customers that have migrated from iSCSI to NFS due to severe scalability constraints within the VI software initiator on their VMFS datastores with many VMDKs.
I'd like to solicit feedback on the performance experiences people have observed with the various protocols in real-world environments...
Our experiences are very clear:
+ as we add more virtual machines to one VMFS datastore, performance goes down dramatically (regardless of whether we use FCP or iSCSI)
+ as we add more virtual machines to one NFS volume, the performance advantage over the VMFS situation gets bigger
In our situation (10 VMs in one VMFS datastore over iSCSI), the performance gain from moving to NFS is more than 30%.
Those are fantastic results. Do you also use FCP or iSCSI for application-type data (i.e. a single guest accessing the datastore/RDM)?
Today we use iSCSI at the ESX level for the root volumes of the VMs (specifically the VMware software initiator). For the application data we also use iSCSI, but with the Microsoft initiator inside the VM in combination with SnapDrive (for example, on our SQL servers).
Our play/test environment has already been migrated to NFS, and we will do the same for the production environment. In the first step we'll move the datastores holding the root volumes (currently iSCSI at the ESX kernel level) to NFS. The application data (SQL) will stay in the VMs on the Microsoft iSCSI initiator.
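For anyone trying the same move, the ESX-side piece is just mounting the export as an NFS datastore from the service console. Something along these lines should do it (the filer hostname, export path and datastore label below are made-up placeholders, so adjust for your environment):

  # mount the NFS export as a datastore, then list the NAS datastores to verify
  esxcfg-nas -a -o filer1 -s /vol/vm_root_ds vm_root_ds
  esxcfg-nas -l

The guest-side Microsoft iSCSI initiator and SnapDrive setup doesn't change at all.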
Yeah, the VMware study was way too simplistic; however, I believe they came to the correct conclusion...
We have over 35 ESX hosts and 950 VMs across two FAS3070s and one FAS3050, and in 18 months performance has never come up as an issue, not even once. We expect to be able to put 2,000-3,000 VMs on this system before hitting a storage limit.
I have an Iometer config that runs 25 tests on a VM with disks connected to the ESX host via FC, iSCSI and NFS datastores. The test compares one storage solution to another for the raw performance of one VM. I was surprised by the swing in results that I got...
Below is an example output... It compares one vendor to another, and the results show where one storage device is faster or slower than the other.
In the example below, the other vendor is 2x faster for 100% reads, but the FAS3070 is up to 5x faster for other workloads.
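If anyone wants to do the same kind of side-by-side, here's a rough sketch of how the ratios can be tabulated. It's only an illustration and assumes each run has been exported to a CSV with "test" and "iops" columns; those column names and the file names are my own placeholders, not Iometer defaults:

import csv

def load_iops(path):
    # Read one result export into {test name: IOPS}.
    # The "test" and "iops" column names are assumptions, not Iometer defaults.
    with open(path) as f:
        return {row["test"]: float(row["iops"]) for row in csv.DictReader(f)}

fas3070 = load_iops("fas3070_results.csv")        # placeholder file names
other = load_iops("other_vendor_results.csv")

for test in sorted(fas3070):
    if test in other and other[test] > 0:
        ratio = fas3070[test] / other[test]       # >1 means the FAS3070 was faster
        print("%-30s %5.2fx" % (test, ratio))

That makes it easy to see at a glance which workloads favour which box instead of staring at raw IOPS numbers.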
The #1 NetApp feature is that it can run FC, iSCSI and NFS from the same box, so protocol is less of an issue: you can serve all three protocols at the same time from the same device.
Dan, what mix of protocols are you using?
My main problem with the VMware study is that it did not test multiple/many VMs running across multiple physical hosts using the same datastore. I'd love to see an expanded test, similar to the VMware test, that shows a more real-world scenario. Any NTAP folks want to comment on when we will see such a study from NTAP and VMware?
The biggest advantage I see to leveraging NFS for larger consolidated OS datastores is being able to collapse the many datastores otherwise required to keep the VM-per-datastore ratio low enough to ensure high performance in larger VM environments, in addition to the advantages deduplication has in an NFS environment.
From a platform perspective, I'm a big fan of being able to leverage the right protocol for the job.
Thanks for the data!
How do you guys balance your VM datastores on NFS? E.g. lots of volumes balanced across multiple NICs?
Regards,
Trevor