2008-05-05 01:30 AM
We have recently moved some of our application servers to VMware ESX Server, using a NetApp filer for the datastore. I have an NFS license on the NetApp system, so I was wondering what we should use to access the datastore - iSCSI or NFS. The majority of people seem to use iSCSI, and even a VMware engineer suggested that, while NetApp says that NFS performance is as good as iSCSI, and in some cases better. I also knew that NetApp NFS access is very stable and performance-friendly, so I chose NFS to access the datastore. The additional advantage is that I do not need a SnapRestore or any other license to restore from backup. With NFS, all the snapshot copies are directly accessible under the .snapshot directory, so we have created scripts that take a snapshot every 15 mins for very critical servers. Any time I have an issue, I can take the latest 15-minute snapshot copy and make it the production copy in a matter of seconds.
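For anyone wanting to try the same approach, here is a minimal sketch of such a 15-minute snapshot rotation. The filer name, volume name, snapshot prefix, and script path are placeholders (not from this thread), and the `snap` commands assume Data ONTAP 7-mode reachable over ssh - adjust for your environment.

```shell
#!/bin/sh
# Sketch of a 15-minute snapshot rotation for a NetApp volume.
# "filer1" and "vm_datastore" are example names, not from this thread.
FILER=filer1
VOLUME=vm_datastore

# Map the minute of the hour (0-59) to one of four rotating slots,
# so only the last hour of 15-minute copies is kept on the volume.
snap_slot() {
    echo $(( $1 / 15 ))
}

take_snapshot() {
    minute=$(date +%M | sed 's/^0\([0-9]\)/\1/')   # strip leading zero
    snap="every15min.$(snap_slot "$minute")"
    # Reuse the slot name: drop the hour-old copy, then create a fresh one.
    ssh "$FILER" snap delete "$VOLUME" "$snap" 2>/dev/null
    ssh "$FILER" snap create "$VOLUME" "$snap"
}

# Run from cron every 15 minutes, e.g.:
#   */15 * * * * /usr/local/bin/snap15.sh
```

Restoring is then just copying a file back out of the volume's .snapshot directory - no extra license involved, as described above.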
Now my question is: has any customer done some real testing to prove that NFS access is at least equivalent to iSCSI in performance, if not better? In the time to come the load on our application servers will only increase, so I want to make sure that my decision was correct.
2008-05-05 04:06 AM
2008-08-15 07:05 AM
You mentioned that you were running VMware and using NetApp & NFS for storage. I've been doing some benchmarks over the last week or so, trying to pin down how we'll manage a migration from Microsoft Virtual Server to VMware ESXi.
I've been unable to replicate the stated result that NFS performs at least as well as iSCSI. I'm using a simple hard disk tuning test (http://www.hdtune.com/). With small block sizes (4k) I can get around 40 Mbps using iSCSI; using NFS on the same filer I can only get 10% of that throughput. The situation is even worse with larger block sizes (512k) - there iSCSI can hit 200-300 Mbps while NFS struggles to reach 14 Mbps.
I was wondering if you had any suggestions as to where I might look. I've searched around on the net and adjusted a "hidden" NFS option for the TCP receive window size (which had no effect), but other than that all I can find from NetApp is lots of white papers saying, look, it works...
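One low-level thing worth ruling out is that the receive window really was changed, and to a sensible value. The option name `options nfs.tcp.recvwindowsize` exists on 7-mode ONTAP, but the default and target values in this sketch are assumptions - verify them on your own filer before relying on this:

```shell
# Warn when the NFS TCP receive window (bytes) looks too small.
# Read the current value on a 7-mode filer with:
#   options nfs.tcp.recvwindowsize
check_recvwindow() {
    current=$1
    target=$2
    if [ "$current" -lt "$target" ]; then
        echo "increase: $current < $target"
    else
        echo "ok: $current"
    fi
}

# Example (26280 is a commonly cited 7-mode default - verify on your filer):
#   check_recvwindow 26280 65536
```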
2008-08-15 07:29 AM
You may want to check out a few of the following links:
Also, as an added suggestion, get in touch with your NetApp SE and see if they have any particular suggestions in order to make your testing a success.
I've seen cases where a few tweaks on the storage, host, or even switch side made a ten-fold or better difference in results.
Not knowing your switch configuration, it is difficult to validate whether that could also be playing a role in this, so I would advise trying to ensure that everything there looks solid.
Look forward to hearing of your results!
2008-08-15 09:45 AM
Performance numbers in a properly configured system should be within 10% of each other. We're running a NetApp 3140 with VMware, and performance is better on NFS than on iSCSI-connected disks. Our benchmarks used SQLIO and JetStress. There must be a serious misconfiguration if you are seeing numbers that poor. I would begin by looking at the physical switch configuration. Are you connecting at 100 Mb instead of 1000 Mb? Is one side half duplex and the other full duplex? Are you getting any CRC errors on the switch port?
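The speed/duplex check above can be partly automated. The sketch below parses `ethtool`-style output; the expected values (gigabit, full duplex) match the advice in this post, but the exact output format and the NIC name are assumptions about your environment:

```shell
# Flag a NIC that is not linked at gigabit full duplex.
# Feed it the output of e.g.:  ethtool eth0
check_link() {
    out=$1
    speed=$(printf '%s\n' "$out" | sed -n 's/.*Speed: *//p')
    duplex=$(printf '%s\n' "$out" | sed -n 's/.*Duplex: *//p')
    if [ "$speed" = "1000Mb/s" ] && [ "$duplex" = "Full" ]; then
        echo "ok: 1000Mb/s full duplex"
    else
        echo "warn: $speed $duplex duplex"
    fi
}
```

Run it on both ends of every link in the path - a mismatch on just one port is enough to produce numbers like those reported above.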
2008-09-10 06:30 PM
We're using NFS for our ESX datastores and it's fantastic - so much easier than iSCSI, and faster as well.
In the early stages I ran some basic performance tests to compare several different flavours of iSCSI (MS initiator in the VM, ESX VMFS, ESX RDM) and NFS. These were performed using IOMeter with some of the test configurations supplied in the SAN performance thread on the VMware forums.
The results showed that one of the flavours of iSCSI (ESX RDM, I think) was marginally faster than NFS in sequential reads, but for everything else - random read, random write, mixed, etc. - NFS was in all cases faster than iSCSI, sometimes by as much as 20%.
All this was performed from a Windows 2008 x64 VM running on an un-loaded dual quad-core Intel box against a FAS2050C.
Hope that helps.
2008-12-15 06:32 AM
We are just starting a migration from VMFS datastores over FCP on a competitor's storage system to an NFS datastore on NetApp.
Now there are concerns about performance, so we have to run tests before and after.
Could you provide me with some more links or IOMeter configs, i.e. what exactly you tested and how?
Thank you very much for your support.
2008-12-16 06:58 AM
There is another side to this.
Any LUN with a VMFS datastore on it can suffer from one problem: LUN-wide SCSI reservations. They occur when, for example, a VM is started or stopped. What this means is that at that particular moment no other ESX host can access the LUN. With a handful of ESX hosts and a fairly static environment (e.g. every VM always running) this does not necessarily impact performance significantly, yet in other scenarios it can pose a problem.
And guess what? NFS doesn't do any LUN-wide SCSI locks, as there is no LUN! (All locks are done at the file level.)
The issue described above cannot be measured by a simple disk throughput test - only something almost equal to a real environment, with all its characteristics (i.e. number of ESX hosts and VMs, usage patterns, etc.), can deliver meaningful results.
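One cheap way to check whether reservations are actually biting in a given environment is to count reservation-conflict messages in the vmkernel log. The log path and the exact message wording below are assumptions about ESX 3.x, not a definitive method:

```shell
# Count SCSI reservation conflict messages in vmkernel-style log text,
# case-insensitively, one per log line.
count_conflicts() {
    printf '%s\n' "$1" | grep -ci 'reservation conflict'
}

# On an ESX 3.x host the log typically lives at /var/log/vmkernel, e.g.:
#   count_conflicts "$(cat /var/log/vmkernel)"
```

A count that climbs during VM power operations or snapshot activity is a sign that the LUN-wide locking described above is in play.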