
Using NFS over iSCSI for VMware access

rajdeepsengupta

We have recently moved some of our application servers to VMware ESX Server. For the datastore we have used a NetApp filer. I have an NFS license on the NetApp system, so I was wondering what we should use to access the datastore, i.e. whether we should use iSCSI or NFS. It seems the majority of people use iSCSI, and even a VMware engineer suggested that, while NetApp says that NFS performance is as good as iSCSI, and in some cases better. I also knew that NetApp NFS access is very stable and performance-friendly, so I chose NFS to access the datastore. The additional advantage I get is that I do not need a SnapRestore or any other license to restore from backup: with NFS, all the snapshot copies are directly accessible under the .snapshot directory. We have created scripts which take a snapshot every 15 minutes for the most critical servers, so if I ever have an issue, I can take the snapshot copy from the last 15 minutes and make it the production copy in a matter of seconds.
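For anyone curious, here is a minimal sketch of what such a rotation script can look like. It assumes SSH access to the filer and the standard 7-mode snap commands; the filer hostname, volume name, and four-slot naming scheme are placeholders, not our actual setup:

```python
#!/usr/bin/env python3
"""Rotate short-lived snapshots on the volume behind an NFS datastore.

A minimal sketch, assuming SSH access to the filer and the 7-mode
'snap create/rename/delete' commands. The filer hostname, volume
name, and the 4-slot rotation scheme are placeholders."""
import subprocess

FILER = "filer1"          # hypothetical filer hostname
VOLUME = "vm_datastore"   # hypothetical volume backing the datastore
SLOTS = 4                 # keep one hour's worth of 15-minute snapshots

def rotate_snapshots():
    # Drop the oldest slot; ignore the error if it does not exist yet.
    subprocess.call(["ssh", FILER, f"snap delete {VOLUME} quarter.{SLOTS - 1}"])
    # Shift the remaining slots up by one: quarter.2 -> quarter.3, etc.
    for slot in range(SLOTS - 2, -1, -1):
        subprocess.call(
            ["ssh", FILER, f"snap rename {VOLUME} quarter.{slot} quarter.{slot + 1}"]
        )
    # Take the new snapshot into slot 0.
    subprocess.check_call(["ssh", FILER, f"snap create {VOLUME} quarter.0"])

if __name__ == "__main__":
    # Run from cron every 15 minutes, e.g.:
    # */15 * * * * /usr/local/bin/rotate_snapshots.py
    rotate_snapshots()
```

Each copy then shows up under .snapshot on the NFS mount, ready to be copied back over the live files when needed.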

Now my question is: has any customer done some real testing to prove that NFS access is at least equivalent to iSCSI performance, if not better? Because the load on our application servers will only increase in times to come, I want to make sure that my decision was correct.


karl_pottie

We have done testing and are using NFS in a production VMware environment.

See my presentation for details:

Unfortunately, the VMware EULA doesn't allow me to publish the actual benchmark results, but you can be sure that NFS performance is at least as good as iSCSI.

danpancamo

We have over 1,000 VMs on 35 ESX hosts on two FAS3070s... all over NFS, running in production since late 2006.

charlesgillanders

Hi,

You mentioned that you were running VMware and using NetApp & NFS for storage. I've been doing some benchmarks over the last week or so, trying to pin down how we'll manage a migration from Microsoft Virtual Server to VMware ESXi.

I've been unable to replicate the stated result that NFS performs at least as well as iSCSI. I'm using a simple hard disk tuning test (http://www.hdtune.com/): with small block sizes (4k) I can get around 40 Mbps using iSCSI, while using NFS on the same filer I can only get about 10% of that throughput. The situation is even worse with larger block sizes (512k), where iSCSI can hit 200-300 Mbps and NFS struggles to reach 14 Mbps.
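For what it's worth, the kind of fixed-block-size sequential read I am measuring can be approximated with a short script like the one below (a sketch of the methodology only, since HD Tune itself is a Windows GUI tool; the test file path and sizes are placeholders, and the file should be larger than RAM so the page cache does not inflate the numbers):

```python
#!/usr/bin/env python3
"""Rough fixed-block-size sequential read test against a file on the
datastore being measured. Methodology sketch only; path and sizes
are placeholders. Use a file larger than RAM (or drop caches between
runs) so the page cache does not inflate the result."""
import os
import time

TEST_FILE = "/mnt/datastore/testfile.bin"  # placeholder path on the mount under test
BLOCK_SIZE = 4 * 1024                      # 4k; try 512 * 1024 for the large-block case
TOTAL_BYTES = 256 * 1024 * 1024            # how much to read per run

fd = os.open(TEST_FILE, os.O_RDONLY)
start = time.time()
done = 0
while done < TOTAL_BYTES:
    chunk = os.read(fd, BLOCK_SIZE)
    if not chunk:                          # hit end of file early
        break
    done += len(chunk)
os.close(fd)

elapsed = time.time() - start
print(f"{done / elapsed / 1e6:.1f} MB/s at {BLOCK_SIZE}-byte blocks")
```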

I was wondering if you had any suggestions as to where I might look. I've searched around on the net and adjusted a "hidden" NFS option for the TCP receive window size (which had no effect), but other than that, all I can find from NetApp is lots of white papers saying "look, it works"...

Thanks.

Charles

kusek

Charles,

You may want to check out a few of the following links:

Performance Tuning Best Practices for ESX Server 3

How to improve irregular NFS performance between ESX Server and NetApp storage controller

NetApp and VMware Virtual Infrastructure 3 Storage Best Practices

Also, get in touch with your NetApp SE and see if they have any particular recommendations to make your testing a success.

I've seen cases where a few tweaks on the storage, host, or even switch side made a tenfold or better difference in results.

Not knowing your switch configuration, it is difficult to say whether it could also be playing a role here, so I would advise making sure that everything there looks solid.

Look forward to hearing of your results!

Christopher

philiparnason

Performance numbers in a properly configured system should be within 10% of each other. We're running a NetApp 3140 with VMware, and performance is better on NFS than on iSCSI-connected disks. Our benchmarks used SQLIO and JetStress. There must be a serious misconfiguration if you are seeing numbers that poor. I would begin by looking at the physical switch configuration. Are you connecting at 100 Mb instead of 1000 Mb? Is one side half duplex and the other full duplex? Are you getting any CRC errors on the switch port?
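As a quick illustration of those first two checks, on a Linux host you could script something like the following (a sketch only; it assumes ethtool is installed, and "eth0" is a placeholder interface name — CRC errors you would read from the switch's own port counters):

```python
#!/usr/bin/env python3
"""Sanity-check NIC speed and duplex, the kind of thing to verify
before blaming the protocol. A sketch assuming a Linux host with
ethtool on the PATH; 'eth0' is a placeholder interface name."""
import subprocess

IFACE = "eth0"

output = subprocess.check_output(["ethtool", IFACE], text=True)
# Parse "Speed: 1000Mb/s" / "Duplex: Full" style lines into a dict.
settings = dict(
    line.strip().split(": ", 1)
    for line in output.splitlines()
    if ": " in line
)

speed = settings.get("Speed", "unknown")
duplex = settings.get("Duplex", "unknown")
print(f"{IFACE}: speed={speed}, duplex={duplex}")

if speed != "1000Mb/s" or duplex != "Full":
    print("WARNING: expected 1000Mb/s full duplex on a storage network")
```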

Philip Arnason

loliverone

Could you share your filer configuration with me? How many volumes do you use, their sizes, the number of aggregates/RAID groups, and the disks within each aggregate?

strattonfinance

We're using NFS for our ESX datastores and it's fantastic - so much easier than iSCSI, and faster as well.

In the early stages I ran some basic performance tests comparing several different flavours of iSCSI (MS initiator in the VM, ESX VMFS, ESX RDM) and NFS. These were performed using IOMeter with some of the test configurations supplied in the SAN performance thread on the VMware forums.

The results showed that one of the flavours of iSCSI (ESX RDM, I think) was marginally faster than NFS in sequential read, but for everything else - random read, random write, mixed, etc. - NFS was in all cases faster than iSCSI, sometimes by as much as 20%.

All this was performed from a Windows 2008 x64 VM running on an unloaded dual quad-core Intel box against a FAS2050C.
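If you want to reproduce something similar without IOMeter, a crude random-I/O mix can be scripted along the lines below. The 4k / 70%-read / fully-random spec is only an example, not the exact access spec from the VMware forums thread, and the test file is assumed to exist at the stated size:

```python
#!/usr/bin/env python3
"""Crude IOMeter-style random I/O test: random offsets within a
pre-created test file, a configurable read/write mix, IOPS out.

Sketch only; the 4k / 70% read / fully random spec is an example,
not the exact access spec used in the original tests."""
import os
import random
import time

TEST_FILE = "testfile.bin"    # pre-created at FILE_SIZE on the datastore under test
FILE_SIZE = 1024**3           # 1 GB
BLOCK = 4096                  # 4k transfers
READ_PCT = 70                 # 70% reads / 30% writes
DURATION = 30                 # seconds per run

buf = os.urandom(BLOCK)
fd = os.open(TEST_FILE, os.O_RDWR)
ops = 0
deadline = time.time() + DURATION
while time.time() < deadline:
    offset = random.randrange(FILE_SIZE // BLOCK) * BLOCK
    os.lseek(fd, offset, os.SEEK_SET)
    if random.randrange(100) < READ_PCT:
        os.read(fd, BLOCK)
    else:
        os.write(fd, buf)
        os.fsync(fd)          # push the write through to storage
    ops += 1
os.close(fd)

print(f"{ops / DURATION:.0f} IOPS ({BLOCK}-byte blocks, {READ_PCT}% read)")
```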

Hope that helps.

mheimberg

Hi Mathew

We are just starting a migration from VMFS datastores over FCP on a competitor's storage system to NFS datastores on NetApp.

Now there are concerns about performance, so we have to run tests before and after.

Could you provide some more links, or the IOMeter configurations showing what exactly you tested and how?

Thank you very much for your support.

regards

Markus

radek_kubka

Markus,

There is another, less obvious side to this.

Any LUN with a VMFS datastore on it can suffer from one problem: LUN-wide SCSI reservations. They occur when, for example, a VM is started or stopped. What this means is that at that particular moment no other ESX host can access the LUN. With a handful of ESX hosts and a fairly static environment (e.g. every VM always running) this does not necessarily impact performance significantly, but in other scenarios it can pose a problem.

And guess what? NFS doesn't do any LUN-wide SCSI locks, as there is no LUN! (All locks are done at the file level.)

The issue described above cannot be measured by a simple disk throughput test - only something almost equal to a real environment, with all its characteristics (number of ESX hosts and VMs, usage patterns, etc.), can deliver meaningful results.

Regards,

Radek

mheimberg

So I am really glad that we are moving to NFS now...

Markus

igeeksystems

Think about cost from a NetApp perspective: why is iSCSI free while NFS is licensed? That's something to consider alongside performance and administration - the two protocols should be within 10-15% of each other in performance. So if you are implementing a low-budget storage solution, why not use iSCSI? We're using FAS3070-series systems for high-end enterprise applications, and it rocks with NFS, using SnapMirror for DR replication. Thanks for the benchmark details and the good guide on the simulator as well.

nicholas4704

Agreed. For low-end systems iSCSI is better (cheaper).

Actually, both approaches are good.

Unfortunately the NFS license costs money, and for a Tier 4 or Tier 5 system it is pretty expensive (especially if you use a cluster, and you usually do).

There are pros and cons to everything.

Some of the NFS provisioning benefits can be obtained through LUN provisioning and dedup in the iSCSI world. Easy restores from snapshots (NFS) vs. LUN cloning (iSCSI).

iSCSI has multipathing; NFS doesn't.

You store data in a vmdk on NFS, so you'll need VMware to get at your data.

IMHO VMware as a company likes iSCSI better, and I heard "iSCSI is better" from a VMware guy.

The NetApp NFS stack is really good, so I think if you have a large number of VMs you can go for it.

P.S.

VM snapshots did not work on NFS, but I think that was fixed recently in VMware.

igeeksystems

You're absolutely right: both protocols have pros and cons, and it really depends on how large your environment is and what kinds of features you want. From reading many blogs, iSCSI is not a bad solution at all, especially on NetApp gear, as we all know. As a matter of fact, my dev/test VMware cluster uses iSCSI, and it runs great and saves some money.

mheimberg

On some points I cannot follow you:

>Some of the NFS provisioning benefits can be obtained through LUN provisioning and dedup in the iSCSI world.

With NFS I get my space back transparently and immediately; the view from the storage side is the same as from ESX, isn't it?

>Easy restores from snapshots (NFS) vs. LUN cloning (iSCSI).

The point is that I can enter the snapshot directory directly and copy out the files I need, e.g. to compare two different *.vmx files. With iSCSI one must clone the LUN, map it, and rescan on ESX to get the new datastore... so NFS is much more admin-friendly. A trivial example of such a restore is sketched below.
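For example, getting a single file back out of a snapshot over the NFS mount is nothing more than a copy (the mount point, snapshot name, and VM path here are placeholders):

```python
#!/usr/bin/env python3
"""Copy a file straight out of a NetApp snapshot over the NFS mount.

A minimal sketch: the datastore mount point, snapshot name, and VM
path are placeholders for whatever your environment uses."""
import shutil

DATASTORE = "/mnt/vm_datastore"    # NFS mount of the datastore volume
SNAPSHOT = "hourly.0"              # snapshot to restore from
VM_FILE = "myvm/myvm.vmx"          # file to pull back

src = f"{DATASTORE}/.snapshot/{SNAPSHOT}/{VM_FILE}"
dst = f"{DATASTORE}/{VM_FILE}.from-snapshot"  # copy in next to the live file

shutil.copy2(src, dst)             # .snapshot is read-only, so copy outwards
print(f"restored {src} -> {dst}")
```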

>iSCSI has multipathing; NFS doesn't.

Right - use IP aliases for different datastores.

>You store data in a vmdk on NFS, so you'll need VMware to get at your data.

Huh? And with an iSCSI LUN datastore you don't?

In fact there are more ways to get at the data inside a vmdk when it is stored on NFS: you can mount the volume from a third machine (I am not saying everyone should do that all the time - don't get me wrong) and copy the needed files out of the active file system or a snapshot; and there are tools to mount and interpret vmdk files, even with NTFS inside.

With iSCSI, in contrast, there is at least one more step: you need a tool to interpret VMFS before you can get at your vmdk.

So my favourite is still NFS, because it is so much simpler to handle and does not lack performance in the small to medium businesses where I have seen and deployed it.

regards

Markus

amiller_1

For a pretty exhaustive list of NFS & VMware benefits, see this post.

http://viroptics.pancamo.com/2007/11/why-vmware-over-netapp-nfs.html

To me the top ones are....

  • easier administration (you can have bigger NFS datastores due to the absence of locking issues and the ease of FlexVol grow/shrink)
  • deduplication integration - this is HUGE. You can use dedup with FC or iSCSI too, but the whole thin provisioning/fractional reservation/etc. story makes it a pain, whereas with NFS the freed-up space just shows up in VMware (see the sketch after this list)
  • snapshot integration with or without SMVI (you get crash-consistent vmdk snapshots without SMVI, and even better ones with SMVI - it's incredibly cool to be able to roll a VM forwards and backwards without VMware-level snapshots and their resulting overhead)
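For reference, turning dedup on for the volume behind an NFS datastore is a one-time step on the filer. A minimal sketch, assuming SSH access and the 7-mode sis commands; the filer and volume names are hypothetical:

```python
#!/usr/bin/env python3
"""Enable and kick off deduplication on the volume behind an NFS
datastore. A sketch assuming SSH access to the filer and the 7-mode
'sis' commands; the filer and volume names are hypothetical."""
import subprocess

FILER = "filer1"
VOLUME = "/vol/vm_datastore"

for command in (
    f"sis on {VOLUME}",        # enable dedup on the volume
    f"sis start -s {VOLUME}",  # scan existing blocks, not just new writes
    f"sis status {VOLUME}",    # check progress afterwards
):
    print(subprocess.check_output(["ssh", FILER, command], text=True))
```

The space that dedup frees then simply appears as free space on the NFS datastore, with no LUN-level bookkeeping to clean up.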

nicholas4704

Markus, Andrew, I'm neither on the iSCSI side nor the NFS side. I'm on the NetApp side, and I understand it is not perfect.

Let's say iSCSI is more common on the systems I install.

But iSCSI is not so bad, especially on NetApp. NetApp technologies work for iSCSI.

>Some of the NFS provisioning benefits can be obtained through LUN provisioning and dedup in the iSCSI world.

>With NFS I get my space back transparently and immediately; the view from the storage side is the same as from ESX, isn't it?

++++ Agreed. The storage view is not as transparent, but you get nearly the same result!

The worst thing with iSCSI is that a LUN never decreases in size. That's really bad. But does a vmdk decrease in size if I delete some data from the host OS?

>Easy restores from snapshots (NFS) vs. LUN cloning (iSCSI).

>The point is that I can enter the snapshot directory directly and copy out the files I need, e.g. to compare two different *.vmx files. With iSCSI one must clone the LUN, map it, and rescan on ESX to get the new datastore... so NFS is much more admin-friendly.

++++ Yes, but usually a vmdk is pretty large in itself (gigabytes), so that copying takes plenty of time. LUN cloning can be done in seconds.

A FlexClone license (or perhaps SnapRestore single-file restore) solves the issue - you need only export the clone - but, again, it costs money.

>You store data in a vmdk on NFS, so you'll need VMware to get at your data.

>Huh? And with an iSCSI LUN datastore you don't?

>In fact there are more ways to get at the data inside a vmdk when it is stored on NFS: you can mount the volume from a third machine, copy the needed files out of the active file system or a snapshot; and there are tools to mount and interpret vmdk files, even with NTFS inside.

>With iSCSI, in contrast, there is at least one more step: you need a tool to interpret VMFS before you can get at your vmdk.

++++ I mean keeping data on NTFS LUNs, not VMFS, so I can mount them on any other Windows machine and get the data running immediately.

Andrew, thank you for a good link.

But I think you still need SMVI to get consistent VMs, and SMVI just hangs with NFS (as I mentioned, this should be fixed in a recent VMware update).

Increasing a datastore can sometimes be painful with iSCSI, i.e. when the extent size is smaller than the desired vmdk size. In other situations it works fine.

So the main disadvantage of NFS is price... The NFS license is one of the most expensive, and in a Windows environment it will be used for VMware only.

That is not a problem for big businesses, and with a large number of VMs I would go for NFS. With low-end systems and mostly Windows hosts, I'd go for iSCSI.

mheimberg

Hi Nikolajs

>But iSCSI is not so bad, especially on NetApp. NetApp technologies work for iSCSI.

Of course it does - NetApp is, after all, one of the inventors.

>I mean keeping data on NTFS LUNs, not VMFS, so I can mount them on any other Windows machine and get the data running immediately.

I still don't get the point, but I am sure you are doing a good job.

>So the main disadvantage of NFS is price...

That's for sure - sometimes I don't understand the marketing/finance guys at NetApp...

Markus

nicholas4704

>I mean keeping data on NTFS LUNs, not VMFS, so I can mount them on any other Windows machine and get the data running immediately.

>I still don't get the point, but I am sure you are doing a good job.

Hi Markus!

How do you supply storage for application data to your VMs?

I mean, where is, for example, your MS SQL database? Is it in another vmdk on NFS, supplied as a virtual disk to the host OS?

Your opinion is important to me, as my VMware+NFS experience is not that extensive yet. What is the best practice for data disks?

Actually, we could use a mixed environment: NFS for the host OSes and iSCSI RDMs for data.

I usually use RDMs or host-OS initiators to store data; that way I have NTFS (not VMFS) LUNs and I can access the data from anywhere if I need to - for example, reconfigure my laptop as a SQL server and connect the database via iSCSI.

Nikolajs

mheimberg

Hi Nikolajs

>How do you supply storage for application data to your VMs?

First of all, a little background info: we supply NetApp and VMware ESX to small and medium businesses - some hundreds of users and mailboxes, and just a handful of SQL databases of a few dozen GB - so not the very big stuff.

In those environments we have had very good experiences with this setup:

- ESX datastores connected via NFS, for the sake of simplicity

- virtualized SQL or Exchange servers use Microsoft's software iSCSI initiator and connect through the vSwitch to dedicated LUNs on the NetApp (see the sketch at the end of this post)

- we avoid the use of RDMs, again for simpler manageability and greater flexibility

So we use the ESX datastore only to store the "system disk"; everything else is on dedicated volumes and LUNs.

At first sight this may sound a bit weird, but it has some advantages:

- once you get the principle, it is very simple to build and manage

- use of the SnapManagers (OK: also possible with RDM)

- transparent use of all the components: a volume with a SQL LUN is attached to the SQL server, with no other components in between, like a "mapping disk" for RDM

- the LUNs can easily be connected to another server when needed, or - with FlexClone - used for something else (migration, test, development, etc.)

Again: this is our best practice, established in small to midsize environments (btw: I am Swiss, and in our small country a company with 500 employees is already a "medium" company - just to give you a sense of scale)
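As an illustration of the guest-initiator step mentioned above, connecting a Windows VM to its dedicated LUN can be scripted with the built-in iscsicli tool. This is a sketch only; the portal address and target IQN are placeholders for your filer's actual values:

```python
#!/usr/bin/env python3
"""Connect a Windows guest to its dedicated data LUN using the
Microsoft software iSCSI initiator's command-line tool.

A sketch of the guest-initiator step only; the portal address and
target IQN are placeholders for your filer's actual values."""
import subprocess

PORTAL = "192.168.10.20"                        # filer iSCSI interface (placeholder)
TARGET = "iqn.1992-08.com.netapp:sn.12345678"   # target IQN (placeholder)

# Register the filer as a target portal, then log in to the target.
subprocess.check_call(["iscsicli", "QAddTargetPortal", PORTAL])
subprocess.check_call(["iscsicli", "QLoginTarget", TARGET])

# After login the LUN appears as a local disk: bring it online, format
# it NTFS, and point SQL/Exchange at it. 'iscsicli PersistentLoginTarget'
# can make the session survive reboots.
```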

Regards

Markus

radek_kubka
>First of all, a little background info: we supply NetApp and VMware ESX to small and medium businesses - some hundreds of users and mailboxes, and just a handful of SQL databases of a few dozen GB - so not the very big stuff.

>In those environments we have had very good experiences with this setup:

>- ESX datastores connected via NFS, for the sake of simplicity

>- virtualized SQL or Exchange servers use Microsoft's software iSCSI initiator and connect through the vSwitch to dedicated LUNs on the NetApp

>- we avoid the use of RDMs, again for simpler manageability and greater flexibility

Hi Markus,

I assume from what you wrote that you deal with the FAS2050A quite frequently (it is aimed at SMBs). I have a constant design struggle with this box and was wondering what your take on it is.

My concerns are as follows:

- the FAS2050A has 4x 1Gbit IP ports and 4x 4Gbit FC ports on board, plus a couple of expansion slots

- if we use NFS and/or iSCSI for storage connectivity and CIFS for flat files, these two types of traffic should be separated, so 4 IP ports (the minimum for fully redundant connectivity) are not enough

- if we add a couple of dual-port NICs to cure the problem above, no expansion slots are left, and:

* adding external disk shelves (up front or in the future) means consuming onboard FC ports for back-end cabling = no option for FC host connectivity if it is required at some point

* even if we are not bothered by the lack of front-end FC ports, no multipath (back-end) cabling is possible when mixing SATA and FC shelves (too few back-end ports)

Any thoughts? Implementation examples? Ingenious workarounds? 😉

One fairly obvious approach would be not to use expansion NICs and to install a back-end FC HBA instead, but that would mean mixing all IP traffic on the same physical ports.

One more thought: even if the hosts are not using FC, some FC ports come in handy for connecting a tape library for NDMP backup.

Regards,

Radek
