VMware Solutions Discussions

Why NFS?

amiller_1
12,848 Views

So, there's definitely a lot of discussion around using NFS in virtualization scenarios. Although about 1.5 years old, this blog post is still very relevant.

http://viroptics.pancamo.com/2007/11/why-vmware-over-netapp-nfs.html

Given some recent discussion with a moderator (not sure if I should name names?), I wanted to highlight that post because I've found it pretty useful -- useful enough that I wrote up my own summary to keep handy whenever I get into NFS discussions with customers. Here's that summary (full credit goes to the post above, but I thought I'd include it here in case it's helpful for anyone).

~~~~~~~~~~~~

Ranking these in order of importance....

  • Deduplication - possible to get dedupe space savings with LUNs, but it's MUCH more complicated (you have to mess with fractional reserve, LUN thin provisioning, etc. -- it's possible to get caught overprovisioning and have real issues)
  • VMware Datastore sizing -- easy datastore growth (possible with VMFS) and shrinking (not possible with VMFS) -- see the sketch after this list
  • Larger datastores - no need to keep datastores small like with VMFS - up to 16 TB
  • Snapshots - can retrieve individual vmdk's from snapshots and/or mount vmdk's from snapshots for single file restore
  • SMVI - main benefit is ability to do faster VM restores (uses SnapRestore rather than LUN clone so can instantly restore a single VM to any previous snapshot)
  • VMDK Thin Provisioning
  • Ease of addition - somewhat easier than LUNs/VMFS
  • VMFS/RDMs - no need to deal with them
  • Single-file FlexClone (future feature) - can clone a vmdk instantly for fast provisioning
  • No single disk I/O queue as with iSCSI/FC, so performance limitations are governed purely by pipe size and disk array size.
  • Faster failover to SnapMirror remote copies (less steps plus faster steps) - no need to do LUN resignaturing
  • ESX server I/O is small block and extremely random meaning that bandwidth is less important (i.e. GigE works well).
  • Can dump individual VM's via NDMP
  • No FC zoning, switch cost, HBA's, compatibility matrices, or LUN IDs
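
To make the first two bullets concrete, here's a minimal sketch of the Data ONTAP 7-mode commands involved (the volume name "vmds" is just an example, and exact options may vary by release):

vol size vmds +200g        # grow the FlexVol backing the NFS datastore; ESX sees the extra space right away
sis on /vol/vmds           # enable deduplication on the datastore volume
sis start -s /vol/vmds     # dedupe existing data (-s scans blocks already on disk)
df -s /vol/vmds            # report space saved by dedupe
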
21 REPLIES

amiller_1
12,251 Views

And....I meant to say that I'd be very curious if anyone has more to add to that list.

forgette
12,647 Views

Hi Andrew,

The File Level FlexClone feature you mentioned is available as of Data ONTAP 7.3.1 (which is GA - generally available).  The Rapid Cloning Utility version 2.0 (RCU 2.0) leverages this feature and provides the functionality directly from the VI Client (there are other features as well).  Not only is file-level FlexClone extremely fast, but the resulting files don't take up additional space.  RCU 2.0 is a free tool (due in April) that will require FlexClone, NFS, and Data ONTAP 7.3.1P2.

-Eric

amiller_1
12,648 Views

Ah yes -- I saw the RCU 2.0 blog posts last week and should have revised that point. Looking forward to using it (once we have 7.3.2 (i.e. the GD release, hopefully) and vSphere 4.0, that is).

If nothing else, it's surprising that Dan Pancamo (viroptics) had heard about that back in 2007.

Thanks again for your work on mbralign...have been using it quite successfully.

fcocquyt
9,680 Views

One for Eric:

I was excited to try RCU after upgrading to ONTAP 7.3.1.1 but ran into issues with making RCU ssh into the vFilers (aka MultiStore)

I opened a case on this and it was closed with the comment "Netapp does not support RCU" !?

So my hope is RCU will support vfilers - they allow SSH, but there is no built in interactive shell.

My goal is to enable file level fast cloning via RCU.  Currently cloning is 1Gb/minute...

thanks

adamfox
9,680 Views

The tech support person you spoke to is mistaken.  If you want to push this, you can give them this internal link (it's only valid within NetApp); when they read it, it should not only contradict that statement but also instruct the GSC how to handle such cases.

http://wikid.netapp.com/w/NGS_NPI/RCU/RCU_2.0#Engineering_Support_and_Escalation_Info

Hope this helps.

fletch2007
9,680 Views

Hey thanks Adam - I re-opened the case (#2000881634) citing the internal doc you provided - will see how it goes now!


fletch2007
9,673 Views

They still say it's not supported - see the new engineer's reply and my response below:

My response:

OK, I don't want to use RCU if I cannot get support.
Regardless - my goal remains the same: I want to use file-level FlexClone in ONTAP 7.3.1.1 to clone the file /vol/vms/vm1/vm1.vmdk to /vol/vms/vm2/vm1.vmdk (I can rename the file to vm2.vmdk later)

How do I accomplish this via the ONTAP command line?

thanks


On 7/1/09 6:40 PM, <neweng@netapp.com> wrote:

Hi, Fletcher,

I checked with our engineers on your questions; they insisted that the no-support status of this utility remains in effect.

The only publicly accessible support documentation is available on the RCU 2.0.1 description page, where you may review the Release Notes, Best Practices, and the Installation and Administration Guide.

http://now.netapp.com/NOW/download/software/rapid_cloning/2.0.1/

You may consult with your NetApp Sales Engineer to learn whether there are alternatives available.

I apologize for your inconvenience.

fletch2007
8,095 Views

NetApp support is researching how to use file-level FlexCloning on the command line - I found this great explanation on Scott Lowe's blog from Dec '08:

http://blog.scottlowe.org/2008/12/03/2031-enhancements-to-netapp-cloning-technology/

Sounds like it's not supported in a vFiler context.

At least I know how to run it on the command line now - in my testing it does not seem rapid at all, copying individual blocks 1% at a time.

Seems to defeat the purpose of using this over traditional VM cloning via VMWare VIC
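For anyone else searching: here's roughly what the filer-console invocation looks like, based on that post (the paths are just my example, and the syntax should be double-checked against your Data ONTAP 7.3.1 docs):

clone start /vol/vms/vm1/vm1.vmdk /vol/vms/vm2/vm1.vmdk     # file-level FlexClone of the source vmdk into the target path
clone status                                                # check on running clone operations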

thanks

forgette
8,095 Views

Sorry for the delayed response, I was away from the keyboard for an extended period of time (on vacation).

Hopefully this will clear up some of the confusion...

Support answers:

Adam is correct, RCU 2.x (and higher) is a supported product.

File level FlexClone does not work in a vFiler context (even via the clone command).  Therefore, RCU 2.0 is not supported in a vFiler context.

RCU 1.0 (which is only available from the ToolChest) is only supported via PVR.

Protocol answers:

RCU 2.x relies exclusively on API calls, so SSH isn't required.

Why RCU 2.x?

The RCU was developed so NetApp/VMware customers could take advantage of the speed and capacity savings without having to understand the details of the storage cloning technology.  The RCU adds an application (ESX) specific layer of logic that chooses the most efficient manner of storage object cloning.  The RCU may even change the method of cloning "mid stream".  If you are seeing the clone command duplicate blocks, it is likely that the volume is already 'sis enabled' and the vmdk file is already heavily deduplicated.  Clone a VM into the datastore using the native vCenter clone process, then use that as the source.  Your results should be quite different.
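If you want to check which situation you're in before testing, something like this on the filer should tell you (the volume name is just the example from earlier in the thread):

sis status /vol/vm2        # is dedupe enabled, and has it run, on the source volume?
clone status               # while a clone operation is running, check its progress here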

Hope this helps,
-Eric

fletch2007
8,095 Views

Eric, thanks for the reply - yes the volume is sis (dedup) enabled.

df -sh
Filesystem           used       saved      %saved
/vol/vm2/            1333GB     1852GB     58%

Is it the case we can not enjoy both dedup savings and rapid file level cloning simultaneously?

We are looking at implementing VDI in the future so it would be very nice to have both.

A few follow up questions:

1) If RCU is supported, why have the two cases I opened been closed and stalled with "RCU is not supported"?

On the current case (2000881634), the engineer cannot find documentation on how to perform the file-level clone from the command line.

I found it in Scott Lowe's blog post from Dec 2008: http://blog.scottlowe.org/2008/12/03/2031-enhancements-to-netapp-cloning-technology/

Is there a technical doc # that fully describes the RCU and file level cloning?

2) If RCU is based on the shared-block model of dedup, why are blocks being duplicated at all?

Shouldn't the new clone file just consist of pointers to the blocks of the old file?

3) In my test I was able to run the file level clone command from vfiler0 on a volume that is actually NFS exported from a different vfiler.

Could RCU be made to support vfilers in the future?

thanks for clarifying

forgette
8,095 Views

"Is it the case we can not enjoy both dedup savings and rapid file level cloning simultaneously?"

You can absolutely enjoy the benefits of dedup and rapid file-level cloning simultaneously.  In fact, they are both based on the same underlying block-sharing technology.  The file-level cloning technology starts you out consuming very little space, and dedupe keeps you that way.  There are limits on any technology; what RCU does is navigate those limitations for you, choosing the best method of storage cloning/copying for each situation.

"If RCU is supported, why have the two cases I opened been closed and stalled with "RCU is not supported"?"

I think there may have been some confusion there.  Currently Multistore isn't supported with RCU 2.0, however RCU 2.0 is a fully supported product.  I've forwarded this discussion to the appropriate folks to make sure you get the right answer next time. 

"Is there a technical doc # that fully describes the RCU and file level cloning?"

There is a Technical Report (TR-3742) that covers the use of FlexClone to create clones of files and LUNs.  This report is available under NDA.  I think you need to contact your NetApp System Engineer for access.

"If RCU is based of the shared block model of dedup, why are blocks being duplicated at all ?

Shouldn't the new clone file just consist of pointers to the blocks of the old file?"

Yes, you are correct, that is the idea.  As I mentioned above, there are limits on any technology.  In certain situations, the controller needs to duplicate a data block.  The details are covered in the TR.  The good news is that this happens less often in later releases of Data ONTAP.

"In my test I was able to run the file level clone command from vfiler0 on a volume that is actually NFS exported from a different vfiler."

That is actually a pretty good workaround for now.  RCU isn't implemented that way currently because it breaks the Multistore security model (ie: I have to have access to vfiler0 to make a change in vfilerX).

"Could RCU be made to support vfilers in the future?""

Multistore (vfiler) support for the RCU is on the roadmap.  IMHO, Multistore is a (very cool) unique technology and a perfect match for virtualized environments. Please let your NetApp System Engineer or Sales Representative know if Multistore support for RCU is important to you.

konkle
12,640 Views

When I think about virtualization on NFS I jump right on the storage efficiency and density bandwagon.  Don't get me wrong, NetApp's blocks story with virtualization is awesome too.  However, when someone asks me (Technical Evangelist for NAS <-- 🙂) how many VMs can be stored on a single NFS mount point, the answer is astonishing -- how many does your hypervisor vendor support?

At this point it's about 256 <-- look familiar?  2^8; good ole 8 bits there.  So now you have 256 virtual machines running off the same export.  Now imagine if those are virtual desktops; that's quite a bit of density vs. blocks.  SCSI limits you to about 256 LUNs, which is on par; but why do you need separate LUNs?  You need separate LUNs if you want to move a VM from one physical server to another - in that case the LUN must move, so having a lot of VMs per LUN can constrict your virtual data center management.

Beyond density I get into the futures of NFS, which include NFSv4.  The big deal with NFSv4 is delegations and lock management.  Not that NFSv3 has any issues, but NFSv4 read/write file delegations would give hypervisor servers more local control over the data, and file locking is more resolute.  Further, NFSv4 supports the notion of referrals, which would allow an NFS server node to redirect an NFS client to a less burdened NFS node in a clustered storage environment, such as Data ONTAP GX.  All for naught at the moment, though: the primary NFS version in use is version 3, most hypervisor servers don't support NFSv4, and Data ONTAP GX doesn't support NFSv4 today 😞

However, in the not-too-distant future hypervisor vendors may choose to support NFSv4.  Data ONTAP 7.3+ supports it, and Data ONTAP 8 cluster-mode storage will support NFSv4 - combining it with hypervisor support could be truly beneficial.  Then, just over the horizon is parallel NFS (NFSv4.1), which gives rise to more predictable performance at the NFS client (the hypervisor server).  The predictability arises from the support for parallel data servers; since those data servers can themselves be clustered, you can create NFS volumes across at least two nodes that are also clustered, giving you four systems to support parallel reads and writes.  If one node needs to be upgraded, at least three nodes would still be online servicing requests (for that volume).  Finally, during all this parallel data management, if the workload is primarily read then you can introduce FlexCache into the mix.  It supports data center scale-out and remote-office/branch-office accessibility by NFSv3 clients today.  In the world of ONTAP 8, FlexCache will get even better - shhh 🙂

So if you start using NFS for all the reasons above, you'll be ready to take advantage of the system enhancements coming with NFSv4 and NFSv4.1.

Perhaps you can encourage your hypervisor vendor to do the same 🙂

~~Joshua

amiller_1
12,647 Views

Ah...fantastic info (always nice to have future-looking stuff for customers....makes me look knowledgeable, doncha know? ).

konkle
12,647 Views

No worries; what's even more awesome about NFSv4 is the pseudo-filesystem.

Today you can get a 6080, rack and stack 1TB SATA drives on it to the tune of 1PB+, drop in some PAM/Flash, and turn on NFSv4.  Mount the filer at / and get access to the entire 1PB of data through a single mount point - badda bing badda boom 🙂
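
As a rough illustration of that single mount point (assuming a Linux client with NFSv4 support; the filer name is made up):

mount -t nfs4 bigfiler:/ /mnt/bigfiler     # mount the NFSv4 pseudo-filesystem root
ls /mnt/bigfiler                           # every exported volume appears under the one mount point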

The downside is that it's available as volume mount points of 16TB each :-(.  So roughly 100TB is available as 7 directories off the root of a filer:

/--vol1
/--vol2
/--vol3
/--vol4
/--vol5
/--vol6
/--vol7

Then you can write across all 7 volumes/directories, up to 112TB (less filesystem format overhead, plus dedupe and snapshot savings) 🙂

Downside is you need 70 directories for 1PB of data :-(.  In a future version of Data ONTAP you could get that with 10 directories, perhaps - maybe 🙂  yay

JK

HendersonD
12,647 Views

I have been using NFS for my VMware datastores for about 6 months. It has worked out great, and many of the advantages you cite are the reasons we switched from LUNs to NFS.

The only part we have been struggling with is backup. We use SnapMirror to replicate our primary NFS datastore to another filer across campus. We then use Commvault to do an NDMP dump to our disk based backup solution. Commvault sees the vmdk files as monolithic files so my fulls and incrementals are the same size.

We have been looking into using VCB but are a little reluctant to jump in that direction just to get file-level backup so my incrementals are truly incremental and small. These two articles are more like what I am looking for.

This article talks about mounting the NFS volume and doing backups that way

http://storagefoo.blogspot.com/2007/09/vmware-on-nfs-backup-tricks.html

This article is even more interesting and integrates Tivoli with SnapMirror

http://vmwaretips.com/wp/2009/02/19/goodbye-backup-agentsgoodbye-vcb/

The question I have: is NetApp looking to do the same type of integration it did with Tivoli with other backup vendors like Commvault? Are there any other choices I should be looking at for file-level backup when I have an NFS datastore?

amiller_1
12,647 Views

Hmm....I might be missing something obvious.....but what about using SnapVault at the remote end? It would still be on disk and you could then just keep more snapshots at the remote end.
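A minimal sketch of what that could look like on the secondary filer, assuming a qtree on the primary holds the VMs (names are made up; check the SnapVault docs for the exact schedule syntax):

snapvault start -S primaryfiler:/vol/vmds/vm_qtree /vol/vmbackup/vm_qtree     # initial baseline transfer
snapvault snap sched -x vmbackup sv_nightly 60@0                              # keep 60 nightly SnapVault snapshots on the secondary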

lrhvidsten
12,647 Views

In regards to:

"VMware Datastore sizing -- easy datastore growth (possible with VMFS) and shrinking (not possible with VMFS)"

I'm curious: how are you going about shrinking the VMDKs? While I appreciate the native thinness of NFS for new VMs, one loses that benefit when conducting Storage VMotions, like we had to when migrating off of DAS onto our new filers.

It would be nice if there were some easy tool like the space reclaimer in SnapDrive.

So far, the only solutions I've come up with are to try out the mbralign tool (which will require downtime) or to dedupe the volume. However, we're not totally ready to dedupe our Fibre Channel disks and are only testing it on SATA. I was told by someone at NetApp that the mbralign tool might do the trick, but I haven't had a chance to test it yet...

I did come across this page, but am unsure if it's a solution or not:

http://www.rtfm-ed.co.uk/?p=40

amiller_1
9,680 Views

So...that point refers to datastore shrinking actually. That is, you can't shrink a VMFS datastore (provided either via iSCSI or FC) but can shrink an NFS datastore (just shrink the FlexVol on the NetApp).

And....if you use an NFS datastore, where dedup savings show up immediately, you get some benefit from the vmdk's being mostly empty, since the identical zeros inside get deduped.....hopefully clear as mud.
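For what it's worth, the shrink itself is just a volume resize on the filer (the volume name is an example, and obviously you shouldn't shrink below the space actually in use):

df -h /vol/vmds        # check how much of the volume is really used first
vol size vmds -200g    # shrink the FlexVol; the NFS datastore in ESX shrinks along with it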

forgette
9,680 Views

The mbralign tool will take a thick-type vmdk and make it thin (when --sparse is specified), however this isn't required to reclaim space.  Using the FAS dedupe feature will reclaim much more space and will do so while the VMs are running.  With regard to the rtfm-ed.co.uk post, FAS dedupe will get you back much more space without the pain of having to do an export/import.  I have had customers leverage the SDelete tool (link in Mike's post) to increase the effects of FAS dedupe.  The tool simply writes zeros to previously allocated space within the guest filesystem.  You can imagine how nicely a bunch of zeros dedupes.  😉  When this is done across the whole datastore, the dedupe savings can be significant.  Obviously the amount of savings will depend on the age of the guest filesystems and how much data has been added and removed over time.  The other great feature of FAS dedupe is that it works on any vmdk type (thin and thick) and on both VMFS and NFS datastores.
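A rough sketch of that workflow, with names and paths as examples only (note that SDelete's flag for zeroing free space differs between versions, so check yours):

sdelete -z c:                                              # inside the Windows guest: zero free space (older SDelete versions use -c)
mbralign --sparse /vmfs/volumes/nfs_ds/vm1/vm1-flat.vmdk   # on the ESX host, VM powered off: thin the vmdk while fixing alignment
sis start -s /vol/vmds                                     # on the filer: dedupe the datastore volume, including all those zeros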

Hope this helps.

-Eric
