Space Reclamation On NFS Export For VMware Datastore

Hello, I apologize in advance if this is an obvious question, but we are just getting started with NetApp.

We are testing NetApp to work with our VMware infrastructure.  Currently we have connected an NFS export to an ESXi 4.1 host.

We moved a VM from a VMFS datastore to the NFS datastore and it seemed to go just fine.  The VM was just a 40 GB C: drive (40 GB .vmdk, thick lazy-zeroed), and when we moved it to the NFS datastore it was automatically converted to a thin-provisioned VMDK.  Both VMware and the NetApp showed only 14 GB used on the datastore, since most of the 40 GB is free space.  We then ran a dedupe job on the NetApp and it went down to 11 GB. Sweet!

Next we added about 15 GB of data to the VM and noticed that both VMware and the NetApp reported 26 GB of data on the datastore (11 GB existing + 15 GB new).

But after we deleted the 15 GB of data in the VM, the space did not appear to be reclaimed.  Both the VMware and NetApp sides show that 26 GB is still in use.

We tried running a dedupe job again, but that did not seem to reclaim the space.

So, my question: is there a process to free up space on the datastore after you delete data inside a VM hosted on that datastore?

It seems the thin provisioning is a one-way street...

Re: Space Reclamation On NFS Export For VMware Datastore


You have layers of dependency here:

a) The VM with its operating system

b) VMware vSphere

c) NetApp

When you added the data in the VM and deduped, you had your "starting position".

Deleting data in the VM didn't actually remove the data; it most probably just changed a number of pointers in the filesystem structure within the VM.

So if you try to dedupe it again, you will end up with about the same amount of space taken on the storage, because the data within the VM is essentially the same as before the delete.

If you want to try it, wipe the unused space within the VM with any program that will write zeroes to the unused blocks.
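To make the idea concrete, here is a minimal sketch of that zero-fill step on a Linux guest (on Windows, sdelete -z does the same job). A scratch directory stands in for the guest filesystem, and 16 MiB stands in for "all free space":

```shell
# Fill free space with zeroes so dedupe can reclaim it, then delete the
# filler file. Run inside the guest OS. (Illustrative: a scratch
# directory and 16 MiB stand in for the real filesystem and free space.)
TARGET=$(mktemp -d)                                    # stand-in for the guest FS
dd if=/dev/zero of="$TARGET/zerofill" bs=1M count=16 2>/dev/null
sync                                                   # flush the zeroes to the backing store
rm -f "$TARGET/zerofill"                               # the zeroed blocks are now "free" again
echo "zero-filled and released 16 MiB under $TARGET"
```

In a real guest you would let the filler file grow until the disk is nearly full before deleting it, so that every free block gets zeroed.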

Rerun dedupe, and chances are that your VM's occupied space will be even smaller than before you started the tests.

And NetApp will tell you that the volume is suddenly less full, all thanks to the dedupe.

Note that writing to all the unused blocks defeats VMware's thin provisioning and the VMDK will expand to full size, but dedupe will remove the wiped blocks.
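A toy model of why dedupe removes the wiped blocks: hash an image in fixed-size blocks and count unique fingerprints; every all-zero block hashes identically, so they all collapse to one stored block. The 4 KiB block size and md5 fingerprint here are illustrative choices, not ONTAP internals:

```shell
# Build a 16-block "disk image", wipe half of it with zeroes, then count
# how many distinct 4 KiB blocks a dedupe engine would have to store.
IMG=$(mktemp)
head -c $((16 * 4096)) /dev/urandom > "$IMG"                  # 16 distinct data blocks
dd if=/dev/zero of="$IMG" bs=4096 seek=8 count=8 conv=notrunc 2>/dev/null  # "wipe" 8 of them
UNIQUE=$(split -b 4096 --filter='md5sum' "$IMG" | sort -u | wc -l)
echo "unique blocks after wipe: $UNIQUE"                      # 8 random blocks + 1 zero block
rm -f "$IMG"
```

The 8 wiped blocks dedupe down to a single zero block, so the stored footprint drops from 16 blocks to 9.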

As a hint, VMware Tools has an option called "Prepare to shrink", but it is unfortunately only enabled if your virtual machine is thick provisioned to start with.

But you could Storage vMotion your VM to let it expand, run the VMware Tools "Prepare to shrink" operation, and then Storage vMotion it back with thin provisioning enabled.

The other layer is VMware. So far it doesn't pass information about deleted files or blocks through from the guest OS.

So it is only VMware's thin provisioning that helps. I'm not sure exactly about VMware's criteria for which blocks are unused, but so far it has never wrongly removed blocks containing valid data on me.

If it is a simple "all zeroes" algorithm, it could use NFS "sparse file" capabilities, or something similar in VMFS nowadays.
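For what "sparse file" means here, a quick demonstration: a file can report a large logical size while the filesystem allocates (almost) no blocks for it, which is exactly how a thin VMDK over NFS can be 40 GB "big" but only a few GB on disk:

```shell
# Create a sparse file: declare 100 MiB of logical size without writing
# any data, then compare what ls reports vs. what is actually allocated.
F=$(mktemp)
truncate -s 100M "$F"                                     # logical size only, no data written
LOGICAL=$(stat -c %s "$F")                                # bytes as reported by ls
ALLOCATED=$(( $(stat -c %b "$F") * $(stat -c %B "$F") ))  # bytes actually on disk
echo "logical=$LOGICAL allocated=$ALLOCATED"
rm -f "$F"
```

Writing real data into the file allocates blocks as needed; only the ranges you touch ever consume space.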

It would be nice if VMware would soon enable the use of the "unmap" capability of modern OSes (think TRIM with SSDs), where deleting files would unmap the previously occupied blocks within the VM and hint to vSphere that those blocks are available for other use.

No need to zero anything out (until the next time a block is used, but thin provisioning is forced to do that anyway); it would just work. And the storage would have less to dedupe on the next run.

vSphere 5 has the unmap capability with VMFS on supported systems (ONTAP 8.0.1+, I think), but it is applied only when whole files are deleted from the storage, not parts or a few blocks at a time.

That lets thin-provisioned LUNs expand as data is added and shrink when files are deleted from the VMFS LUNs(!), in a way mimicking NFS's capability for dynamic provisioning.

The third layer is the storage and its thin provisioning capability, with dedupe and possibly compression. But I suppose you are familiar enough with that part.

Re: Space Reclamation On NFS Export For VMware Datastore


When you delete data within the guest OS you get so-called 'white space' - neither VMware nor NetApp is aware that anything disappeared from the guest file system.

If it is Windows, you can get rid of it by running a space reclamation (or 'hole punching') job in SnapDrive.
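For anyone curious what 'hole punching' means at the filesystem level, here is a small Linux demonstration with fallocate: a range inside a file is deallocated without changing the file's logical size. SnapDrive's job does the storage-side equivalent for blocks the guest filesystem has freed:

```shell
# Write 128 KiB of real data, then punch a 64 KiB hole at the start.
# The logical size stays the same; the allocated space can only shrink.
F=$(mktemp)
dd if=/dev/urandom of="$F" bs=4096 count=32 2>/dev/null     # 128 KiB of real data
BEFORE=$(( $(stat -c %b "$F") * $(stat -c %B "$F") ))       # bytes allocated before
fallocate --punch-hole --offset 0 --length 65536 "$F" 2>/dev/null || true  # no-op if the FS lacks support
AFTER=$(( $(stat -c %b "$F") * $(stat -c %B "$F") ))        # bytes allocated after
SIZE=$(stat -c %s "$F")                                     # logical size is unchanged
echo "size=$SIZE allocated_before=$BEFORE allocated_after=$AFTER"
rm -f "$F"
```

Reads from the punched range return zeroes, just as a thin-provisioned LUN returns zeroes for unmapped blocks.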


Re: Space Reclamation On NFS Export For VMware Datastore

The last I saw, SnapDrive with NFS and SMVI can space-reclaim VMDKs over NFS. I need to check again, but I saw it in an SMSQL presentation where SMSQL with SnapDrive was using VMDKs over NFS.

Re: Space Reclamation On NFS Export For VMware Datastore

VSC 2.1.1 has the new "Reclaim Space" feature. This new feature, which works for Windows VMs on NFS datastores with ONTAP 7.3.4 or later, reclaims space from deleted files inside the virtual machines.

Re: Space Reclamation On NFS Export For VMware Datastore

Sounds like it. And it needs SnapDrive too; they work together from what I saw, but I haven't tested it yet.

Re: Space Reclamation On NFS Export For VMware Datastore


Yes, but it seems to require that you restart the VMs, unlike the Storage vMotion solution (if you have the right vSphere license).

Re: Space Reclamation On NFS Export For VMware Datastore

Thank you dejan-liuit, it looks like VSC 2.1.1 does exactly what I was looking for.  Sucks that the VM has to be shut down, but at least there is an option to get the space back.

Re: Space Reclamation On NFS Export For VMware Datastore

You still have option B, i.e. SnapDrive space reclamation run in the guest OS - it doesn't require a shutdown.