
Incorrect used-space values shown for NFS datastores in vSphere 5.1

Hi all

We are using NetApp NFS-exported volumes as datastores for our VMware environment running on vSphere 5.1. Recently we received some new disks and I decided to reorganize the datastores a bit. As part of that, I migrated all VMs stored on datastore1 (on a volume on the old aggregate) to datastore2 (on a volume on the new aggregate). We also run deduplication on the volumes. I am now facing two different issues:

Main issue:

We had a situation where datastore1 (the volume is 15 TB) filled up to the point that less than 2 TB was free. This raised an incident in our monitoring, since we alert at a 2 TB threshold. I have since removed the data, so the datastore should be nearly empty: at the NetApp level I see about 7 TB free (see the second issue below). Most of the time vCenter shows this figure too, but from time to time it still reports the old value of under 2 TB, which raises the incident again. This is especially bad when it happens at night, because it results in a phone call telling me we are running out of space... Obviously I would like to find out why this is happening and make sure vCenter knows the correct value at all times. Has anyone seen this before, where the used-space information for NFS datastores in vCenter intermittently flips back from the current value to an old one?
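Until the root cause is found, one hypothetical workaround on the monitoring side is to debounce the alert: only raise an incident after several consecutive below-threshold readings, so a single stale value from vCenter does not page anyone. A minimal sketch (the threshold and reading values are just the figures from this post; the checker itself is an assumption, not part of any existing tooling):

```python
from collections import deque

THRESHOLD_TB = 2.0   # alert when free space drops below this (our 2 TB threshold)
CONSECUTIVE = 3      # require this many consecutive low readings before alerting

def make_checker(threshold_tb=THRESHOLD_TB, consecutive=CONSECUTIVE):
    """Return a check() function that only signals an incident after
    `consecutive` below-threshold readings in a row."""
    recent = deque(maxlen=consecutive)

    def check(free_tb):
        recent.append(free_tb < threshold_tb)
        # Alert only once the window is full and every reading was low.
        return len(recent) == consecutive and all(recent)

    return check

check = make_checker()
# Simulated readings in TB: one stale "old" value (1.8) among correct ones.
readings = [7.1, 7.0, 1.8, 7.1, 7.0]
alerts = [check(r) for r in readings]   # no alert is raised for the lone stale reading
```

A genuinely full datastore would still alert after three consecutive low readings, so real incidents are only delayed by two polling intervals, not suppressed.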

Second issue:

After removing most of the data from datastore1 (some ISOs and some vSphere HA files etc. are still stored there, but nothing major), I still see space being occupied at the NetApp level: snapshots account for about 4 TB, but the volume shows 8 TB occupied... Any idea what might be consuming the other 4 TB? Could it have anything to do with dedup, or with the fact that we used non-guaranteed volume sizes in the past?
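To make the gap explicit, here is a quick sanity check of the figures above (all values are approximations taken from this post, in TB; the variable names are mine):

```python
# Space accounting for datastore1's volume, using the rough figures
# from the description above (all in TB).
volume_size = 15.0
free_reported = 7.0                            # free space seen at the NetApp level
used_reported = volume_size - free_reported    # 8 TB shown as occupied
snapshots = 4.0                                # space held by snapshots
unexplained = used_reported - snapshots        # the ~4 TB with no obvious owner
```

So roughly 4 TB of reported usage is accounted for by neither the remaining files nor the snapshots, which is the amount in question.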

Thanks in advance for any suggestions

David