2015-11-17 01:17 AM
I've got a strange situation which I've never seen before.
We are running a VM with a thin-provisioned VMDK of 400GB, placed on an NFS export from a NetApp 7-Mode system.
We have 486GB free capacity on the volume that is shared through NFS, and the volume has "Storage efficiency" enabled.
The VMware admin tries to expand the VMDK to 500GB. He just navigates to the VM properties and sets the disk size to 500 (instead of 400).
As I understand it, this should set the VMDK's maximum utilization to 500GB.
However, when he clicks "OK" he receives an error along the lines of "No space left on DS". But we have 484GB free, as reported by both NetApp and vCenter.
From the NetApp side I can see that vCenter issues a request for 501GB instead of 100GB.
I have no idea why it works this way. Why does ESX request the full 500GB rather than just the 100GB delta?
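Not NetApp-specific, but the thin-provisioning behaviour in question can be mimicked with an ordinary sparse file on any Linux box: the apparent size jumps to the full provisioned size immediately, while the blocks actually allocated stay near zero until data is written. A minimal sketch (file name is made up, purely illustrative):

```shell
# Thin provisioning analogy with a sparse file (illustrative only):
# apparent size = provisioned size, allocated blocks stay near zero.
truncate -s 1G thin.img                 # "provision" 1 GB without writing data
stat -c '%s bytes apparent' thin.img    # reports the full 1073741824 bytes
du -k thin.img                          # blocks actually allocated: tiny
rm -f thin.img
```

The question in the thread is effectively why the expansion request is sized against the apparent (provisioned) size rather than the allocated delta.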
2015-11-17 02:54 AM
I've added 400GB to the FlexVol on NetApp and we performed the VMDK expansion in vCenter. Now we have 900GB of consumption at the storage level. Also, the disk in vCenter was suddenly converted from Thin to Thick.
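If the expansion reserved (zero-filled) the whole disk, that would explain both the thin-to-thick conversion and the jump in consumption. As a rough analogy only (not the actual ESX mechanism), filling a sparse file inflates its allocation up to its full apparent size:

```shell
# Analogy only: "inflating" a sparse file, the way a thin disk becomes thick.
truncate -s 100M disk.img                                 # sparse: ~0 blocks used
dd if=/dev/zero of=disk.img bs=1M count=100 conv=notrunc 2>/dev/null
du -m disk.img                                            # now ~100 MB allocated
rm -f disk.img
```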
2015-11-17 04:06 AM
I've found some messages in the ESX logs about VAAI space reservation. I've also browsed the datastore from another system, and the vmdk flat file is exactly as I defined it -- 500GB.
It seems that vCenter performed some space reservation and didn't release it.
Is it possible that NetApp keeps that space reserved after vmdk expansion?
2015-11-17 04:52 AM
Found some more hints.
It seems that VMware performed a space reservation, and to release that space I need either to perform a storage migration or to do a VAAI unmap through the CLI with the command esxcli storage vmfs unmap -l <datastore-label>, but other sources advise using vmkfstools -y instead.
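What the unmap step does at the array can be mimicked locally by punching a hole in a file: the allocated blocks are given back to the filesystem while the file's apparent size stays unchanged. A rough local sketch (assumes a filesystem with hole-punch support, e.g. ext4/xfs; not the actual ESXi command):

```shell
# Rough local analogy for UNMAP / space reclamation: punch a hole in a file.
# Apparent size stays the same; the allocated blocks are returned.
dd if=/dev/zero of=blk.img bs=1M count=16 2>/dev/null
du -k blk.img                                         # ~16384 KB allocated
fallocate --punch-hole --offset 0 --length $((16*1024*1024)) blk.img
du -k blk.img                                         # allocation drops, size unchanged
rm -f blk.img
```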