This has probably been answered several times, but I can't find anything for thick LUNs/volumes. Maybe I have a misunderstanding of what a thick NetApp LUN is. Our vSphere environment is provisioned with thick volumes, each containing a single LUN (1:1) with space reservation enabled. Hence I think these are thick LUNs; please correct me if I am wrong. Running ONTAP 9.1 with ESXi 6.0 and VMFS 5.
I was going to manually run the unmap command from the ESXi host after noticing that space was not being given back to the LUN/volume backing the datastore (expected on VMFS-5), but then saw that UNMAP is reported as not supported on the device:
esxcli storage core device list -d naa.600a098038303049332b454a516b504e
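For reference, here is a sketch of the checks and the manual reclaim I was planning to run (the device ID is from my environment; the datastore name is a placeholder):

```shell
# Show VAAI primitive support for the device; "Delete Status" indicates UNMAP
esxcli storage core device vaai status get -d naa.600a098038303049332b454a516b504e

# On VMFS-5 (ESXi 5.5+), space reclaim must be run manually per datastore
esxcli storage vmfs unmap -l <datastore_name>
```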
I did notice that space allocation is not set on the LUN, but based on a NetApp web doc I think that option is only meant for thin LUNs, and I believe mine are thick.
All I am trying to do is give space back to the storage when I Storage vMotion or delete a VM, and judging by the size of some of the volumes, that has never happened. So: will enabling space allocation on the LUN work even though the documentation says it is only for thinly provisioned LUNs? Even if it does, I don't see much point, because I'd have to take the LUN offline to change it, but it would be good to know for future LUNs.
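For what it's worth, the change I'd have to make looks something like the following ONTAP CLI sketch (SVM name `svm1` and the volume/LUN path are placeholders, not my real environment):

```shell
# space-allocation can only be changed while the LUN is offline
lun offline -vserver svm1 -path /vol/ds_vol/ds_lun
lun modify  -vserver svm1 -path /vol/ds_vol/ds_lun -space-allocation enabled
lun online  -vserver svm1 -path /vol/ds_vol/ds_lun
```

The ESXi host would then need a storage rescan to detect the changed UNMAP support on the device.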
Running it on thick LUNs (space reservation enabled) will simply not save you any space, because a thick LUN pre-allocates its space in the volume and the aggregate (best practice says a space-reserved LUN has to sit on a space-guaranteed volume - see the 3rd paragraph here).
It could potentially have saved you space through dedupe if VAAI commands were allowed against a thick-provisioned LUN, but that would require actually zeroing out the data for every VAAI request instead of just unmapping the blocks, as happens with thin provisioning.
If you really want to save space, go ahead and change the LUN to thin provisioned:
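A sketch of that conversion from the ONTAP CLI (again, `svm1` and the paths are placeholders for your own SVM, volume, and LUN):

```shell
# Drop the LUN-level space reservation; this can be done with the LUN online
lun modify -vserver svm1 -path /vol/ds_vol/ds_lun -space-reserve disabled

# Optionally drop the volume-level guarantee as well, so the volume itself
# stops reserving its full size in the aggregate
volume modify -vserver svm1 -volume ds_vol -space-guarantee none
```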
Note that there is no performance penalty when using thin provisioning with ONTAP systems; data is written to available space so that write performance and read performance are maximized. Despite this fact, some products such as Microsoft failover clustering or other low-latency applications might require guaranteed or fixed provisioning, and it is wise to follow these requirements to avoid support problems.