Thin provisioning sure sounds like a good idea, but over time the storage system loses track of what is actually unused space. Without that knowledge the storage system cannot reclaim the unused space, and this renders thin provisioning practically useless.
With a thin-provisioned volume, what you really end up with is an ever-rising high-water mark on the used space in your volumes. The problem is that the operating systems using the storage system (ESX in my world) don't tell the storage system what they delete or mark as free space. Without this communication between the hosts and the storage system, thin provisioning is just a good idea implemented poorly.
There are 'hacks' out there that write zeros across the LUNs. By zeroing large sections of a LUN, the storage system can tell that the space is unused and reclaim it. But if you are going to implement a feature of your storage system, you shouldn't have to rely on a hack to make it really work. That seems more than a little deceptive to me. I am curious how others are dealing with the thin-provisioning fallacy.
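For readers who haven't seen it, the zero-writing hack usually boils down to something like the following, run inside the guest: fill free space with a file of zeros, flush it to disk, then delete the file. This is a minimal sketch for a Linux guest; the `count` is capped here purely for illustration, whereas the real hack omits it and lets `dd` run until the filesystem is full (Windows guests typically use a zeroing tool such as SDelete instead):

```shell
# Write a zero-filled file so the array's zero-detection can mark the
# underlying blocks as unused. Capped at 10 MiB here for safety; the
# actual hack fills all free space.
dd if=/dev/zero of=/tmp/zerofill bs=1M count=10 2>/dev/null
sync                # flush the zeros down to the backing LUN
rm /tmp/zerofill    # free the blocks again at the filesystem level
```

Note this only helps on arrays that detect and deduplicate or deallocate zeroed blocks; on anything else it just inflates the thin volume faster.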
Good points, and a challenge for many vendors. With NTFS and SnapDrive there is space reclamation to deal with this from the host side. With virtualized hosts we still see 50% or more dedup, which helps, and thin provisioning gives that space back to the volume.
Unfortunately, NFS does not really help as long as VMDKs are not deleted; neither does UNMAP support in ESX 5. Here we need explicit support from the hypervisor first. I am not sure whether ESX offers any right now; I hope it is on their roadmap.