ONTAP Discussions

CDOT 8.2.3 LUN used size reporting


I have a fair number of iSCSI LUNs, most of which are presented to Windows 2012 VMs. I believe that the "used" size doesn't report correctly from Windows back to the NetApp - an example 5000GB LUN might report 5100GB "used" if the LUN was shrunk sometime in the past, for instance. OCUM won't be much help here either, I don't think, but I could be wrong.


I need to assess the space used by these LUNs, but I don't have the luxury of dialing in to each server individually and checking the "used" size of the drive letters. The number of servers I'd need to touch (assuming I have access to them, which I do not) is too large, especially given that I will need to crank out a report every week.


I imagine someone else has run into a situation like this. What are some strategies you all have come up with to get a read on LUN utilization that doesn't involve logging in to each server and eyeballing Windows Explorer or SnapDrive?





The only way to get this is with PowerShell - script the report and schedule it to email the results to you.
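A minimal sketch of what that script might look like, assuming the NetApp PowerShell Toolkit (DataONTAP module) is installed and your account can read LUN information on the cluster - the hostnames, addresses, and property rounding below are placeholders, not a tested production script:

```powershell
# Sketch only - assumes the NetApp PowerShell Toolkit (DataONTAP module)
# and read access to the cluster. All names/addresses are placeholders.
Import-Module DataONTAP
Connect-NcController cluster1.example.com   # prompts for credentials

# Per-LUN total size vs. used size as reported by the controller
$report = Get-NcLun | Select-Object Vserver, Path,
    @{n='SizeGB'; e={[math]::Round($_.Size     / 1GB, 1)}},
    @{n='UsedGB'; e={[math]::Round($_.SizeUsed / 1GB, 1)}}

$body = $report | Sort-Object UsedGB -Descending |
    Format-Table -AutoSize | Out-String

# Mail the report; run this weekly via Task Scheduler
Send-MailMessage -From 'storage@example.com' -To 'you@example.com' `
    -Subject 'Weekly LUN usage report' -Body $body -SmtpServer 'smtp.example.com'
```

Note this still reports the controller's view of used space, which is the very number in question - see the space-allocation discussion below in the thread for how to make that view track the OS more closely.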

thank you,



Do you have an example of a script (or even just a link) you'd use for this purpose?


How does Powershell get information that's different from commands run directly against the storage cluster?



There is no way to get OS-level space utilization from the controller. If you write a 1TB file on the host and then delete it, the space remains allocated on storage even though the host now accounts for it as free.

The only way to get a more reliable estimate from the storage side is to use thin provisioning and space reclamation.


Based on your original post "iSCSI LUNs presented to Windows 2012 VMs" - making one quick assumption that you are using RDMs and presenting the LUNs directly.  


If so, there is a way to keep both the storage (cDot 8.2+) and the OS (Windows 2012+) close to reporting the same used space.  cDot 8.2+ supports SCSI UNMAP functionality through the "space-allocation" property of the LUN.  If space-allocation is enabled, operating systems that also support SCSI UNMAP will detect it and use it when they delete data.  Windows 2012 will automatically use UNMAP to inform storage that blocks in a LUN can be released when they are no longer in use.


Space-allocation can be enabled at LUN creation - it is off by default.  If you modify an existing LUN to add space-allocation, you need to take the LUN offline and then back online to trigger the change, so it is a disruptive transition.  With Windows you should also restart the server while the LUN is offline - even bouncing the LUN and rescanning disks doesn't always get Windows to recognize that UNMAP functionality is now available.  Space-allocation applies whether the volume and LUN are thin or not - it drives space usage tracking within the LUN, so it works even for thick-provisioned, space-reserved LUNs.
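For reference, the cluster-side change looks roughly like this - the vserver and path names are placeholders, and per the above, plan the offline/online window together with the host restart:

```
cluster1::> lun modify  -vserver svm1 -path /vol/vol1/lun1 -space-allocation enabled
cluster1::> lun offline -vserver svm1 -path /vol/vol1/lun1
cluster1::> lun online  -vserver svm1 -path /vol/vol1/lun1
```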


Of course this also applies to LUNs presented to physical Windows 2012 machines.  Some Linux versions support SCSI UNMAP as well.  Similar support is available in both ESX 6.0 (virtual machine version 11) and Hyper-V for purely virtual disks on Windows 2012+ VMs.  First the Windows guest uses SCSI UNMAP with the hypervisor, then the hypervisor coordinates with storage to release space using its own UNMAP capability if the underlying storage is LUN based.  NFS-based VMDKs under ESX will just release the unused space, if thin, once ESX knows it isn't used.


If you add space-allocation support to a LUN, it's also fairly easy to get most of the "unused" space back without a long-running, expensive initial utility process (like SnapDrive space reclamation).  In the available "empty" space in the LUN, from the OS side, create a big file and delete it.  Wait a bit and repeat a few times - cDot and Windows will gradually converge on equivalent values.
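On the Windows host that might look like the following - a sketch only, with `E:` as a placeholder drive letter for the mounted LUN; the `Optimize-Volume -ReTrim` line is an alternative I'm adding (not from the post above) that Windows 2012+ offers to retrim all free space in one pass:

```powershell
# Create-and-delete approach: deletion of the clusters triggers UNMAP
fsutil file createnew E:\reclaim.tmp 53687091200   # ~50GB placeholder file
Remove-Item E:\reclaim.tmp

# Alternative on Windows 2012+: retrim all free space on the volume
Optimize-Volume -DriveLetter E -ReTrim -Verbose
```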



Hope this helps you.


Bob Greenwald

Lead Storage Engineer | Consilio LLC

NCIE SAN Clustered, Data Protection




Kudos and accepted solutions are always appreciated.


Thanks for this! It'll be a big lift to implement something like this across the board, but I'm willing to give it a shot. I'll have to report back here if it works out as expected. 


Thanks again!


I certainly understand the effort involved, as I had to do something similar.


The key application on the storage I maintain creates one database per "project," which could be anywhere from 100GB to 12TB in size - it's essentially unknown until the project gets started.  All of this is in SQL Server, so we use a bunch of big hosts to run multiple instances of SQL for availability.  Not knowing in advance which project will be active or grow at any given time, we spread these over a bunch of big LUNs to allow for that potential growth.  So I have about 75 x 8TB LUNs for data flying around, not to mention LUNs for logs and other miscellany.  Of course they certainly don't use all that space - average occupancy is about 60%, but each one is sized to allow for fast, massive growth in a project, which we then react to later.


So to save real space it's also all thin and over-provisioned pretty heavily, with monitoring.  If every project jumped at once there would be an issue, but that isn't the nature of the data.  I think it's 2500 or so current live databases.  Prior to 8.2+ and W2K12, managing this space was a nightmare because every LUN tended to use all of its logical blocks over time as Windows spread the data around, which artificially increased our real physical storage demand.  Once I was able to convert everything and reset space allocation on the LUNs, we dropped about 40% in total real storage required to maintain the same data.


Huge bonus when you can use it.  Affects everything downstream as well - if you run compression/dedupe post-process, less data to deal with.  Replication doesn't replicate blocks that have no value.



Hope this helps!


Bob Greenwald

Lead Storage Engineer | Consilio LLC

NCIE SAN Clustered, Data Protection




Kudos and accepted solutions are always appreciated.