VMware Solutions Discussions

VMware datastore free space doesn't match the LUN information

fedaynnetapp

Hi,

 

This is the background:

 

ESXi 5.5

NetApp Filer

Thin-provisioned LUN of 700 GB

Thin-provisioned datastore of 700 GB

 

The datastore reached almost 100% full, so I had to delete a few VMs from the datastore and run a few unmap commands on the ESXi host.
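
For reference, the unmap commands were along the lines of the following (the datastore label is a placeholder):

esxcli storage vmfs unmap -l <datastore_label>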

 

After the deletion, nothing seems to have changed in terms of free space, either from the vSphere Web Client or from an ESXi SSH session.

 

The output of the df -h command shows the datastore at 97% used, and the datastore view in vSphere reports the same number.

 

In System Manager or VSC, however, the LUN still shows 90% used.

 

So there hasn't been any change in free storage after the deletion.

 

Could someone tell me where the space freed by the VM deletions went?

 

Thank you.


JSHACHER11

 

 

TR-3749:

 

DEDUPLICATION CONSIDERATIONS WITH VMFS AND RDM LUNS


Enabling deduplication when provisioning LUNs produces storage savings. However, the default behavior
of a LUN is to reserve an amount of storage equal to the provisioned LUN. This design means that
although the storage array reduces the amount of capacity consumed, any gains made with deduplication
are for the most part unrecognizable, because the space reserved for LUNs is not reduced.
To recognize the storage savings of deduplication with LUNs, you must enable NetApp LUN thin
provisioning.


Note:  Although deduplication reduces the amount of consumed storage, the VMware administrative
team does not see this benefit directly, because its view of the storage is at a LUN layer, and
LUNs always represent their provisioned capacity, whether they are traditional or thin
provisioned. The NetApp Virtual Storage Console (VSC) provides the VI administrator with the
storage use at all layers in the storage stack.


When you enable dedupe on thin-provisioned LUNs, NetApp recommends deploying these LUNs in
FlexVol volumes that are also thin provisioned with a capacity that is 2x the size of the LUN.  When the
LUN is deployed in this manner, the FlexVol volume acts merely as a quota. The storage consumed by
the LUN is reported in FlexVol and its containing aggregate.
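
As a rough 7-Mode sketch of that layout (the aggregate, volume and LUN names/sizes below are just examples):

vol create vmfs_vol -s none aggr1 1400g                             (thin-provisioned FlexVol, ~2x the LUN size)
sis on /vol/vmfs_vol                                                (enable deduplication on the volume)
lun create -s 700g -t vmware -o noreserve /vol/vmfs_vol/vmfs_lun    (LUN with space reservation disabled)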

fedaynnetapp

Hi,

 

I forgot to say that dedupe is enabled, but LUN space reservation and volume fractional reserve are disabled. Snap reserve is set to 0%.

 

It's a single LUN in a single volume.

 

Thank you.

YIshikawa
Is there enough free space in the aggregate and the volume?
Run "df -h" and "df -Ah" on Data ONTAP and check space usage at the volume and aggregate levels.

fedaynnetapp

More than enough free space for both the volume and the aggregate.

 

                Total      Used      Avail     Capacity
Volume          705GB      218GB     486GB     31%
Aggregate       28TB       22TB      6186GB    79%

 

Thank you.

fedaynnetapp

Any update on this?

 

Thank you.

RULLMAN84

Do you have any snapshots within the DataStore?

 

Ryan

fedaynnetapp

Hi,

 

There are a few VM snapshots within that store.

 

But the Datastore free space remains the same even after deleting VMs.

 

There are no snapshots at the storage array level.

 

Thank you.

YIshikawa

> In System Manager or VSC, however, the LUN still shows 90% used.

 

I could only find a "% Used" column in System Manager. Do you mean "% Used" is still 90% even after you deleted files from the datastore?
Used space at the LUN level does not reflect usage at the filesystem level. Once a block is actually allocated from the aggregate, it is counted as "used" and is never freed, even after removing files from VMFS. Of course, these "used" blocks are reused by VMFS.

Usage statistics at the LUN level are of little use to users and admins, so ignore the difference between "df" on ESXi and the LUN used space on ONTAP.

fedaynnetapp

> I could only find a "% Used" column in System Manager. Do you mean "% Used" is still 90% even after you deleted files from the datastore?

Yes, that's what I meant.

> Used space at the LUN level does not reflect usage at the filesystem level. Once a block is actually allocated from the aggregate, it is counted as "used" and is never freed, even after removing files from VMFS. Of course, these "used" blocks are reused by VMFS.
> Usage statistics at the LUN level are of little use to users and admins, so ignore the difference between "df" on ESXi and the LUN used space on ONTAP.

I deleted the VMs and afterward ran an unmap command against the datastore to give the freed blocks back to the LUN. I thought the storage array takes advantage of VAAI and VASA to solve these kinds of issues.

 

Thank you.

 

DCAGary

I found a similar issue. Are you sure that the unmap command has actually run? Is Delete Status supported? To check, run "esxcli storage core device vaai status get". If Delete Status is unsupported, run "lun set space_alloc <lun path> enable". I removed a 6TB vmdk and 6 weeks later I'm still having to reissue the unmap command from the VM host. Apparently there is a bug in 7-Mode, which hasn't been made publicly known yet, that causes the unmap command to time out (it took me 6 weeks of speaking to support to find this out).

You can see if it's making any progress by running esxtop: select u for the disk device view, then f, o and p. Take a look at the MBDEL/s column; if there's anything in there, unmap is still running (very slowly in my case). If not, issue another unmap and see if anything appears.
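
Putting those commands together, the sequence looks roughly like this (device ID, LUN path and datastore label are placeholders):

esxcli storage core device vaai status get -d naa.<device_id>    (on the ESXi host; check Delete Status)
lun set space_alloc /vol/<volume>/<lun> enable                   (on the 7-Mode controller, if Delete Status is unsupported)
esxcli storage vmfs unmap -l <datastore_label>                   (reissue the unmap from the host)
esxtop                                                           (u, then f, o and p; watch the MBDEL/s column)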

RANJBASSI

I had the same issue, and we use FC LUNs presented the same way you do. I did have to set some specific flags.

 

In order for these features of VAAI Thin Provisioning to work as expected, the LUN must have space allocation enabled. This is NOT the default in any version of Data ONTAP. For Data ONTAP 7-Mode, run the lun set space_alloc <lun path> enable command.
 
First, we need to find the LUN path for each volume; do the following (ensure this is run on each controller):
 
e.g. the lun show command
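
The output should look something like this (path and size are examples):

        /vol/vmfs_vol/vmfs_lun    700g (751619276800)    (r/w, online, mapped)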
 
To set it would be:
 
lun set space_alloc /vol/flexvolname/lun_name enable
 
To check whether it is enabled, type lun set space_alloc /vol/volume/lun_name
 
You also need to check on the ESXi host side whether it is set up correctly.
 
Log on to each host's CLI using PuTTY and type the following:

If Int Value is set to 0, this needs to be enabled:

esxcli system settings advanced list --option /VMFS3/EnableBlockDelete
   Path: /VMFS3/EnableBlockDelete
   Type: integer
   Int Value: 0  <<<<<<<<<< 0 means Disabled
   Default Int Value: 1
   Min Value: 0
   Max Value: 1
   String Value:
   Default String Value:
   Valid Characters:
   Description: Enable VMFS block delete
 
Type esxcli system settings advanced set --int-value 1 --option /VMFS3/EnableBlockDelete to enable.
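
You can then re-run the list command from above to confirm the change took effect; Int Value should now read 1:

esxcli system settings advanced list --option /VMFS3/EnableBlockDelete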
 
Then type 'esxcli storage vmfs extent list' to list each datastore (across both hosts) along with the associated UUID and naa number.
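
The output has one row per extent, along these lines (names and UUID shown as placeholders):

Volume Name     VMFS UUID                            Extent Number  Device Name      Partition
<datastore>     xxxxxxxx-xxxxxxxx-xxxx-xxxxxxxxxxxx              0  naa.<device_id>          1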
 
Pick the naa ID of any volume that needs the task run on it.

Using NetApp_Prf2_ESX_W2k3_Sas_01 with naa.60a9800038303045525d4559446d2d36 as an example

type esxcli storage core device vaai status get -d naa.60a9800038303045525d4559446d2d36

 

VAAI Plugin Name: VMW_VAAIP_NETAPP
   ATS Status: supported
   Clone Status: supported
   Zero Status: supported
   Delete Status: supported

You should see Delete Status say 'supported'. If it says 'unsupported', it means one of the previous steps hasn't been performed properly.

You can run another command to check a more detailed status:

esxcli storage core device list -d naa.60a9800038303045525d4559446d2d36

   Display Name: NETAPP Fibre Channel Disk (naa.60a9800038303045525d4559446d2d36)
   Has Settable Display Name: true
   Size: 2621563
   Device Type: Direct-Access
   Multipath Plugin: NMP
   Devfs Path: /vmfs/devices/disks/naa.60a9800038303045525d4559446d2d36
   Vendor: NETAPP
   Model: LUN
   Revision: 820a
   SCSI Level: 5
   Is Pseudo: false
   Status: on
   Is RDM Capable: true
   Is Local: false
   Is Removable: false
   Is SSD: false
   Is VVOL PE: false
   Is Offline: false
   Is Perennially Reserved: false
   Queue Full Sample Size: 0
   Queue Full Threshold: 0
   Thin Provisioning Status: yes
   Attached Filters: VAAI_FILTER
   VAAI Status: supported
   Other UIDs: vml.020037000060a9800038303045525d4559446d2d364c554e202020
   Is Shared Clusterwide: true
   Is Local SAS Device: false
   Is SAS: false
   Is USB: false
   Is Boot USB Device: false
   Is Boot Device: false
   Device Max Queue Depth: 64
   No of outstanding IOs with competing worlds: 32
   Drive Type: unknown
   RAID Level: unknown
   Number of Physical Drives: unknown
   Protection Enabled: false
   PI Activated: false
   PI Type: 0
   PI Protection Mask: NO PROTECTION
   Supported Guard Types: NO GUARD SUPPORT
   DIX Enabled: false
   DIX Guard Type: NO GUARD SUPPORT
   Emulated DIX/DIF Enabled: false

 

More info: https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2057513

To check the status of this, log in to OnCommand Manager and check the respective LUN for the volume where the task is being run; you should begin to notice that the free space increases.

 

Log in to NetApp OnCommand Manager, choose the controller, go to Volumes, pick the volume that has had the previous task run on it, and click Storage Efficiency.

Ensure 'scan entire volume' is ticked, as this will reinitialise the deduplication for that volume. This process will take some time, depending on the size of the volume.
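
If you prefer the command line, the 7-Mode equivalent should be something like this (volume name is an example):

sis start -s /vol/vmfs_vol    (rescan the existing data on the volume for deduplication)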

 

After this has run, space savings should be accurate at both the LUN and volume layers within the NetApp console.

 
 

 

https://community.netapp.com/t5/Data-ONTAP-Discussions/VAAI-Unmap-delete-status-unsupported-NetApp-FAS-Array-DataOntap-8-2-1/m-p/107028

 
