ONTAP Discussions

VMware datastore free space doesn't match LUN free space

IlirS
29,225 Views

Hello,

 

I recently noticed that two LUNs on NetApp storage are approximately 99% and 98% used. These LUNs are presented as VMware datastores to some ESXi hosts.

 

From the VMware side, the first datastore has 72% free space and the second has 50% free space.

The difference is very large; it seems that the NetApp doesn't know that VMware has freed up space in the datastores.

On both related volumes, storage efficiency is disabled.

 

Is there any way to reclaim the free space on these NetApp LUNs?

Can this operation be executed online, without any impact on the live virtual machines residing on these LUNs?

 

Model: FAS2552

Version: NetApp Release 8.2.2RC1 7-Mode

 

Thank you and best regards,

Ilir

1 ACCEPTED SOLUTION: see mrahul's reply below.

8 REPLIES

vpenev
29,176 Views

Hi Ilir, 

As you have already explained, it seems that from NetApp's point of view that space is not free. NetApp only knows when a block has been written; it doesn't know that a block was deleted by VMware. So what VMware sees is the actual available space within the LUN. I would recommend taking a look at the UNMAP primitive from ESXi to reclaim unused space in the LUN, as described in http://www.netapp.com/us/media/TR-4333.pdf, section 3.2 "Space Reclamation".

You can also review this VMWare article on reclaiming deleted blocks on thin-provisioned LUNs - https://kb.vmware.com/s/article/2014849

Note that UNMAP should be a non-disruptive process, but there may still be a performance hit.
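The mismatch can be sketched with a small toy model (purely illustrative, not how ONTAP is actually implemented): the array only records writes, so a filesystem delete never shrinks its used-block set until the host sends an UNMAP.

```python
# Toy model of the mismatch (illustration only, not NetApp's implementation):
# the array sees writes, but a VMFS delete is metadata-only, so the array's
# "used" count never drops until the host sends a SCSI UNMAP.

class ThinLun:
    def __init__(self, total_blocks):
        self.total_blocks = total_blocks
        self.array_used = set()          # blocks the array believes are allocated

class Datastore:
    def __init__(self, lun):
        self.lun = lun
        self.fs_used = set()             # blocks VMFS believes are in use

    def write(self, blocks):
        self.fs_used |= blocks
        self.lun.array_used |= blocks    # writes always reach the array

    def delete(self, blocks):
        self.fs_used -= blocks           # metadata-only: the array is not told

    def unmap(self):
        self.lun.array_used &= self.fs_used   # UNMAP: release blocks VMFS freed

lun = ThinLun(total_blocks=100)
ds = Datastore(lun)
ds.write(set(range(99)))                 # fill the datastore to 99%
ds.delete(set(range(28, 99)))            # VMware frees most of it again

print(len(lun.array_used))               # 99 -> LUN still reports 99% used
print(len(ds.fs_used))                   # 28 -> datastore reports 72% free
ds.unmap()
print(len(lun.array_used))               # 28 -> reclaimed after UNMAP
```

The numbers mirror the original post: 99% used on the array versus 72% free on the datastore, converging only after the UNMAP step.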

Hope I was able to help. 

 

Cheers,

IlirS
29,157 Views

Hello,

 

I tried the procedure recommended by VMware, "Using the esxcli storage vmfs unmap command to reclaim VMFS deleted blocks on thin-provisioned LUNs", but when I execute the command below

 

"esxcli storage vmfs unmap -l Netapp_V2A_904GB"

 

I get the following error message:

"Devices backing volume 59f6fe49-c719c920-cf64-e41f13cbff7c do not support UNMAP"

 

The datastore is connected to the ESXi host through the FC protocol.

The datastore type is VMFS5.

 

BR,

Ilir

mrahul
29,141 Views

 

Hi ,

 

An explanation for your query can be found in the official NetApp cDOT documentation.

 

Please refer to the link https://library.netapp.com/ecmdocs/ECMP1196784/html/GUID-93D78975-6911-4EF5-BA4E-80E64B922D09.html

 

Starting with Data ONTAP 8.2, you can use the space-allocation option to reclaim space and notify the host when a thinly provisioned LUN cannot accept writes.

 

Even if your NetApp LUNs are configured properly for space reclamation, make sure you are running a supported configuration before initiating a SCSI UNMAP operation from an ESXi host. Please refer to the VMware Compatibility Guide for this.

 

It is always better to enable the space-allocation option when you provision the LUN.
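On a 7-Mode system such as the FAS2552 in the original post, this is one command per LUN; the path below is a made-up example, so substitute your own volume and LUN names:

```
lun set space_alloc /vol/myvol/mylun enable
lun set space_alloc /vol/myvol/mylun
```

The second form, run without a value, simply displays the current setting.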

IlirS
29,135 Views

Hello,

 

Thank you for your support and help.

 

The datastores related to these LUNs are presented as FC datastores, not iSCSI datastores.

Is there another procedure for the FC protocol?

 

I'm not able to open the link you suggested.

 

BR,

Ilir

mrahul
29,132 Views

As far as I know, FC shouldn't be a concern.

Clicking the link from the community page doesn't seem to redirect as expected.

Can you copy the link and paste it into your browser? That should work.

IlirS
29,090 Views

Hello mrahul,

 

Thank you for your help again.

 

I managed to open the link as you suggested.

I have already freed everything from the datastore, and it now shows 100% free space. The corresponding LUN still shows 99% used.

If I enable space-allocation on this LUN now, will it free up the space on the LUN?

 

BR,

Ilir

Trubida
29,057 Views

Since the datastore is empty, VMware won't issue the UNMAP commands again. You can enable space-allocation on the LUN, but space won't be freed on the NetApp until you run space reclamation again or put data in the datastore and delete it.
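In practice, re-running the reclamation means re-issuing the manual reclaim from the host once space-allocation is enabled on the LUN (datastore name taken from the earlier post):

```
esxcli storage vmfs unmap -l Netapp_V2A_904GB
```

On ESXi 5.5 and later this command can be re-run at any time against a VMFS5 datastore; older 5.0/5.1 builds used vmkfstools -y instead. Check the release notes for your build before relying on either.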

RANJBASSI
28,954 Views

I had the same issue, and we use FC LUNs presented the same way you do. I did have to set a few specific flags.

In order for these VAAI Thin Provisioning features to work as expected, the LUN must have space allocation enabled. This is NOT the default in any version of Data ONTAP. For Data ONTAP 7-Mode, run the lun set space_alloc <lun path> enable command.

First, find the LUN path for each volume (ensure this is run on each controller), e.g. with the lun show command.

To set it:

lun set space_alloc /vol/flexvolname/lun_name enable

To check whether it is enabled, type lun set space_alloc /vol/flexvolname/lun_name

You also need to check whether the ESXi host side is set up correctly. Log on to each host's CLI using PuTTY and type the following; if the Int Value is set to 0, it needs to be enabled:

esxcli system settings advanced list --option /VMFS3/EnableBlockDelete
   Path: /VMFS3/EnableBlockDelete
   Type: integer
   Int Value: 0  <<<<<<<<<< 0 means Disabled
   Default Int Value: 1
   Min Value: 0
   Max Value: 1
   String Value:
   Default String Value:
   Valid Characters:
   Description: Enable VMFS block delete
 
Type esxcli system settings advanced set --int-value 1 --option /VMFS3/EnableBlockDelete to enable it.
 
Then type 'esxcli storage vmfs extent list' to list each datastore (across both hosts) with its associated UUID and naa number.
 
Pick any naa for a volume on which the task needs to be run.

Using NetApp_Prf2_ESX_W2k3_Sas_01 with naa.60a9800038303045525d4559446d2d36 as an example, type:

esxcli storage core device vaai status get -d naa.60a9800038303045525d4559446d2d36

 

VAAI Plugin Name: VMW_VAAIP_NETAPP
   ATS Status: supported
   Clone Status: supported
   Zero Status: supported
   Delete Status: supported

You should see the Delete Status say 'supported'. If it says 'unsupported', one of the previous steps hasn't been performed properly.

You can run another command to check a more detailed status:

esxcli storage core device list -d naa.60a9800038303045525d4559446d2d36

   Display Name: NETAPP Fibre Channel Disk (naa.60a9800038303045525d4559446d2d36)
   Has Settable Display Name: true
   Size: 2621563
   Device Type: Direct-Access
   Multipath Plugin: NMP
   Devfs Path: /vmfs/devices/disks/naa.60a9800038303045525d4559446d2d36
   Vendor: NETAPP
   Model: LUN
   Revision: 820a
   SCSI Level: 5
   Is Pseudo: false
   Status: on
   Is RDM Capable: true
   Is Local: false
   Is Removable: false
   Is SSD: false
   Is VVOL PE: false
   Is Offline: false
   Is Perennially Reserved: false
   Queue Full Sample Size: 0
   Queue Full Threshold: 0
   Thin Provisioning Status: yes
   Attached Filters: VAAI_FILTER
   VAAI Status: supported
   Other UIDs: vml.020037000060a9800038303045525d4559446d2d364c554e202020
   Is Shared Clusterwide: true
   Is Local SAS Device: false
   Is SAS: false
   Is USB: false
   Is Boot USB Device: false
   Is Boot Device: false
   Device Max Queue Depth: 64
   No of outstanding IOs with competing worlds: 32
   Drive Type: unknown
   RAID Level: unknown
   Number of Physical Drives: unknown
   Protection Enabled: false
   PI Activated: false
   PI Type: 0
   PI Protection Mask: NO PROTECTION
   Supported Guard Types: NO GUARD SUPPORT
   DIX Enabled: false
   DIX Guard Type: NO GUARD SUPPORT
   Emulated DIX/DIF Enabled: false

 

More info: https://kb.vmware.com/selfservice/microsites/search.do?language=en_US&cmd=displayKC&externalId=2057513

To check the status of this, log in to OnCommand Manager and look at the respective LUN for the volume the task is being run on; you should begin to notice the free space increasing.

 

Log in to NetApp OnCommand Manager, choose the controller, go to Volumes, pick the volume the previous task was run on, and click Storage Efficiency.

Ensure 'scan entire volume' is ticked, as this will reinitialize deduplication for that volume. The process will take some time depending on the size of the volume.

After this has run, space savings should be accurate on both the LUN and volume layers within the NetApp console.
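Condensing the steps above into a checklist for one host (the device ID and datastore name are the examples from this post; the final reclaim step is the esxcli unmap command from earlier in the thread):

```
# 1. Allow VMFS block deletes to be passed down to the array
esxcli system settings advanced set --int-value 1 --option /VMFS3/EnableBlockDelete

# 2. Map the datastore to its naa device, then confirm Delete Status: supported
esxcli storage vmfs extent list
esxcli storage core device vaai status get -d naa.60a9800038303045525d4559446d2d36

# 3. Kick off the reclaim for the datastore backed by that device
esxcli storage vmfs unmap -l NetApp_Prf2_ESX_W2k3_Sas_01
```

Repeat on each host only for steps 1 and 2; the unmap itself needs to be run from a single host per datastore.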

 
 

 

https://community.netapp.com/t5/Data-ONTAP-Discussions/VAAI-Unmap-delete-status-unsupported-NetApp-FAS-Array-DataOntap-8-2-1/m-p/107028
