DEDUPLICATION CONSIDERATIONS WITH VMFS AND RDM LUNS
Enabling deduplication when provisioning LUNs produces storage savings. However, the default behavior of a LUN is to reserve an amount of storage equal to the provisioned size of the LUN. This design means that although the storage array reduces the amount of capacity consumed, any gains made with deduplication go largely unrecognized, because the space reserved for the LUNs is not reduced. To realize the storage savings of deduplication with LUNs, you must enable NetApp LUN thin provisioning.
Note: Although deduplication reduces the amount of consumed storage, the VMware administrative team does not see this benefit directly, because its view of the storage is at the LUN layer, and LUNs always report their provisioned capacity, whether they are traditionally or thin provisioned. The NetApp Virtual Storage Console (VSC) provides the VI administrator with the storage use at all layers in the storage stack.
When you enable dedupe on thin-provisioned LUNs, NetApp recommends deploying these LUNs in FlexVol volumes that are also thin provisioned, with a capacity that is 2x the size of the LUN. When the LUN is deployed in this manner, the FlexVol volume acts merely as a quota: the storage consumed by the LUN is reported in the FlexVol volume and its containing aggregate.
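As a rough sketch of this layout on Data ONTAP 7-Mode (aggregate, volume, and LUN names here are hypothetical), a 250g thin LUN would sit in a 500g thin-provisioned FlexVol volume:

vol create datastore_vol -s none aggr1 500g                          # -s none = no space guarantee (thin volume)
sis on /vol/datastore_vol                                            # enable deduplication on the volume
lun create -s 250g -t vmware -o noreserve /vol/datastore_vol/lun0   # -o noreserve = thin LUN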
> If I go to System Manager or VSC, the LUN free space is 90%, nonetheless.
I could only find the "% Used" column in System Manager. Do you mean "% Used" is still 90% even after you have deleted files on the datastore? Used space at the LUN level does not reflect usage at the filesystem level. Once a block is actually allocated from the aggregate, it is counted as "used" and is never freed, even after files are removed from VMFS. Of course, these "used" blocks are reused by VMFS.
Usage statistics at the LUN level are not useful for users and admins, so ignore the difference between "df" on ESXi and the LUN used space on ONTAP.
> I could only find the "% Used" column in System Manager. Do you mean "% Used" is still 90% even after you have deleted files on the datastore?
Yes, that's exactly what I mean.
> Used space at the LUN level does not reflect usage at the filesystem level. Once a block is actually allocated from the aggregate, it is counted as "used" and is never freed, even after files are removed from VMFS. Of course, these "used" blocks are reused by VMFS.
> Usage statistics at the LUN level are not useful for users and admins, so ignore the difference between "df" on ESXi and the LUN used space on ONTAP.
I deleted the VMs and afterward ran an unmap command over the datastore to give the freed blocks back to the storage array. I thought the storage array takes advantage of VAAI and VASA to solve this kind of issue.
I found a similar issue. Are you sure that the unmap command actually ran? Is Delete Status supported? To check, run "esxcli storage core device vaai status get". If Delete Status is unsupported, run "lun set space_alloc <lun path> enable". I removed a 6TB vmdk and 6 weeks later I'm still having to reissue the unmap command from the host. Apparently there is a bug in 7-Mode, which has not yet been made public, that causes the unmap command to time out (it took me 6 weeks of speaking to support to find this out). You can see whether it's making any progress by running esxtop: select u for the disk device view, then f, o, and p. Look at the MBDEL/s column; if there's anything in there, unmap is still running (very slowly in my case). If not, issue another unmap and see if anything appears.
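For reference, on ESXi 5.5 and later the unmap is typically issued like this (the datastore name here is hypothetical; ESXi 5.0/5.1 used vmkfstools -y instead):

esxcli storage vmfs unmap -l MyDatastore        # -l = datastore label; -n can cap the reclaim-unit count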
I had the same issue, and we use FC LUNs presented the same way you do. I did have to set specific flags.
In order for these features of VAAI Thin Provisioning to work as expected, the LUN must have space allocation enabled. This is NOT the default in any version of Data ONTAP. For Data ONTAP 7-Mode, run the lun set space_alloc <lun path> enable command.
First, find the LUN path for each volume by running the lun show command (ensure this is run on each controller).
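On 7-Mode, lun show lists each LUN path with its size and state, along these lines (the path and size shown here are hypothetical):

lun show
        /vol/datastore_vol/lun0   250g (268435456000)   (r/w, online, mapped)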
To set it:
lun set space_alloc /vol/flexvolname/lun_name enable
To check whether it is enabled, run lun set space_alloc /vol/volume/lun_name with no argument; the current setting is displayed.
You also need to check on the ESXi host side whether it is set up correctly.
1. Log on to each host's CLI using PuTTY and run:
esxcli system settings advanced list --option /VMFS3/EnableBlockDelete
The output includes:
Int Value: 0   <<<<<<<<<< 0 means disabled
Default Int Value: 1
Min Value: 0
Max Value: 1
Default String Value:
Description: Enable VMFS block delete
If Int Value is set to 0, this needs to be enabled:
esxcli system settings advanced set --int-value 1 --option /VMFS3/EnableBlockDelete
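With many hosts, a minimal shell sketch like the following could apply the setting to each one (host names are hypothetical, and this assumes SSH is enabled on the hosts):

for host in esx01 esx02; do
    ssh root@$host 'esxcli system settings advanced set --int-value 1 --option /VMFS3/EnableBlockDelete'
done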
2. Type 'esxcli storage vmfs extent list' to list each datastore (across both hosts) with its associated UUID and naa number.
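The output is a table along these lines (the VMFS UUID is left as a placeholder here):

Volume Name                  VMFS UUID    Extent Number  Device Name                            Partition
NetApp_Prf2_ESX_W2k3_Sas_01  <VMFS UUID>  0              naa.60a9800038303045525d4559446d2d36   1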
Pick the naa ID of any volume that the task needs to be run on.
3. Using NetApp_Prf2_ESX_W2k3_Sas_01 with naa.60a9800038303045525d4559446d2d36 as an example, type:
esxcli storage core device vaai status get -d naa.60a9800038303045525d4559446d2d36
You should see Delete Status say 'supported'. If it says 'unsupported', one of the previous steps hasn't been performed properly.
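On a healthy device the output looks roughly like this (the VAAI plugin name varies by array and claim rules):

naa.60a9800038303045525d4559446d2d36
   VAAI Plugin Name: VMW_VAAIP_NETAPP
   ATS Status: supported
   Clone Status: supported
   Zero Status: supported
   Delete Status: supported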
You can run another command to check a more detailed status:
4. esxcli storage core device list -d naa.60a9800038303045525d4559446d2d36
Display Name: NETAPP Fibre Channel Disk (naa.60a9800038303045525d4559446d2d36)
Has Settable Display Name: true
Size: 2621563
Device Type: Direct-Access
Multipath Plugin: NMP
Devfs Path: /vmfs/devices/disks/naa.60a9800038303045525d4559446d2d36
Vendor: NETAPP
Model: LUN
Revision: 820a
SCSI Level: 5
Is Pseudo: false
Status: on
Is RDM Capable: true
Is Local: false
Is Removable: false
Is SSD: false
Is VVOL PE: false
Is Offline: false
Is Perennially Reserved: false
Queue Full Sample Size: 0
Queue Full Threshold: 0
Thin Provisioning Status: yes
Attached Filters: VAAI_FILTER
VAAI Status: supported
Other UIDs: vml.020037000060a9800038303045525d4559446d2d364c554e202020
Is Shared Clusterwide: true
Is Local SAS Device: false
Is SAS: false
Is USB: false
Is Boot USB Device: false
Is Boot Device: false
Device Max Queue Depth: 64
No of outstanding IOs with competing worlds: 32
Drive Type: unknown
RAID Level: unknown
Number of Physical Drives: unknown
Protection Enabled: false
PI Activated: false
PI Type: 0
PI Protection Mask: NO PROTECTION
Supported Guard Types: NO GUARD SUPPORT
DIX Enabled: false
DIX Guard Type: NO GUARD SUPPORT
Emulated DIX/DIF Enabled: false
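The fields to check in this output are Thin Provisioning Status: yes, Attached Filters: VAAI_FILTER, and VAAI Status: supported.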