2016-10-11 03:04 PM - edited 2016-10-11 05:24 PM
We have come up against the 16TB size limit for a LUN in ONTAP 8.3.1. The LUN is on a volume that has been grown to 23.04TB, yet the LUN is only 16TB. There are no other LUNs on the volume, yet the volume is 100% full. What is using this extra space on the volume? What is "Reserved Space for Overwrites" and why is it 6.45TB?
There are no snapshots on the volume, and the snapshot reserve is 5%.
How can we reclaim this 7+TB of space on the volume?
lun show:
/vol/VMDK_01/VMDK_01 online mapped vmware 15.97TB

vol show:
VMDK_01 netapp_clr301_01_aggr1 online RW 23.04TB 0B 100%

vol show detail:
netapp-clr301::> vol show -vserver netapp-iscsi301 -volume VMDK_01

                               Vserver Name: netapp-iscsi301
                                Volume Name: VMDK_01
                             Aggregate Name: netapp_clr301_01_aggr1
                                Volume Size: 23.04TB
                         Volume Data Set ID: 1100
                  Volume Master Data Set ID: 2147484748
                               Volume State: online
                                Volume Type: RW
                               Volume Style: flex
                     Is Cluster-Mode Volume: true
                      Is Constituent Volume: false
                              Export Policy: default
                                    User ID: 0
                                   Group ID: 0
                             Security Style: unix
                           UNIX Permissions: ---rwxr-xr-x
                              Junction Path: -
                       Junction Path Source: -
                            Junction Active: -
                     Junction Parent Volume: -
                                    Comment:
                             Available Size: 0B
                            Filesystem Size: 23.04TB
                    Total User-Visible Size: 21.89TB
                                  Used Size: 21.89TB
                            Used Percentage: 100%
      Volume Nearly Full Threshold Percent: 95%
              Volume Full Threshold Percent: 98%
       Maximum Autosize (for flexvols only): 30TB
(DEPRECATED)-Autosize Increment (for flexvols only): 1GB
                           Minimum Autosize: 23.04TB
         Autosize Grow Threshold Percentage: 98%
       Autosize Shrink Threshold Percentage: 50%
                              Autosize Mode: off
       Autosize Enabled (for flexvols only): false
        Total Files (for user-visible data): 31876689
         Files Used (for user-visible data): 101
                      Space Guarantee Style: volume
                  Space Guarantee in Effect: true
          Snapshot Directory Access Enabled: true
         Space Reserved for Snapshot Copies: 5%
                     Snapshot Reserve Used: 0%
                            Snapshot Policy: none
                              Creation Time: Fri May 20 14:22:19 2016
                                   Language: C.UTF-8
                               Clone Volume: false
                                  Node name: netapp-clr301-01
                              NVFAIL Option: on
                      Volume's NVFAIL State: false
    Force NVFAIL on MetroCluster Switchover: off
                 Is File System Size Fixed: false
                              Extent Option: off
              Reserved Space for Overwrites: 6.45TB
                         Fractional Reserve: 100%
          Primary Space Management Strategy: volume_grow
                  Read Reallocation Option: off
          Inconsistency in the File System: false
              Is Volume Quiesced (On-Disk): false
            Is Volume Quiesced (In-Memory): false
  Volume Contains Shared or Compressed Data: true
        Space Saved by Storage Efficiency: 660.4GB
   Percentage Saved by Storage Efficiency: 3%
              Space Saved by Deduplication: 660.4GB
         Percentage Saved by Deduplication: 3%
             Space Shared by Deduplication: 72.02GB
                Space Saved by Compression: 0B
     Percentage Space Saved by Compression: 0%
       Volume Size Used by Snapshot Copies: 0B
                                 Block Type: 64-bit
                           Is Volume Moving: false
            Flash Pool Caching Eligibility: read-write
Flash Pool Write Caching Ineligibility Reason: -
                Managed By Storage Service: -
Create Namespace Mirror Constituents For SnapDiff Use: -
                    Constituent Volume Role: -
                      QoS Policy Group Name: -
                        Caching Policy Name: -
         Is Volume Move in Cutover Phase: false
  Number of Snapshot Copies in the Volume: 0
VBN_BAD may be present in the active filesystem: false
         Is Volume on a hybrid aggregate: false
                  Total Physical Used Size: 5.87TB
                  Physical Used Percentage: 25%
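As a quick sanity check on the output above (a sketch, not ONTAP's exact internal accounting): the 100% Used Percentage follows directly from the 5% snapshot reserve, since user-visible space is the filesystem size minus that reserve, and Used Size matches it exactly.

```python
# Figures taken from the `vol show` output above (TB).
filesystem_size_tb = 23.04   # "Filesystem Size"
snap_reserve = 0.05          # "Space Reserved for Snapshot Copies: 5%"
used_size_tb = 21.89         # "Used Size"

# User-visible space = filesystem size minus the snapshot reserve.
user_visible_tb = filesystem_size_tb * (1 - snap_reserve)
print(round(user_visible_tb, 2))   # 21.89, matching "Total User-Visible Size"

# Used percentage is measured against the user-visible space.
used_pct = used_size_tb / user_visible_tb * 100
print(round(used_pct))             # 100, matching "Used Percentage: 100%"
```

So the volume really is full from ONTAP's perspective, even though "Total Physical Used Size" is only 5.87TB: the gap is reserved space, not written data.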
Solved!
2016-10-11 11:44 PM - edited 2016-10-11 11:45 PM
There are no snapshots on the volume
You have deduplication enabled, and deduplication internally works with snapshots.
wsanderstii wrote:netapp-clr301::> vol show -vserver netapp-iscsi301 -volume VMDK_01 ...
Reserved Space for Overwrites: 6.45TB
Fractional Reserve: 100%
Set fractional reserve to 0; this should free the reserved space. Please note that this may result in an out-of-space condition during writes to the LUN.
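For reference, a sketch of the clustershell command to do this, using the vserver and volume names from the output above (verify the option name against your ONTAP release before running):

```
netapp-clr301::> volume modify -vserver netapp-iscsi301 -volume VMDK_01 -fractional-reserve 0
```

With fractional reserve at 0 and no volume autogrow or snapshot autodelete configured, overwrites to the space-reserved LUN can fail once the volume fills, so pair this change with monitoring or an autosize policy.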
2016-10-12 11:28 AM
Yes, after setting fractional reserve to 0 the space was immediately freed up. I guess after working with netapps for 20+ years I still don't understand fractional reserve. Perhaps no one does :-)
The unanswered question is why the fractional reserve claimed 23+ TB for the volume when it only has one LUN on it. Reservation is *on* for the LUN, so the volume should only need 16TB for the LUN plus some space (6+ TB?!?) for metadata and deduplication. (The LUN is 99% full, but reservation should make that irrelevant.) "vol show-footprint" and "vol show-space" didn't hint at that.
I am still mystified why "show footprint" showed this
netapp-clr301::> vol show-footprint -volume VMDK_01

      Vserver : netapp-iscsi301
      Volume  : VMDK_01

      Feature                          Used       Used%
      -------------------------------- ---------- -----
      Volume Data Footprint            5.88TB       10%
      Volume Guarantee                 16.93TB      30%
      Flexible Volume Metadata         131.1GB       0%
      Deduplication                    8KB           0%
      Delayed Frees                    235.9GB       0%

      Total Footprint                  23.17TB      41%
when the 99%-full 16TB LUN, with reservation on, is the only thing on the volume.
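The footprint rows do at least sum to the reported total, so the puzzle is what the "Volume Guarantee" row covers rather than the arithmetic itself. A quick check (values from the output above, GB converted at 1024 GB/TB; the interpretation in the comments is my reading, not ONTAP documentation):

```python
# Feature rows from `vol show-footprint`, converted to TB (1 TB = 1024 GB).
footprint_tb = {
    "Volume Data Footprint": 5.88,          # blocks actually written
    "Volume Guarantee": 16.93,              # space reserved in the aggregate but not yet written
    "Flexible Volume Metadata": 131.1 / 1024,
    "Deduplication": 8 / 1024**3,           # 8KB, negligible
    "Delayed Frees": 235.9 / 1024,          # freed blocks not yet returned to the aggregate
}

total_tb = sum(footprint_tb.values())
print(round(total_tb, 2))   # 23.17, matching "Total Footprint: 23.17TB"
```

On this reading, the Volume Guarantee row silently includes both the unwritten part of the space-reserved LUN and the overwrite reserve created by the 100% fractional reserve, which is why neither show-footprint nor show-space calls the fractional reserve out by name.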
2016-10-12 11:12 PM
I guess after working with netapps for 20+ years I still don't understand fractional reserve. Perhaps no one does :-)
LUN space management (including fractional reserve) is described pretty extensively in TR-3483. I am surprised you never came around to reading it in 20+ years. If I am mistaken and you did, do you have a specific question about the content of this TR?
2016-10-13 09:24 AM - edited 2016-10-13 09:25 AM
Thanks - that was a failed attempt at humor. I can say I don't recall ever deliberately turning fractional reserve on. There are only a few use cases in TR-3483 where its use is mentioned.