I have a case open with NetApp, but their explanation of...
this is a cosmetic, presentational blip. Everything is working fine on the backend, and the blocks in those snapshots are OK. To avoid any confusion with monitoring/reporting, volume-level snapshot consumption can be manually verified using "df -fs-type snapshot -h".
...does not sit well with me.
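For anyone wanting to run the same cross-check, the per-snapshot sizes can be compared against the volume-level figure by parsing the CLI output. A minimal sketch; the snapshot names and sizes below are made up for illustration, not taken from the affected volume:

```python
# Sketch: cross-check per-snapshot sizes against the volume-level total.
# Sample data is illustrative only.

UNITS = {"B": 1, "KB": 1024, "MB": 1024**2, "GB": 1024**3, "TB": 1024**4}

def parse_size(text):
    """Convert an ONTAP-style size string like '152.4GB' or '0B' to bytes."""
    for unit in ("TB", "GB", "MB", "KB", "B"):  # try longest suffixes first
        if text.endswith(unit):
            return float(text[: -len(unit)]) * UNITS[unit]
    raise ValueError(f"unrecognized size: {text!r}")

# name -> reported size, as scraped from 'volume snapshot show' output
snapshots = {
    "daily.2017-05-04_0010": "152.4GB",
    "daily.2017-05-05_0010": "0B",
    "daily.2017-05-06_0010": "0B",
}

zero = [name for name, size in snapshots.items() if parse_size(size) == 0]
total = sum(parse_size(size) for size in snapshots.values())

print(f"snapshots reporting 0B: {zero}")
print(f"sum of per-snapshot sizes: {total / 1024**3:.1f} GiB")
# If the volume-level snapshot usage is far above this sum, the per-snapshot
# numbers are the suspect figure, not the backend.
```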
Of my 50+ volumes across 15 aggregates, only one (1) volume exhibits this behavior. All other volumes on the same aggregate as the troubled volume have snapshot size values > 0B.
My daily snapshots since 20170505 all show 0B in size. Snapshots on that volume from 20170316 to 20170504 show values in the 100s of GB. So clearly something happened between May 4th and 5th.
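That cutover can be pinned down mechanically by walking the daily snapshots in date order and reporting the first one whose size drops to 0B. A small sketch, again on made-up snapshot data:

```python
# Sketch: find the first daily snapshot that reports 0B.
# Names/sizes are illustrative; real input would come from CLI output.
from datetime import datetime

def snap_date(name):
    """Extract the date from a name like 'daily.2017-05-05_0010'."""
    stamp = name.split(".", 1)[1].split("_", 1)[0]
    return datetime.strptime(stamp, "%Y-%m-%d").date()

history = [  # (snapshot name, reported size), sorted by date
    ("daily.2017-05-03_0010", "98.7GB"),
    ("daily.2017-05-04_0010", "152.4GB"),
    ("daily.2017-05-05_0010", "0B"),
    ("daily.2017-05-06_0010", "0B"),
]

first_zero = next((name for name, size in history if size == "0B"), None)
if first_zero:
    print(f"sizes dropped to 0B starting {snap_date(first_zero)}")
```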
Has anyone else seen this behavior on such a limited scale - only 1 volume? I get the whole "block reclamation" explanation, but I find it hard to believe that after 36 days THIS ONE volume is still having trouble completing the process when all other volumes (busier, more heavily utilized, etc.) show no 0B snapshots.
Thank you for any feedback you may be able to lend here.
Actual SVM, node, volume and other identifying info have been changed to protect the innocent.... as they say.
Vserver Name: SVM_10
Volume Name: vol_02
Aggregate Name: aggr01_node04
Volume Size: 20.60TB
Volume Data Set ID: 1063
Volume Master Data Set ID: 2147484711
Volume State: online
Volume Type: RW
Volume Style: flex
Is Cluster-Mode Volume: true
Is Constituent Volume: false
Export Policy: default
User ID: 0
Group ID: 0
Security Style: unix
UNIX Permissions: ---rwxr-xr-x
Junction Path: -
Junction Path Source: -
Junction Active: -
Junction Parent Volume: -
Comment:
Available Size: 2.30TB
Filesystem Size: 20.60TB
Total User-Visible Size: 20.60TB
Used Size: 18.30TB
Used Percentage: 88%
Volume Nearly Full Threshold Percent: 95%
Volume Full Threshold Percent: 98%
Maximum Autosize (for flexvols only): 21TB
(DEPRECATED)-Autosize Increment (for flexvols only): 512GB
Minimum Autosize: 15TB
Autosize Grow Threshold Percentage: 90%
Autosize Shrink Threshold Percentage: 50%
Autosize Mode: grow
Autosize Enabled (for flexvols only): true
Total Files (for user-visible data): 31876689
Files Used (for user-visible data): 101
Space Guarantee Style: volume
Space Guarantee in Effect: true
Snapshot Directory Access Enabled: true
Space Reserved for Snapshot Copies: 0%
Snapshot Reserve Used: 0%
Snapshot Policy: none
Creation Time: Thu Aug 21 12:59:17 2014
Language: C.UTF-8
Clone Volume: false
Node name: NODE_04
NVFAIL Option: on
Volume's NVFAIL State: false
Force NVFAIL on MetroCluster Switchover: off
Is File System Size Fixed: false
Extent Option: off
Reserved Space for Overwrites: 0B
Fractional Reserve: 0%
Primary Space Management Strategy: volume_grow
Read Reallocation Option: off
Inconsistency in the File System: false
Is Volume Quiesced (On-Disk): false
Is Volume Quiesced (In-Memory): false
Volume Contains Shared or Compressed Data: true
Space Saved by Storage Efficiency: 2.41TB
Percentage Saved by Storage Efficiency: 12%
Space Saved by Deduplication: 2.41TB
Percentage Saved by Deduplication: 12%
Space Shared by Deduplication: 424.7GB
Space Saved by Compression: 0B
Percentage Space Saved by Compression: 0%
Volume Size Used by Snapshot Copies: 10.55TB
Block Type: 64-bit
Is Volume Moving: false
Flash Pool Caching Eligibility: read-write
Flash Pool Write Caching Ineligibility Reason: -
Managed By Storage Service: -
Create Namespace Mirror Constituents For SnapDiff Use: -
Constituent Volume Role: -
QoS Policy Group Name: _Performance_Monitor_volumes
Caching Policy Name: -
Is Volume Move in Cutover Phase: false
Number of Snapshot Copies in the Volume: 85
VBN_BAD may be present in the active filesystem: false
Is Volume on a hybrid aggregate: false
Total Physical Used Size: 14.81TB
Physical Used Percentage: 72%
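One thing worth noting in that output: the volume-level counters themselves contradict the 0B per-snapshot display. A quick arithmetic sketch using the figures above (assuming the field names mean what they say):

```python
# Figures taken from the 'volume show -instance' output above, in TB.
used_size = 18.30            # Used Size
snapshot_used = 10.55        # Volume Size Used by Snapshot Copies
snapshot_reserve_pct = 0     # Space Reserved for Snapshot Copies

# With a 0% snapshot reserve, snapshot blocks live in the data area,
# so the active filesystem would account for roughly:
active_data = used_size - snapshot_used
print(f"active filesystem data: ~{active_data:.2f} TB")  # ~7.75 TB

# 10.55TB of snapshot consumption at the volume level is hard to square
# with 85 snapshots that each claim to be 0B.
```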