ONTAP Discussions

0B Snapshots for 30+ Consecutive Days

nicholsongc

I have a case open with NetApp, but their explanation of...

 

this is a cosmetic, presentational blip. Everything is working fine on the backend and those blocks in the snapshots are OK. To counter any confusion with monitoring/reporting, the volume-level snapshot consumption can be manually verified using “df -fs-type snapshot -h”.

 

...does not sit well with me.
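For reference, the volume-level check they are pointing at can also be run from the nodeshell (node and volume names are placeholders, as elsewhere in this post):

node run -node <node_name> df -h <volume_name>

The /vol/<volume_name>/.snapshot line in that output should still reflect real snapshot consumption even while the per-snapshot sizes read 0B, which is what support is leaning on to call this cosmetic.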

 

Of my 50+ volumes across 15 aggregates, only one (1) volume exhibits this behavior.  Other volumes on the same aggr as the troubled volume ALL have snapshot size values > 0B.

 

My daily snapshots since 20170505 all show 0B in size.  Snapshots on that volume from 20170316 to 20170504 show values in the 100s of GB.  So clearly something happened between May 4th and 5th.

 

Has anyone else seen this behavior on such a limited scale - only 1 volume?  I get the whole "block reclamation" explanation, but I find it hard to believe that after 36 days THIS ONE volume is having trouble completing the process when all my other volumes (busier, more heavily utilized ones included) show no 0B snapshots.

 

Thank you for any feedback you may be able to lend here.

 

Snapshot list:

(Abbreviated for your protection)

                                                             ---Blocks---
Vserver  Volume   Snapshot                           Size    Total% Used%
-------- -------- ---------------------------------- ------- ------ -----
SVM_01   vol_02
                  smvi_Daily_novmsnap_20170316030002 187.2GB      1% 4%
                  smvi_Daily_novmsnap_20170317030002 83.80GB      0% 2%
                  smvi_Daily_novmsnap_20170318030001 79.50GB      0% 2%
                  smvi_Daily_novmsnap_20170319030002 94.93GB      0% 2%
                  smvi_Daily_novmsnap_20170320030002 76.66GB      0% 2%
                  smvi_Daily_novmsnap_20170321030001 101.6GB      0% 2%
                  smvi_Daily_novmsnap_20170322030002 111.7GB      1% 2%

                                          ...

                                          ...
                  smvi_Daily_novmsnap_20170429030002 87.34GB      0% 2%
                  smvi_Daily_novmsnap_20170430030003 79.46GB      0% 2%
                  smvi_Daily_novmsnap_20170501030002 88.73GB      0% 2%
                  smvi_Daily_novmsnap_20170502030002 159.5GB      1% 4%
                  smvi_Daily_novmsnap_20170503030002 77.27GB      0% 2%
                  smvi_Daily_novmsnap_20170504030002 80.65GB      0% 2%
                  smvi_Daily_novmsnap_20170505030002      0B      0% 0%
                  smvi_Daily_novmsnap_20170506030002      0B      0% 0%
                  smvi_Daily_novmsnap_20170507030002      0B      0% 0%
                  smvi_Daily_novmsnap_20170508030002      0B      0% 0%
                  smvi_Daily_novmsnap_20170509030002      0B      0% 0%
                  smvi_Daily_novmsnap_20170510030002      0B      0% 0%
                  smvi_Daily_novmsnap_20170511030001      0B      0% 0%
                  smvi_Daily_novmsnap_20170512030003      0B      0% 0%
                  smvi_Daily_novmsnap_20170513030002      0B      0% 0%
                  smvi_Daily_novmsnap_20170514030002      0B      0% 0%
                  smvi_Daily_novmsnap_20170515030002      0B      0% 0%
                  smvi_Daily_novmsnap_20170516030002      0B      0% 0%
                  smvi_Daily_novmsnap_20170517030002      0B      0% 0%
                  smvi_Daily_novmsnap_20170518030002      0B      0% 0%
                  smvi_Daily_novmsnap_20170519030002      0B      0% 0%
                  smvi_Daily_novmsnap_20170520030002      0B      0% 0%
                  smvi_Daily_novmsnap_20170521030003      0B      0% 0%
                  smvi_Daily_novmsnap_20170522030002      0B      0% 0%
                  smvi_Daily_novmsnap_20170523030002      0B      0% 0%
                  smvi_Daily_novmsnap_20170524030002      0B      0% 0%
                  smvi_Daily_novmsnap_20170525030002      0B      0% 0%
                  smvi_Daily_novmsnap_20170526030002      0B      0% 0%
                  smvi_Daily_novmsnap_20170527030002      0B      0% 0%
                  smvi_Daily_novmsnap_20170528030002      0B      0% 0%
                  smvi_Daily_novmsnap_20170529030001      0B      0% 0%
                  smvi_Daily_novmsnap_20170530030002      0B      0% 0%
                  smvi_Daily_novmsnap_20170531030002      0B      0% 0%
                  smvi_Daily_novmsnap_20170601030002      0B      0% 0%
                  smvi_Daily_novmsnap_20170602030001      0B      0% 0%
                  smvi_Daily_novmsnap_20170603030002      0B      0% 0%
                  smvi_Daily_novmsnap_20170604030002      0B      0% 0%
                  smvi_Daily_novmsnap_20170605030002      0B      0% 0%
                  smvi_Daily_novmsnap_20170606030002      0B      0% 0%
                  smvi_Daily_novmsnap_20170607030002      0B      0% 0%
                  smvi_Daily_novmsnap_20170608030001      0B      0% 0%

85 entries were displayed.
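For completeness, the listing above is just the standard per-snapshot view, i.e. roughly:

::> volume snapshot show -vserver SVM_01 -volume vol_02

so there is nothing exotic about how it was gathered.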

 

1 ACCEPTED SOLUTION

nicholsongc

So it appears the WAFL block reclaim scan has not run since 5/5/2017...  that makes perfect sense.  So the question now is WHY?  But that's for support to dig into.


For now I can manually kick off  

 

node run -node <node_name> wafl scan ownblocks_calc <volume_name>

 

to keep the snapshot size #s good (and management off my back when they scream "We have NO backups??!").

View solution in original post

8 REPLIES

robinpeter

Do you mind posting the output of the following command?

 

::> vol show -vserver SVM_01 -volume vol_02 -instance

nicholsongc

Actual SVM, node, volume, and other identifying info have been changed to protect the innocent... as they say.

 

Vserver Name: SVM_01
Volume Name: vol_02
Aggregate Name: aggr01_node04
Volume Size: 20.60TB
Volume Data Set ID: 1063
Volume Master Data Set ID: 2147484711
Volume State: online
Volume Type: RW
Volume Style: flex
Is Cluster-Mode Volume: true
Is Constituent Volume: false
Export Policy: default
User ID: 0
Group ID: 0
Security Style: unix
UNIX Permissions: ---rwxr-xr-x
Junction Path: -
Junction Path Source: -
Junction Active: -
Junction Parent Volume: -
Comment:
Available Size: 2.30TB
Filesystem Size: 20.60TB
Total User-Visible Size: 20.60TB
Used Size: 18.30TB
Used Percentage: 88%
Volume Nearly Full Threshold Percent: 95%
Volume Full Threshold Percent: 98%
Maximum Autosize (for flexvols only): 21TB
(DEPRECATED)-Autosize Increment (for flexvols only): 512GB
Minimum Autosize: 15TB
Autosize Grow Threshold Percentage: 90%
Autosize Shrink Threshold Percentage: 50%
Autosize Mode: grow
Autosize Enabled (for flexvols only): true
Total Files (for user-visible data): 31876689
Files Used (for user-visible data): 101
Space Guarantee Style: volume
Space Guarantee in Effect: true
Snapshot Directory Access Enabled: true
Space Reserved for Snapshot Copies: 0%
Snapshot Reserve Used: 0%
Snapshot Policy: none
Creation Time: Thu Aug 21 12:59:17 2014
Language: C.UTF-8
Clone Volume: false
Node name: NODE_04
NVFAIL Option: on
Volume's NVFAIL State: false
Force NVFAIL on MetroCluster Switchover: off
Is File System Size Fixed: false
Extent Option: off
Reserved Space for Overwrites: 0B
Fractional Reserve: 0%
Primary Space Management Strategy: volume_grow
Read Reallocation Option: off
Inconsistency in the File System: false
Is Volume Quiesced (On-Disk): false
Is Volume Quiesced (In-Memory): false
Volume Contains Shared or Compressed Data: true
Space Saved by Storage Efficiency: 2.41TB
Percentage Saved by Storage Efficiency: 12%
Space Saved by Deduplication: 2.41TB
Percentage Saved by Deduplication: 12%
Space Shared by Deduplication: 424.7GB
Space Saved by Compression: 0B
Percentage Space Saved by Compression: 0%
Volume Size Used by Snapshot Copies: 10.55TB
Block Type: 64-bit
Is Volume Moving: false
Flash Pool Caching Eligibility: read-write
Flash Pool Write Caching Ineligibility Reason: -
Managed By Storage Service: -
Create Namespace Mirror Constituents For SnapDiff Use: -
Constituent Volume Role: -
QoS Policy Group Name: _Performance_Monitor_volumes
Caching Policy Name: -
Is Volume Move in Cutover Phase: false
Number of Snapshot Copies in the Volume: 85
VBN_BAD may be present in the active filesystem: false
Is Volume on a hybrid aggregate: false
Total Physical Used Size: 14.81TB
Physical Used Percentage: 72%
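One thing that stands out in that output: "Volume Size Used by Snapshot Copies" still reports 10.55TB, so the volume-level counter is clearly tracking the space even while the individual snapshots list 0B. If it helps anyone, that counter can be pulled on its own with something like the following (the field name is my best guess at the CLI equivalent of the -instance label):

::> volume show -vserver SVM_01 -volume vol_02 -fields size-used-by-snapshots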

nicholsongc

So it appears the WAFL block reclaim scan has not run since 5/5/2017...  that makes perfect sense.  So the question now is WHY?  But that's for support to dig into.


For now I can manually kick off  

 

node run -node <node_name> wafl scan ownblocks_calc <volume_name>

 

to keep the snapshot size #s good (and management off my back when they scream "We have NO backups??!").
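If anyone wants to keep an eye on the scanner itself, its progress can be checked from the nodeshell with something along these lines (an advanced-level nodeshell command, so treat this as a rough sketch; you may need "priv set advanced" first):

node run -node <node_name> wafl scan status <volume_name>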

robinpeter

"...and management off my back when they scream "We have NO backups??!" "

 

:D

 

Glad you found a solution to this.

 

And thank you for posting the workaround. Appreciate it.

silvioinacio

Please, what's your version of cDOT?

nicholsongc

This thread has a RESOLUTION.

But since you asked...  The affected system is running NetApp Release 8.3.2P9
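For anyone double-checking their own release, the clustershell reports it directly:

::> version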

silvioinacio

Thanks... We are running 8.3.0, and a volume presented the same trouble a few minutes ago. Can you post the resolution?

nicholsongc

Just scroll up this thread about four (4) posts....
