ONTAP Hardware
Hello,
We seem to be missing around 3.8TB of capacity on our volume.
df -g output:
Filesystem total used avail capacity Mounted on
/vol/nas_ist_wdtd_01/ 4096GB 168GB 309GB 92% /vol/nas_ist_wdtd_01/
/vol/nas_ist_wdtd_01/.snapshot 0GB 3GB 0GB ---% /vol/nas_ist_wdtd_01/.snapshot
Notice that the used space is only 168GB on a 4TB NFS volume, yet the reported capacity is already 92%!
The host this NFS volume is presented to also shows 92% used:
stgc308vf001:CDF 3727.36 307.60 92% 0 0% /mnt/insightshare/cdf
stgc308vf001:training$ 3727.36 307.60 92% 0 0% /mnt/insightshare/training
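As a quick sanity check of those df numbers (just the arithmetic as df appears to report it, not ONTAP internals):

total_gb = 4096   # volume size from df -g
used_gb = 168     # space actually consumed
avail_gb = 309    # space df reports as available
print(used_gb / total_gb)                # ~0.04 -> only 4% genuinely used
print((total_gb - avail_gb) / total_gb)  # ~0.92 -> the 92% df reports

So the 92% tracks (total - avail) / total, which makes the small avail figure the real puzzle.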
I've tried running sis start -s on this volume and also resetting sis on it (as suggested for the known dedupe bug), but neither made any difference.
I've also already set the snap reserve to 0.
vol options output:
nosnap=on, nosnapdir=off, minra=off, no_atime_update=off, nvfail=off,
ignore_inconsistent=off, snapmirrored=off, create_ucode=off,
convert_ucode=off, maxdirsize=41861, schedsnapname=ordinal,
fs_size_fixed=off, guarantee=none, svo_enable=off, svo_checksum=off,
svo_allow_rman=off, svo_reject_errors=off, no_i2p=off,
fractional_reserve=0, extent=off, try_first=volume_grow,
read_realloc=off, snapshot_clone_dependency=off, dlog_hole_reserve=off,
nbu_archival_snap=off
NetApp version: NetApp Release 8.1.4P8 7-Mode.
Note that this volume was originally set up with guarantee=file and fractional_reserve=100. I changed these to guarantee=none and fractional_reserve=0, but with no effect.
Hoping for your kind assistance.
regards,
jmpal
Additional info on the snap reserve:
It was previously 9% (approx. 300GB+), but I changed it to 0%.
snap reserve -V
before:
Volume nas_ist_wdtd_01: current snapshot reserve is 9% or 386547056 k-bytes. <<< 385GB
after:
Volume nas_ist_wdtd_01: current snapshot reserve is 0% or 0 k-bytes.
There are two system snapshots (SnapMirror), but both are very small:
snap reclaimable nas_ist_wdtd_01 stgs307fas550(0151741186)_nas_ist_wdtr_01.622
Processing (Press Ctrl-C to exit) ...
snap reclaimable: Approximately 3634292 Kbytes would be freed.
snap reclaimable nas_ist_wdtd_01 snapshot_for_backup.5607
Processing (Press Ctrl-C to exit) ...
snap reclaimable: Approximately 1528 Kbytes would be freed.
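Converting those reclaimable figures to friendlier units (plain arithmetic only):

print(3634292 / 1024**2)  # ~3.5 GB for the snapmirror snapshot
print(1528 / 1024)        # ~1.5 MB for snapshot_for_backup.5607

So even reclaiming both snapshots would free under 4GB, nowhere near the missing 3.8TB.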
thanks,
jmp
Hi, interesting!
Could you supply the output of the following for the nas_ist_wdtd_01 volume and hosting aggregate, as well as the SnapMirror destination:
vol size
df -h
df -h -S
df -h -A
aggr show_space -h
FYI, the volume guarantee setting would not affect the used space in the volume, but rather in the aggregate. Also, fractional_reserve only affects SAN (LUN) volumes. Just to explain why you've seen no change in the volume's used space.
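To illustrate the guarantee point with a rough sketch (my own simplification of the behaviour, not ONTAP code; the numbers are your volume's):

def aggr_allocated(vol_size_gb, vol_used_gb, guarantee):
    # thick (guarantee=volume): the aggregate sets aside the full volume size
    if guarantee == "volume":
        return vol_size_gb
    # thin (guarantee=none): the aggregate only gives up what is actually written
    # (guarantee=file reserves space per reserved file; ignored in this sketch)
    return vol_used_gb

print(aggr_allocated(4096, 168, "volume"))  # 4096 GB taken from the aggregate
print(aggr_allocated(4096, 168, "none"))    # 168 GB taken from the aggregate

Either way, the used space inside the volume itself stays 168GB; only the aggregate-side allocation changes.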
Have you raised a Support case for this yet?
Thanks.
Hi sgrant,
Thank you for your reply.
I just noticed that overnight the volume's reported capacity has gone down from 92% to 60%, although the used space is still the same (165GB). The only thing I can think of is that the sis reset or sis start -s (I ran both) was doing background cleanup.
Still, the reported capacity (60%) does not reflect the actual usage of 165GB out of 4TB.
Also, last night we were cleaning up and vMotioning servers away from aggr1 to aggr0, as aggr1 was hitting 93-95%.
I don't have a copy of the df -Ag output from last night, but it was approximately aggr0 37%, aggr1 93%. At the moment it is aggr0 57%, aggr1 63%.
Here are the outputs from this morning:
vol size
vol size: Flexible volume 'nas_ist_wdtd_01' has size 4t.
df -h
Filesystem total used avail capacity Mounted on
/vol/nas_ist_wdtd_01/ 4096GB 165GB 1643GB 60% /vol/nas_ist_wdtd_01/
/vol/nas_ist_wdtd_01/.snapshot 0MB 264MB 0MB ---% /vol/nas_ist_wdtd_01/.snapshot
df -h -S
Filesystem used total-saved %total-saved deduplicated %deduplicated compressed %compressed
/vol/nas_ist_wdtd_01/ 165GB 133GB 45% 133GB 45% 0GB 0%
df -h -A
Aggregate total used avail capacity
aggr0 7347GB 4203GB 3143GB 57%
aggr0/.snapshot 0TB 0TB 0TB ---%
aggr1 4408GB 2764GB 1643GB 63%
aggr1/.snapshot 0TB 0TB 0TB ---%
aggr show_space -h
Aggregate 'aggr1'
Total space WAFL reserve Snap reserve Usable space BSR NVLOG A-SIS Smtape
4898GB 489GB 0KB 4408GB 0KB 15GB 0KB
Space allocated to volumes in the aggregate
Volume Allocated Used Guarantee
nas_ist_wdtd_01 189GB 169GB none
vfvol1 100GB 361MB volume
nas_xca02_ha_0001 12MB 8892KB none
nas_xca02_wos_0001 150GB 149GB none
nas_xca02_trt_0001 100GB 98GB none
n2_n0536_eci02_orasb01 430GB 426GB none
n2_n0536_eci02_03 1776GB 1760GB none
Aggregate Allocated Used Avail
Total space 2749GB 2603GB 1643GB
Snap reserve 0KB 0KB 0KB
WAFL reserve 489GB 51GB 438GB
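Lining those outputs up, one thing stands out: the volume's avail (1643GB) is exactly aggr1's avail, and the 60% falls straight out of that (a rough check, assuming df computes capacity as (total - avail) / total):

total_gb = 4096   # vol size
avail_gb = 1643   # identical to aggr1's avail in df -h -A
print((total_gb - avail_gb) / total_gb)  # ~0.60 -> the 60% df now reports

That would also explain why the percentage dropped overnight as the vMotions freed space in aggr1.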
SnapMirror destination:
vol size
vol size: Flexible volume 'nas_ist_wdtd_01' has size 4t.
df -h
Filesystem total used avail capacity Mounted on
/vol/nas_ist_wdtd_01/ 3891GB 10GB 1320GB 66%
/vol/nas_ist_wdtd_01/.snapshot 204GB 0TB 204GB 0% /vol/nas_ist_wdtd_01/.snapshot
df -h -S
Filesystem used total-saved %total-saved deduplicated %deduplicated compressed %compressed
/vol/nas_ist_wdtd_01/ 10GB 3459MB 25% 3459MB 25% 0MB 0%
df -h -A
Aggregate total used avail capacity
aggr0 8082GB 5772GB 2309GB 71%
aggr0/.snapshot 0TB 0TB 0TB ---%
aggr1 8816GB 7496GB 1320GB 85%
aggr1/.snapshot 0TB 0TB 0TB ---%
aggr show_space -h
Aggregate 'aggr1'
Total space WAFL reserve Snap reserve Usable space BSR NVLOG A-SIS Smtape
9796GB 979GB 0KB 8816GB 0KB 76GB 0KB
Space allocated to volumes in the aggregate
Volume Allocated Used Guarantee
vfvol1 101GB 1362MB volume
nas_ist_wdtd_01 40GB 16GB file
nas_ist_wdtr_01 192GB 171GB none
fc_bakprdinfl015_data 291GB 290GB none
nas_inf_sdtd_03 1047GB 1040GB none
nas_xsa02_ha_0001 14MB 10MB none
nas_xsa02_wos_0001 454GB 452GB none
nas_xsa02_trt_0001 162GB 160GB none
fc_bakprdinfl011_data_r2 854GB 619GB volume
nas_inf_sdtd_04 1317GB 1310GB none
nas_esi02_adtd_01 1394GB 1386GB file
nas_esi02_adtd_02 1379GB 1372GB file
fc_bakprdinfl015_nsr_temp 183GB 182GB none
Aggregate Allocated Used Avail
Total space 7419GB 7004GB 1320GB
Snap reserve 0KB 0KB 0KB
WAFL reserve 979GB 104GB 875GB
I can no longer open a support case for this array, as it's an old FAS3140 and we no longer have maintenance on it; we are planning to migrate the data to a newer filer.
Thanks,
jmp
Hi aborzenkov,
Thank you for pointing this out. This answers the question of the missing capacity: df can only show as available the maximum free space the aggregate can offer, so for this thin-provisioned volume the capacity percentage was tracking the aggregate's free space rather than the volume's own usage.
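For anyone finding this later, here's a back-of-envelope model with this thread's numbers (my assumption of the behaviour, not an official ONTAP formula):

def df_capacity(vol_total_gb, vol_used_gb, aggr_free_gb):
    # for a thin (guarantee=none) volume, df caps avail at the aggregate's free space
    avail = min(vol_total_gb - vol_used_gb, aggr_free_gb)
    return (vol_total_gb - avail) / vol_total_gb

print(df_capacity(4096, 168, 309))   # ~0.92 -> the original 92%
print(df_capacity(4096, 165, 1643))  # ~0.60 -> the 60% after freeing space in aggr1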
Appreciate your help!!!
regards,
jmp