ONTAP Discussions
Where did the space go? Where should I look for it? The used space doesn't show it,
but the available space is reduced!
FAS3270, Data ONTAP 8.1.1 7-mode
Is there anything set (space reservation?) on the ESXi datastores, etc., causing this issue?
In the following output, we would expect to see either plenty of available space or plenty of used space
to account for the math, but used + avail is nowhere near the total space for the volume:
NAS01> df -h /vol/VMDK_VOL_01/
Filesystem total used avail capacity Mounted on
/vol/VMDK_VOL_01/ 8192GB 157GB 5165GB 37% /vol/VMDK_VOL_01/
/vol/VMDK_VOL_01/.snapshot 0TB 0TB 0TB ---% /vol/VMDK_VOL_01/.snapshot
NAS01> df -h /vol/VMDK_VOL_02/
Filesystem total used avail capacity Mounted on
/vol/VMDK_VOL_02/ 8192GB 105GB 5165GB 37% /vol/VMDK_VOL_02/
/vol/VMDK_VOL_02/.snapshot 0TB 0TB 0TB ---% /vol/VMDK_VOL_02/.snapshot
DeDuplication seems ok:
NAS01> df -sh /vol/VMDK_VOL_01/
Filesystem used saved %saved
/vol/VMDK_VOL_01/ 157GB 218GB 58%
NAS01> df -sh /vol/VMDK_VOL_02/
Filesystem used saved %saved
/vol/VMDK_VOL_02/ 105GB 266GB 72%
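As a quick sanity check of those %saved figures (a rough sketch only; I'm assuming %saved is simply saved / (used + saved)):
# Rough check of the dedup savings reported by "df -sh" above
# (assumption: %saved = saved / (used + saved), rounded)
for vol, used_gb, saved_gb in [("VMDK_VOL_01", 157, 218), ("VMDK_VOL_02", 105, 266)]:
    pct = round(100 * saved_gb / (used_gb + saved_gb))
    print(vol, f"{pct}% saved")   # prints 58% and 72%, matching the df -sh output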
NAS01> sis status
Path State Status Progress
/vol/VMDK_VOL_01 Enabled Idle Idle for 10:16:29
/vol/VMDK_VOL_02 Enabled Idle Idle for 10:24:19
NAS01> df -Ah <-- the "used" space here seems almost correct, based on a similar environment we have that is fully built; the ESXi environment on this filer is still being built.
Aggregate total used avail capacity
aggr_VMDK_01 18TB 13TB 5165GB 73%
aggr_VMDK_01/.snapshot 0KB 856KB 0KB ---%
These large volumes are exported to an ESXi environment that is still being built.
Snapshots have neither been created recently nor deleted.
These are stand-alone volumes (not a SnapMirror or SnapVault source/destination, etc.).
Deduplication was enabled a while ago, and I do not see anything out of the ordinary.
Hi Sridhar,
There is a burt (bug) on 8.1.x about stale dedup metadata on deduplicated volumes. It is very important to say that this burt does NOT cause loss of data; it just leaves stale metadata behind for blocks that no longer exist in the volume.
Run "sis start -s /path/to/volume" and, when it is done, check the space with the "df" command.
All the best,
Rodrigo Nascimento
NetApp - Enjoy it!
I just ran the "sis start -s /vol/VMDK_VOL_01" command you suggested; there is NO change in the erroneous capacity being reported, see below:
NAS01> sis status
Path State Status Progress
/vol/VMDK_VOL_01 Enabled Idle Idle for 00:14:27 <-- just ran again 15 mins ago
NAS01> df -h /vol/VMDK_VOL_01
Filesystem total used avail capacity Mounted on
/vol/VMDK_VOL_01/ 8192GB 98GB 5235GB 36% /vol/VMDK_VOL_01/
/vol/VMDK_VOL_01/.snapshot 0TB 0TB 0TB ---% /vol/VMDK_VOL_01/.snapshot
NAS01> df -sh /vol/VMDK_VOL_01
Filesystem used saved %saved
/vol/VMDK_VOL_01/ 98GB 278GB 74%
Thx
Sri
Sri,
If you use thin provisioning and overprovision the aggregate (e.g., a 10TB volume on a 5TB aggregate), the volume will show up as 50% used (as this is the space that cannot be guaranteed).
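A minimal sketch of that calculation (my assumption: for a guarantee=none volume, df caps "avail" at the aggregate's free space and reports capacity as (total - avail) / total):
# Sketch: what df shows for a thin-provisioned (guarantee=none) volume
# when the aggregate has less free space than the volume size.
def df_for_thin_volume(vol_total_gb, vol_used_gb, aggr_avail_gb):
    avail = min(vol_total_gb - vol_used_gb, aggr_avail_gb)   # capped by the aggregate
    capacity_pct = round(100 * (vol_total_gb - avail) / vol_total_gb)
    return avail, capacity_pct

# Empty 10 TB volume on an aggregate with only 5 TB free -> reported ~50% used
print(df_for_thin_volume(10240, 0, 5120))   # (5120, 50)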
Please post a "df -Ag" and an "aggr show_space -h" for us.
Kind regards
Thomas
Data posted,
please send me an "aggr show_space"
thanks,
Rodrigo Nascimento
NAS01> df -Ag
Aggregate total used avail capacity
aggr_SAS_01 19194GB 13985GB 5208GB 73%
aggr_SAS_01/.snapshot 0GB 0GB 0GB ---%
NAS01> aggr show_space -h
Aggregate 'aggr_SAS_01'
Total space WAFL reserve Snap reserve Usable space BSR NVLOG A-SIS Smtape
20TB 2132GB 0KB 18TB 0KB 15GB 0KB
Space allocated to volumes in the aggregate
...
Volume Allocated Used Guarantee
VMDK_VOL_01 146GB 101GB none
VMDK_VOL_02 193GB 148GB none
VSWP_VOL_01 40GB 12GB none
...
Aggregate Allocated Used Avail
Total space 13TB 986GB 5208GB
Snap reserve 0KB 856KB 0KB
WAFL reserve 2132GB 192GB 1940GB
Again, the df output; I am wondering why "Allocated" above shows so little space for these volumes?
NAS01> df -h VMDK_VOL_01
Filesystem total used avail capacity Mounted on
/vol/VMDK_VOL_01/ 8192GB 108GB 5208GB 36% /vol/VMDK_VOL_01/
/vol/VMDK_VOL_01/.snapshot 0TB 0TB 0TB ---% /vol/VMDK_VOL_01/.snapshot
As I said, you thin provisioned the volumes and overprovisioned the aggregate, so the volumes will always appear filled to a certain amount.
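Plugging in the numbers you just posted (a sketch only, under the same assumption that a guarantee=none volume's "avail" is capped by the aggregate's free space):
# Numbers from the "df -Ag" and "df -h VMDK_VOL_01" output above (GB)
aggr_total, aggr_used = 19194, 13985
aggr_avail = aggr_total - aggr_used                  # ~5209 GB; df shows 5208 GB
vol_total, vol_used = 8192, 108
vol_avail = min(vol_total - vol_used, aggr_avail)    # capped at the aggregate's free space
capacity_pct = round(100 * (vol_total - vol_avail) / vol_total)
print(vol_avail, capacity_pct)                       # ~5208 GB avail, 36% capacity, as df reports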
Can you post here the list of volumes (df -h) from this aggregate?
But I agree with Thomas: your volume guarantee is none, so you are thin provisioning and probably overprovisioning your volumes.
All the best,
Rodrigo Nascimento
NetApp - Enjoy it!
To be precise: he's not overprovisioning the volumes, he's overprovisioning the aggregate, as the total size of the volumes does not fit on the aggregate. That is totally fine to do as long as the provisioned amount of LUN space is not bigger than the usable capacity of the aggregate (a small sketch of this check follows the examples below).
E.g.:
10TB aggregate, 9TB volumes - fine
10TB aggregate, 20TB volumes - fine (in this case, volumes will be 50% used!)
10TB aggregate, 9TB of LUNs - fine
10TB aggregate, 20TB of LUNs - bad (at least when you're not 100% sure what you're doing 😉)
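A minimal sketch of that rule of thumb (assumptions: sizes in GB, and "usable capacity" means the aggregate's usable space as shown by "aggr show_space"):
# Rule of thumb from the examples above:
# - overcommitting volume sizes is fine (thin volumes just show inflated %used)
# - committing more reserved LUN space than the aggregate's usable capacity is bad
def provisioning_check(aggr_usable_gb, vol_sizes_gb, lun_sizes_gb):
    if sum(lun_sizes_gb) > aggr_usable_gb:
        return "bad: committed LUN space exceeds the aggregate's usable capacity"
    if sum(vol_sizes_gb) > aggr_usable_gb:
        return "fine, but thin volumes will show inflated %used (see the 50% example)"
    return "fine"

print(provisioning_check(10240, [9216], []))                          # fine
print(provisioning_check(10240, [10240, 10240], []))                  # fine, inflated %used
print(provisioning_check(10240, [10240, 10240], [10240, 10240]))      # bad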
Thomas,
I agree again! 😉
I was just asking for the "df -h" so I could explain it to him using his own environment. Your examples were perfect!
All the best,
Rodrigo Nascimento
NetApp - Enjoy it!