ONTAP Discussions
Hello,
We are encountering a strange issue on Data ONTAP 8.2.4 7-Mode.
An aggregate shows more than 3 TB of data used:
Aggregate                kbytes        used       avail capacity
aggr0               21055590144  3609363136 17446227008      17%
aggr0/.snapshot               0           0           0       0%
yet there are only two volumes of around 250 GB each in that aggregate:
Filesystem                   kbytes      used      avail capacity  Mounted on
/vol/volboot/             249036800   6923172  242113628       3%  /vol/volboot/
/vol/volboot/.snapshot     13107200    112104   12995096       1%  /vol/volboot/.snapshot
/vol/volpraboot/          249036800   4945480  244091320       2%  /vol/volpraboot/
/vol/volpraboot/.snapshot  13107200     31516   13075684       0%  /vol/volpraboot/.snapshot
We can see that this space is consumed by volume footprints:
Aggregate : aggr0

Feature                          Used             Used%
-------------------------------- ---------------- -----
Volume Footprints                3.36TB             17%
Aggregate Metadata               1.85MB              0%
Total Used                       3.36TB             17%
For information, this aggregate was built with disks from a new shelf; volboot was then moved onto the new shelf (ndmpcopy), the filer was restarted, and the original aggr0 was deleted.
Should the volume footprint figure for aggr0 be the sum of the individual volume footprints? That doesn't seem to be the case, as the individual volume footprints come nowhere near 3 TB.
Is there a way to reduce the size of these footprints?
Many thanks for your help,
Regards
Hello,
The volume footprint will include the volume size as well as any metadata. See the Storage Management Guide: https://library.netapp.com/ecmdocs/ECMP1368859/html/GUID-77834FDB-81FE-4FD2-BB0F-3DF9390F3197.html
However, 3 TB does sound excessive for metadata. Note that df output only shows online volumes. While it sounds like this is a new aggregate, do you have any volumes that are not online that could account for the difference?
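A quick way to check: running vol status with no volume name should list every volume on the controller along with its state, so an offline or restricted volume would show up there even though df skips it:

vol status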
Thanks,
Grant.
Hi Grant,
Thanks for your feedback.
I checked, but no: there is no offline volume present, and no snapshots of any kind either.
Regards,
If you could supply the output of the following commands we should see the offending volume:
vol status -S
vol status -F
Let me know.
Thanks,
Grant.
Thanks a lot, Grant, for pointing me in the right direction. Here is the output:
vol status -S
Volume : volboot

Feature                          Used             Used%
-------------------------------- ---------------- -----
User Data                        6.58GB              3%
Filesystem Metadata              2.51MB              0%
Inodes                           3.23MB              0%
Snapshot Reserve                 12.5GB              5%
Total                            19.0GB              8%

Volume : volpraboot

Feature                          Used             Used%
-------------------------------- ---------------- -----
User Data                        4.66GB              2%
Filesystem Metadata              2.17MB              0%
Inodes                           3.71MB              0%
Snapshot Reserve                 12.5GB              5%
Total                            17.1GB              7%
and vol status -F
Volume : volboot

Feature                          Used             Used%
-------------------------------- ---------------- -----
Volume Data Footprint            2.66GB              0%
Volume Guarantee                 244GB               1%
Flexible Volume Metadata         1.38GB              0%
Delayed Frees                    2.42GB              0%
Total                            251GB               1%

Volume : volpraboot

Feature                          Used             Used%
-------------------------------- ---------------- -----
Volume Data Footprint            704MB               0%
Volume Guarantee                 3.09TB             16%
Flexible Volume Metadata         17.6GB              0%
Total                            3.11TB             16%
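By the way, that answers my earlier question: the aggregate figure is indeed the sum of the individual volume footprints:

  volboot        251GB  (~0.25TB)
+ volpraboot    3.11TB
= total        ~3.36TB, matching the aggregate's "Volume Footprints" line.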
There is a large volume guarantee on volpraboot, which is a SnapMirror destination. I'll do some tests and let you know.
Many thanks!
Yep, some SnapMirror destination volumes are deliberately created large to avoid problems when the source volume is autosized. Unless the SnapMirror relationship is broken and the fs_size_fixed option is turned off, it will not actually consume this space.
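If you want to double-check on your side, something along these lines should show it (commands from memory, so verify before changing anything in production):

vol options volpraboot       # look for fs_size_fixed=on in the option list
snapmirror status            # confirm the relationship is still snapmirrored

Only if the mirror were permanently broken would you shrink the volume, along the lines of:

vol options volpraboot fs_size_fixed off
vol size volpraboot 250g     # hypothetical target size; match it to the source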
Hi Grant,
Many thanks for your help; we'll review this relationship.
Regards