ONTAP Discussions

Aggregate allocated space & used space

alfonso_tolosa
8,994 Views

Hi Community,

I am monitoring a NetApp system and I've found that "allocated space" and "used space" for one aggregate are different. I understand that the "used space" could (or should) be bigger than the allocated space, but not the other way around.

The output of the commands is:

> aggr show_space aggr0

...

Aggregate                            Allocated                 Used                 Avail
Total space                  1424727976KB    674527236KB     10953228KB
Snap reserve                   75191224KB      15338328KB      59852896KB
WAFL reserve                167091608KB         105020KB    166986588KB

> df -A aggr0

Aggregate                    kbytes              used            avail    capacity
aggr0                   1428633264    1417679976    10953288          99%
aggr0/.snapshot       75191224        15338328    59852896          20%

How can the allocated space be more than the used space?

Thanks !

6 REPLIES

anthonyfeigl
8,995 Views

Hey Alfonso,

I am relatively new to aggregates, but off the top of my head I am wondering if you have an issue with snapshots consuming production space.

If you do a df -k or a df -g, do you see snapshots taking up 100% or more?

Something like this example:

/vol/unknownvol01/ 1468006400KB 1039536716KB 428469684KB      71%  /vol/unknownvol01/
/vol/unknownvol01/.snapshot 367001600KB 354114616KB 12886984KB     96%  /vol/unknownvol01/.snapshot

Anthony Feigl

alfonso_tolosa
8,994 Views

Well, the only snapshot reserve that is over 100% is this one (for the rest, both production and snapshot space are under 100%):

/vol/vol2/          104857600KB    24343704KB    80513896KB     23%    /vol/vol2/
snap reserve      26214400KB    30999312KB                0KB    118%    /vol/vol2/..

But even if the snapshots are consuming part of the production space, I still don't understand why the allocated space should be different. The allocated space includes the snapshot and production data, as well as the metadata needed to manage the volume, so if the snapshots are consuming part of the production space, the allocated space should stay the same; the only difference would be that not all of the production space is available for data.

anthonyfeigl
8,994 Views

Alfonso,

Found this in a NetApp doc.  Can't recall which one, but I think it has your answer.

The total allocated space in the aggregate shown by aggr show_space is the sum of the space allocated for all flexible volumes. The used space in the aggregate shown by df -A is slightly larger because it also includes some metadata required to maintain the aggregate.

Anthony

alfonso_tolosa
8,995 Views

Thanks for your effort and interest Anthony.


Yes, that's why I said that I understand the result of df -A should be greater than the output of aggr show_space, but in my case it is the opposite:


df -A reports 1417679976 KB

aggr show_space reports 1424727976 KB
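
Just to put a number on it, the difference between those two figures works out to:

1424727976 KB - 1417679976 KB = 7048000 KB (roughly 6.7 GB)

so "aggr show_space" is reporting about 6.7 GB more as allocated than "df -A" is reporting as used.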


I am trying to find out why this happens, but I have no clue.

chriskranz
8,994 Views

This could easily happen if you have thin-provisioned the storage anywhere. You can then over-allocate as much space as you like!

Check your volumes to see if any are not set to "space reservation = volume" and also check any LUNs to see if they are space reserved.
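
Off the top of my head, something like this should show the relevant settings (I'm going from memory here, so double-check the exact syntax on your Data ONTAP version; "vol2" is just taken from your earlier output as an example):

vol status -v vol2     (look in the options for the guarantee setting, e.g. guarantee=volume or guarantee=none)
lun show -v            (the Space Reservation line shows whether each LUN is reserved)
df -r                  (the reserved column shows space held back by reservations)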

There is also an unknown in this. To free up data blocks the filer does a scheduled disk scrub, which is done at the plex level. This systematically goes through each block on the system and checks for any references to it; if there are none, it scrubs (formats) the block and it becomes free again. If your system is particularly busy, or you have deleted a large amount of data, this can lead to quite a large area of unclaimed space! It will get freed up over time. Because of this you will always find a discrepancy in a normal production system; the filer simply hasn't had a chance to go back through and scrub the blocks free just yet.

alfonso_tolosa
8,994 Views

Thanks for your answer Chris, but I still don't understand why the allocated space is greater than the used space.

About volumes and LUNs: every volume is set to "space reservation = volume", and I have no LUNs configured. About the disk scrub, that would explain it if I had some lost space, or if the allocated space were much lower than the used space. But the volume space and the used space are consistent (actually, the sum of every value in the "df -k" output is just a bit lower than the used space, but that doesn't include the metadata needed for volume maintenance, so I see a direct relation between "df -k" and "df -A", but not with "aggr show_space").
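
In case it helps, this is roughly the kind of sum I mean, run from an admin host ("filer1" is just a placeholder for the filer's hostname, and the exact pipeline may need adjusting); it adds up the kbytes column of every volume line from "df -k", skipping the .snapshot lines:

rsh filer1 df -k | grep "^/vol/" | grep -v ".snapshot" | awk '{sum += $2} END {print sum " KB allocated to volumes"}'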

If I am not wrong, the REAL free space is shown by "df -A", where the sum of the used and available space is exactly the usable space for the aggregate.

So my problem is still the difference between the allocated space reported by "aggr show_space" and the used space reported by "df -A". Could it be that they look at block usage in different ways?
