
size in snapshots

miguel_maldonado

Hello,

This time I have a question about the snap list consumption statistics.

I have a volume: groups

total volume size: 10648 GB
file system space:  9051 GB
snapshot  reserve:  1597 GB

And currently it is filled with this amount of data:

file system space: 6046 GB, equals 67 % of total file system space
snapshot copies:     49 GB, equals  3 % of total snapshot reserve

Now, if I execute the snap list command on the volume, I get this as the last line:


snap list groups

  %/used       %/total  date          name
----------  ----------  ------------  ------
Last line:

1% ( 1%)    0% ( 0%)  Oct 10 23:07  sv_weekly.1  

According to the manual:

The %/used column shows space consumed by Snapshot copies as a percentage of disk space being used in the volume. The first number is cumulative for all Snapshot copies listed so far, and the second number is for the specified Snapshot copy alone.

I am interested in the first number:

%/used = (cumulative snapshot copy space) / (cumulative snapshot copy space + file system space) × 100

So if it is a percentage of disk space being used, then the values would be like this:

%/used = 49/(49+6046) × 100 = 49/6095 × 100 = 8%, which differs completely from the 1% shown.

On the other hand, if I work backwards:

1% = 49/(49+X) × 100

Then X would be equal to:

X = 4851 GB. Where does this number come from?
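
Working the algebra backwards as a quick Python scratch calculation (my own arithmetic, not anything from the manual):

snap_gb = 49          # cumulative snapshot copy space
pct_used = 1          # the %/used value snap list reports

# 1/100 = 49 / (49 + X)  =>  49 + X = 49 / (1/100)  =>  X = 4900 - 49
x_gb = snap_gb / (pct_used / 100) - snap_gb
print(f"X = {x_gb:.0f} GB")   # X = 4851 GB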

The %/total column shows space consumed by Snapshot copies as a percentage of total disk space (both space used and space available) in the volume.

Again I am interested in the first number:

%/total = (cumulative snapshot copy space) / (total disk space for the volume) × 100

%/total = 49/10648 × 100 = 0.46%, which rounds to the 0% shown.
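
The same check as a one-off Python sketch, using the formula as I read it from the manual:

snap_gb = 49        # cumulative snapshot copy space
total_gb = 10648    # total volume size (used plus available)

pct_total = snap_gb / total_gb * 100
print(f"%/total = {pct_total:.2f}%")   # %/total = 0.46%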

From the manuals:

Summary of the snap list command output: The %/used number is useful for planning the Snapshot copy reserve because it is more likely to remain constant as the file system fills.

I have another volume as a second example:

total vol size:     212 GB
file system:        159 GB
snapshot reserve:    53 GB

And the space being used:

file system: 67 GB
snapshot:    15 GB

And the last line of the snap list output:

24% ( 0%)    7% ( 0%)  Jul 14 06:21  exchsnap__cuenca1_07-14-2010_06.04.01__daily

So if you do the math, the numbers don't match the first equation either.

We would appreciate it if someone out there who understands these equations could explain them to us.

Thank you very much for your help,

Miguel

4 Replies

Darkstar

49 GB out of 6095 GB is about 0.8%, which is rounded to 1%, so the figures seem correct to me.
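
A quick check of that rounding in Python, using the formula quoted from the manual:

snap_gb = 49        # cumulative snapshot copy space
fs_used_gb = 6046   # file system space in use

pct_used = snap_gb / (snap_gb + fs_used_gb) * 100
print(f"%/used = {pct_used:.2f}%")   # %/used = 0.80%, which snap list rounds to 1%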

-Michael

miguel_maldonado

Hello,

You are right, I made a mistake; it is approximately 1%. However, if you look at my second example:

15/(15+67) × 100 = 18.29%, which is different from the 24% shown.

Also, I have another example from a SnapVault destination:

df -h
Filesystem               total       used      avail capacity  Mounted on

/vol/groups/            8437GB     8088GB      349GB      96%  /vol/groups/
/vol/groups/.snapshot     2109GB     4148GB        0GB     197%  /vol/groups/.snapshot
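
As a side note, the odd 197% capacity figure seems to just be the snapshot usage measured against the reserve; a quick Python check of my reading:

snap_used_gb = 4148      # used in /vol/groups/.snapshot
snap_reserve_gb = 2109   # total size of /vol/groups/.snapshot

print(f"{snap_used_gb / snap_reserve_gb * 100:.0f}%")   # 197%, matching the df output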

filerb1> aggr show_space -g

Aggregate 'aggr0'

    Total space    WAFL reserve    Snap reserve    Usable space       BSR NVLOG           A-SIS
        14895GB          1489GB           134GB         13271GB             0GB            13GB

Space allocated to volumes in the aggregate

Volume                          Allocated            Used       Guarantee
groups                            10256GB         10208GB            file

filerb1> snap list groups
Volume groups
working...

  %/used       %/total  date          name
----------  ----------  ------------  --------

54% ( 4%)   69% ( 2%)  May 10 00:33  sv_weekly.22

54% ≠ 4148/(4148+8088) × 100 = 33.90%

or the total size:

69% ≠ 4148/(2109+8437) × 100 = 39.33%
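
Here is the same arithmetic as a small Python sketch, using my reading of the manual's formulas and the df figures above:

snap_gb = 4148              # snapshot space in use
fs_used_gb = 8088           # file system space in use
vol_total_gb = 8437 + 2109  # file system size plus snapshot reserve

pct_used = snap_gb / (snap_gb + fs_used_gb) * 100
pct_total = snap_gb / vol_total_gb * 100
print(f"%/used  = {pct_used:.2f}%")    # 33.90%, not the 54% reported
print(f"%/total = {pct_total:.2f}%")   # 39.33%, not the 69% reported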

In this last example, the volume has the space guarantee "file".

Darkstar

Hmm.. I get the following (with your figures), assuming the snapshot you listed is the last (and thus largest) on the volume:

%/used:

4148 / 8088 = 51%

(snapshot space used is 4148 GB, used volume space is 8088 GB)

I guess the "difference" between my 51% and the 54% shown is that some blocks are locked in more than one snapshot and are thus counted more than once.
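
A toy illustration of that double counting in Python (completely made-up block numbers, nothing to do with your actual volume):

# Two snapshots locking overlapping sets of blocks.
snap_a = set(range(0, 100))    # blocks locked by snapshot A
snap_b = set(range(50, 150))   # blocks locked by snapshot B

summed = len(snap_a) + len(snap_b)   # shared blocks counted twice -> 200
unique = len(snap_a | snap_b)        # each block counted once     -> 150
print(summed, unique)

Depending on how snap list attributes shared blocks, the cumulative column can come out higher than a straight application of the formula would suggest.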

You could try "snap delta" to get more detailed information about the changed blocks between two snapshots.

I, too, don't get the 69%, but I guess it *might* have something to do with the fact that your snapshots are "spilling" over into the live volume.

On the other hand, the output of "snap list" is *not* authoritative, as there are several bugs in Data ONTAP which might lead to wrong values there (e.g. BUG 226848 or 347779).

-Michael

miguel_maldonado

Thank you again Michael,

One note: the formula, according to the manual, is (cumulative snapshot copy space) / (cumulative snapshot copy space + file system space).

so I believe it would be 4148/(4148+8088) × 100 = 33.90%.

Another strange thing: if I run the vol size command, I get:

vol size: Flexible volume 'groups' has size 11059540788k

which is 10547 GB; that is less than the total amount of data in use: 4148 + 8088 = 12236 GB.
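
To double-check the unit conversion, a quick Python line:

vol_size_kb = 11059540788
print(f"{vol_size_kb / 1024 / 1024:.0f} GB")   # 10547 GB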

and the options of the volume:

filerb1> vol status groups -v

         Volume State           Status            Options
         groups online          raid_dp, flex     nosnap=off, nosnapdir=off,
                                                  minra=off, no_atime_update=off,
                                                  nvfail=off,
                                                  ignore_inconsistent=off,
                                                  snapmirrored=off,
                                                  create_ucode=on,
                                                  convert_ucode=off,
                                                  maxdirsize=18350,
                                                  schedsnapname=ordinal,
                                                  fs_size_fixed=off,
                                                  compression=off, guarantee=file,
                                                  svo_enable=off, svo_checksum=off,
                                                  svo_allow_rman=off,
                                                  svo_reject_errors=off,
                                                  no_i2p=off,
                                                  fractional_reserve=100,
                                                  extent=off,
                                                  try_first=volume_grow,
                                                  read_realloc=off,
                                                  snapshot_clone_dependency=off
                Containing aggregate: 'aggr0'

                Plex /aggr0/plex0: online, normal, active
                    RAID group /aggr0/plex0/rg0: normal
                    RAID group /aggr0/plex0/rg1: normal

        Snapshot autodelete settings for groups:
                                        state=off
                                        commitment=try
                                        trigger=volume
                                        target_free_space=20%
                                        delete_order=oldest_first
                                        defer_delete=user_created
                                        prefix=(not specified)
                                        destroy_list=none
        Volume autosize settings:
                                        state=off

So, according to this data:

If the space guarantee of the volume is set to "file", then Data ONTAP will always ensure that rewrites to files inside the volume succeed. Also, since the autosize option is set to "off", the volume cannot grow. So how come there is more data than the volume can hold?

12236 - 10547 = 1689 GB
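
Restating the figures as a rough Python sketch (just my own arithmetic from the df and vol size output above):

vol_size_gb = 10547      # from vol size
fs_used_gb = 8088        # from df
snap_used_gb = 4148      # from df
snap_reserve_gb = 2109   # from df

print(fs_used_gb + snap_used_gb - vol_size_gb)   # 1689 GB held beyond the volume size
print(snap_used_gb - snap_reserve_gb)            # 2039 GB of snapshots spilled past the reserve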

Thank you,

Miguel
