ONTAP Discussions

volume capacity issue

ol15
10,008 Views

hello,

 

on a FAS2552 running 7-Mode 8.2.3P6:

 

i do not understand where the 87% capacity figure in the df output below comes from on my volume:

can you explain why my volume shows 87% capacity?


df -g /vol/vol_test/
Filesystem               total       used      avail capacity  Mounted on
/vol/vol_test/     8394GB     4862GB     1087GB      87%  /vol/vol_test/
/vol/vol_test/.snapshot        0GB        0GB        0GB       0%  /vol/vol_test/.snapshot

 vol status -S vol_test
Volume : vol_test

      Feature                                           Used      Used%
      --------------------------------      ----------------      -----
      User Data                                       4.74TB        58%
      Filesystem Metadata                             4.39GB         0%
      Inodes                                          24.0KB         0%
      Deduplication                                   12.0KB         0%
      Snapshot Spill                                  9.23MB         0%

      Total                                           4.74TB        58%

 

 

aggr show_space -g
aggregate 'aggr0'

    Total space    WAFL reserve    Snap reserve    Usable space       BSR NVLOG           A-SIS          Smtape
         7532GB           753GB             0GB          6778GB             0GB             2GB             0GB

Space allocated to volumes in the aggregate

Volume                          Allocated            Used       Guarantee
vol1                                  0GB             0GB            none
vol2                                  0GB             0GB            none
vol3                                  0GB             0GB            none
vol_test                           4962GB          4943GB            none
vol0                                718GB             5GB          volume


Aggregate                       Allocated            Used           Avail
Total space                        5688GB          4955GB          1087GB
Snap reserve                          0GB           209GB             0GB
WAFL reserve                        753GB            77GB           675GB

 

 

 vol status -v vol_test
         Volume State           Status                Options
vol_test online          raid_dp, flex         nosnap=off, nosnapdir=off, minra=off,
                                sis                   no_atime_update=on, nvfail=off,
                                64-bit                ignore_inconsistent=off, snapmirrored=off,
                                                      create_ucode=on, convert_ucode=on,
                                                      maxdirsize=167772, schedsnapname=create_time,
                                                      fs_size_fixed=off, guarantee=none,
                                                      svo_enable=off, svo_checksum=off,
                                                      svo_allow_rman=off, svo_reject_errors=off,
                                                      no_i2p=off, fractional_reserve=0, extent=off,
                                                      try_first=snap_delete, read_realloc=off,
                                                      snapshot_clone_dependency=off,
                                                      dlog_hole_reserve=off, nbu_archival_snap=off
                         Volume UUID: 871751a8-9a2d-4e9c-887c-ba5677e771d6
                Containing aggregate: 'aggr0'

                Plex /aggr0/plex0: online, normal, active
                    RAID group /aggr0/plex0/rg0: normal, block checksums

        Snapshot autodelete settings for vol_test:
                                        state=on
                                        commitment=try
                                        trigger=volume
                                        target_free_space=15%
                                        delete_order=oldest_first
                                        defer_delete=user_created
                                        prefix=(not specified)
                                        destroy_list=none
        Volume autosize settings:
                                mode=off
        Hybrid Cache:
                Eligibility=read-write

 

 

i have a lun on this volume:

 

/vol/vol_test/qt1/lun    6.6t (7256794406400) (r/w, online, mapped)
                Comment: " "
                Serial#:xxxxxxx
                Share: none
                Space Reservation: disabled
                Multiprotocol Type: windows_2008
                Maps: ig_test=4 ig_test=4
                Occupied Size:    5.3t (5873111318528)
                Creation Time: Tue Jan 12 12:32:57 CET 2016
                Cluster Shared Volume Information: 0x1
                Read-Only: disabled

 

 

thanks for your help

1 ACCEPTED SOLUTION

niels
9,969 Views

Hi ol15,

 

what exactly is your concern?

Is it that the volume shows 87% full although you only wrote 4862GB of data into an 8394GB volume, which should be about 58% rather than 87%?

 

Well, that's because the volume is thin provisioned (guarantee=none) and you allocated more space for the volume than the aggregate has physically available.

 

The volume shows 1087GB as available free space.

 

df -g /vol/vol_test/
Filesystem               total       used      avail capacity  Mounted on
/vol/vol_test/     8394GB     4862GB     1087GB      87%  /vol/vol_test/
/vol/vol_test/.snapshot        0GB        0GB        0GB       0%  /vol/vol_test/.snapshot

 

 

That's exactly what's left on your aggregate:

 

aggr show_space -g
aggregate 'aggr0'

    Total space    WAFL reserve    Snap reserve    Usable space       BSR NVLOG           A-SIS          Smtape
         7532GB           753GB             0GB          6778GB             0GB             2GB             0GB

Space allocated to volumes in the aggregate

Volume                          Allocated            Used       Guarantee
vol1                                  0GB             0GB            none
vol2                                  0GB             0GB            none
vol3                                  0GB             0GB            none
vol_test                           4962GB          4943GB            none
vol0                                718GB             5GB          volume


Aggregate                       Allocated            Used           Avail
Total space                        5688GB          4955GB          1087GB
Snap reserve                          0GB           209GB             0GB
WAFL reserve                        753GB            77GB           675GB

 

 

So the free space percentage that you see is directly correlated to the actual physical free space.

If you'd add more disks to the aggregate or reduce the actual logical volume size to a value less than the aggregate space, then the free space percentage will be accurate.
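The arithmetic can be sketched in a few lines of Python (numbers taken from the outputs above; treating df's "avail" column as capped by the aggregate's free space is my reading of 7-Mode's behaviour for thin-provisioned volumes, not an official formula):

```python
# All values in GB, taken from the df and aggr show_space output above.
vol_total = 8394   # volume size
vol_used  = 4862   # data written into the volume
aggr_free = 1087   # "Avail" for the aggregate

# For a thin-provisioned volume (guarantee=none), the volume cannot
# really offer more free space than the aggregate physically has left.
vol_free = vol_total - vol_used        # 3532 GB of logical free space
avail    = min(vol_free, aggr_free)    # 1087 GB actually available

# The capacity percentage is driven by what is NOT available,
# not by the amount of data written -- hence 87% instead of ~58%.
capacity_pct = round((vol_total - avail) / vol_total * 100)
print(avail, capacity_pct)             # 1087 87
```

So the 87% reflects total size minus available space, not the amount of data you wrote.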

 

But with thin provisioning the actual free space of the volume is not of much interest. The volume in this case is just a management entity. Deduplication and Snapshots happen on a per-volume basis, which is why you have the volume in the first place.

Instead it's more important to monitor the aggregate free space. And the aggregate capacity and free space calculations are accurate, which you can see from the aggr show_space -g command above.

 

 

You basically use:

vol0: 718GB (although only 5GB is used, that volume is and should be thick provisioned, meaning "guarantee=volume")

LUN Occupied Size:    5.3t (so at some point you have written at least 5.3TB of data to that LUN)

A-SIS savings : 2GB (deduplication saved you 2GB)

Snap reserve: 209GB (your SnapShots occupy this amount of space)

WAFL reserve: 753GB (that's a fixed 10% reserve)
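The WAFL reserve figure is easy to check (a Python sketch using the aggr show_space numbers; the 1GB difference from the displayed 6778GB usable space is just GB-level rounding in the output):

```python
# WAFL reserve is a fixed 10% of the aggregate's total space.
total_gb     = 7532                      # "Total space" from aggr show_space -g
wafl_reserve = round(total_gb * 0.10)    # 753 GB, matching the output above
usable       = total_gb - wafl_reserve   # 6779 GB (displayed as 6778GB after rounding)
print(wafl_reserve, usable)
```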

 

So from the 6778GB of usable capacity in your aggregate you use 4955GB and have 1087GB of free space left.

 

Hope that helps.

 

Kind regards, Niels

 

 

 

 

 

 


7 REPLIES

asulliva
9,995 Views

Is the free space going up over time?

 

Andrew

If this post resolved your issue, please help others by selecting ACCEPT AS SOLUTION or adding a KUDO.

ol15
9,975 Views

hello,

 

it was the first time we wrote to this LUN,

and it's a new filer.

 

regards

ol15
9,959 Views

hello niels,

 

thanks a lot for your clear explanation and the time you took to give it.

 

one last question: can i reduce the volume size by 500GB without data loss?

 

and that won't change the percentage shown by the df command, am i right?

niels
9,956 Views

Hi ol15,

 

you can reduce the size of the volume down to the actually used capacity without losing any data.

Just make sure the volume stays at least as large as the LUN you provisioned, which is 6.6TB, so -500GB is safe.
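As a quick sanity check of the -500GB shrink (Python; the byte count comes from the lun listing earlier in the thread):

```python
# The LUN must still fit in the volume after the shrink.
lun_bytes = 7256794406400            # "6.6t" from the lun listing
lun_gib   = lun_bytes / 1024**3      # ~6758 GiB
vol_total = 8394                     # current volume size in GB
new_size  = vol_total - 500          # proposed size after shrinking
print(new_size, new_size > lun_gib)  # 7894 True -> the shrink is safe
```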

 

The free-space percentage of the volume will change but the actual free-space will not.
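A small Python sketch illustrates this with the thread's numbers (again assuming df caps "avail" at the aggregate's free space for a thin-provisioned volume):

```python
aggr_free = 1087   # GB free in the aggregate -- unchanged by the shrink
vol_used  = 4862   # GB of data in the volume -- also unchanged

for vol_total in (8394, 7894):   # before and after shrinking by 500GB
    avail = min(vol_total - vol_used, aggr_free)
    pct   = round((vol_total - avail) / vol_total * 100)
    print(vol_total, avail, pct)
# avail stays 1087 in both cases; only the percentage moves (87 -> 86)
```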

 

As said earlier, you should not be concerned about the actual usage of the volume since you use thin provisioning. The only entity of interest is the aggregate. Once you run out of space there, your LUNs will go offline.

 

You could use our manageability software called "OnCommand Unified Manager" to help you monitor and alert on the actual free-space situation, growth rates, over-provisioning rates and so on, to handle the risk of over-provisioning and make the most of your actual storage.

 

Thin provisioning may become dangerous if you don't have proper monitoring and alerting for the actual free space in place, as well as a process for remediating any space issues (either by adding disks or deleting data). It would be the same as having a smoke detector without a fire extinguisher.

 

Kind regards, Niels

 

 

 

JGPSHNTAP
9,938 Views

It is showing like this because all the vols are thin provisioned.

ol15
9,870 Views

hello niels,

 

thank you for your reply and your explanation
