
Help interpreting available space

fajarpri2

Hi all,

Can someone please help me interpret the available space?

n3300> aggr show_space -h
Aggregate 'aggr0'

    Total space    WAFL reserve    Snap reserve    Usable space       BSR NVLOG
         4138GB           413GB           186GB          3538GB             0KB

Space allocated to volumes in the aggregate

Volume                          Allocated            Used       Guarantee
vol0                                873GB           202GB          volume
vol2                                252GB           106GB          volume

Aggregate                       Allocated            Used           Avail
Total space                        1126GB           309GB          2615GB
Snap reserve                        186GB            40MB           186GB
WAFL reserve                        413GB            42MB           413GB

--------------

n3300> lun show
    /vol/vol0/qtvms1a/lun0        60g (64424509440)   (r/w, online, mapped)
    /vol/vol0/qtvms4a/lun0       180g (193273528320)  (r/w, online, mapped)
    /vol/vol0/qtvms5a/lun0        60g (64424509440)   (r/w, online, mapped)

---------------

n3300> df -h
Filesystem               total       used      avail capacity  Mounted on
/vol/vol0/               671GB      495GB      176GB      74%  /vol/vol0/
/vol/vol0/.snapshot        0KB     4324KB        0KB     ---%  /vol/vol0/.snapshot
/vol/vol2/               250GB      249GB      486MB     100%  /vol/vol2/
/vol/vol2/.snapshot        0MB        0MB        0MB     ---%  /vol/vol2/.snapshot

---------------

1. aggr0 has 3.5TB of usable space. Correct?

2. How much space is available in vol0? I think it should be around 600GB, but why does the df -h command show only around 176GB?

Thank you.
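A rough reading of the first line of the aggr show_space output above, assuming standard 7-mode space accounting (numbers rounded as in the output):

    Usable space = Total space - WAFL reserve - Snap reserve
                 = 4138GB - 413GB - 186GB
                 ≈ 3538GB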


eric_barlier

Hi,

1. You have put LUNs in the same volume as the system files, vol0. It is NetApp best practice and industry standard NOT to mix vol0 with LUNs or any other way of serving data.

2. You could easily reclaim some space by setting the snap reserve to 0 (zero) on your aggregate and keeping no snapshots at the aggregate level. Unless you ever think you will overwrite your entire aggregate, there is no need for this space to be reserved. I'd recommend turning aggregate snapshots off and having no snap reserve, but it's your call.
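A minimal sketch of those steps on a 7-mode controller, assuming aggr0 (the snap delete -a syntax matches the snap usage text quoted later in this thread):

    snap sched -A aggr0 0 0 0      (stop scheduled aggregate snapshots)
    snap delete -A -a aggr0        (delete any existing aggregate snapshots)
    snap reserve -A aggr0 0        (give the 186GB reserve back to the aggregate)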

Regards,

Eric

aborzenkov

He has a volume of size 671G (as confirmed by vol status -b). How come aggr show_space shows 873G allocated for this volume? What consumes the extra 202G? How is it related to either the system volume or the aggregate snap reserve?
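One plausible reading, using the guarantee=volume and fractional_reserve=100 settings shown later in the thread: the aggregate allocates the full volume size plus an overwrite reserve equal to 100% of the LUN data already written, so

    Allocated = volume size + overwrite reserve
              = 671GB + 202GB (100% of the 202GB shown as Used)
              = 873GB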

eric_barlier

My posting was pretty clear; I was providing a fix rather than trying to guess without info from the controller.

I'll explain if you want: if you have no more disk space for your volumes in your aggregate, you can turn the aggregate snap reserve off and get more space for your volumes.

I reckon you know this though.

Eric

fajarpri2

Hi, it's me again. I think this is still related to this question.

Today, when I tried to create an igroup for a new LUN on vol3, it errored:

n3300> igroup create -i -t vmware ig_sgvms12 iqn.1998-01.com.vmware:sgvms12
Mon Nov 23 10:01:24 GMT [lun.igroup.tmpFileWriteFailed:error]: Failed to write to the temporary igroup metafile (Initiator group change not stored on disk; unable to write new file).
igroup create: Initiator group change not stored on disk; unable to write new file

Mon Nov 23 10:50:00 GMT [wafl.vol.full:notice]: file system on volume vol0 is full

I've been reading the NetApp forums and found two possible solutions:

1. Set fractional reserve to zero. But that's risky, because the LUN will go offline if space runs out.

Still, if I keep the total LUN size lower than the volume size, I should be safe, right?

2. Turn snapshots off. But I have already turned off snapshots on aggr0, haven't I?

Is there any workaround for this situation?

Is it by increasing the size of vol0?

Thank you.

Here is the related data:

n3300> aggr status
           Aggr State      Status            Options
          aggr1 online     raid4, aggr      
          aggr0 online     raid4, aggr       root, nosnap=on

n3300> df -hr
Filesystem               total       used      avail   reserved  Mounted on
/vol/vol0/               671GB      671GB        0MB      242GB  /vol/vol0/
/vol/vol0/.snapshot        0KB     4432KB        0KB        0KB  /vol/vol0/.snapshot
/vol/vol1/              1800GB     1800GB        0MB        0MB  /vol/vol1/
/vol/vol1/.snapshot        0MB        0MB        0MB        0MB  /vol/vol1/.snapshot
/vol/vol2/               250GB      249GB      471MB        0MB  /vol/vol2/
/vol/vol2/.snapshot        0MB        0MB        0MB        0MB  /vol/vol2/.snapshot
/vol/vol3/               500GB      499GB       17MB        0MB  /vol/vol3/
/vol/vol3/.snapshot        0MB        0MB        0MB        0MB  /vol/vol3/.snapshot

n3300> df -gA
Aggregate                total       used      avail capacity 
aggr0                   3538GB      922GB     2615GB      26% 
aggr0/.snapshot          186GB        0GB      186GB       0% 
aggr1                   3538GB     2302GB     1236GB      65% 
aggr1/.snapshot          186GB       10GB      175GB       6%

n3300> aggr show_space -h
Aggregate 'aggr0'

    Total space    WAFL reserve    Snap reserve    Usable space       BSR NVLOG
         4138GB           413GB           186GB          3538GB             0KB

Space allocated to volumes in the aggregate

Volume                          Allocated            Used       Guarantee
vol0                                922GB           250GB          volume
vol2                                252GB           107GB          volume

Aggregate                       Allocated            Used           Avail
Total space                        1174GB           358GB          2615GB
Snap reserve                        186GB            40MB           186GB
WAFL reserve                        413GB            56MB           413GB


Aggregate 'aggr1'

    Total space    WAFL reserve    Snap reserve    Usable space       BSR NVLOG
         4138GB           413GB           186GB          3538GB             0KB

Space allocated to volumes in the aggregate

Volume                          Allocated            Used       Guarantee
vol1                               1821GB          1519GB          volume
vol3                                500GB           712KB          volume

Aggregate                       Allocated            Used           Avail
Total space                        2321GB          1519GB          1236GB
Snap reserve                        186GB            10GB           175GB
WAFL reserve                        413GB           206MB           413GB

n3300> vol status vol0 -v
         Volume State      Status            Options                     
           vol0 online     raid4, flex       root, diskroot, nosnap=on,  
                                             nosnapdir=off, minra=off,
                                             no_atime_update=off, nvfail=off,
                                             ignore_inconsistent=off,
                                             snapmirrored=off,
                                             create_ucode=off,
                                             convert_ucode=off,
                                             maxdirsize=9175,
                                             schedsnapname=ordinal,
                                             fs_size_fixed=off,
                                             guarantee=volume, svo_enable=off,
                                             svo_checksum=off,
                                             svo_allow_rman=off,
                                             svo_reject_errors=off,
                                             no_i2p=off,
                                             fractional_reserve=100,
                                             extent=off,
                                             try_first=volume_grow
        Containing aggregate: 'aggr0'

                Plex /aggr0/plex0: online, normal, active
                    RAID group /aggr0/plex0/rg0: normal

eric_barlier

Hi,

From what I am seeing, your vol0 is full, and it's in a different aggregate than your vol3.

vol0 is typically the system volume with the operating system in it; it should never get full.

Are you using it to share data?

If I were you, I would map C$ on the controller and free up space.

Go into /etc/crash, look for old core dumps, and delete them if you don't need them.

What does

cifs shares

exportfs

give? Can you post the output here?

Eric

fajarpri2

Hi Eric,

This is the data:

n3300> cifs shares
Name         Mount Point                       Description
----         -----------                       -----------
ETC$         /etc                              Remote Administration
            ** priv access only **
HOME         /vol/vol0/home                    Default Share
            everyone / Full Control
C$           /                                 Remote Administration
            ** priv access only **

exportfs returns nothing; I haven't used NFS.

I think I have a CIFS license. How do I set it up so that I can access it?

I use vol0 for LUNs too, which people say is a mistake. I know. But the total LUN size on vol0 is still below the volume size, so it should be OK, right?
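For what it's worth, a sketch of browsing the root volume from a Windows host via the C$ share listed above, assuming the filer's administrative credentials (the drive letter is illustrative):

    C:\> net use Z: \\n3300\C$ /user:administrator
    C:\> dir Z:\etc\crash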

eric_barlier

Hi again,

Look at this:

n3300> df -gA
Aggregate                total       used      avail capacity 
aggr0                   3538GB      922GB     2615GB      26% 
aggr0/.snapshot          186GB        0GB      186GB       0%

You have reserved 186GB for aggregate snapshots but none of it is used. So you could free up 186GB of space in the aggregate by setting the aggregate snap reserve to 0%. Please post here:

snap sched -A vol0

snap reserve -A vol0

Furthermore, you seem to have enough space to increase the volume size:

vol size vol0 750g    > choose your size.

Long term, I recommend moving the LUNs out of vol0. You can now see the impact of ignoring best practice.

Eric

fajarpri2

Thanks Eric for guiding me.

n3300> snap sched -A vol0
Aggregate vol0 does not exist or is not online.
n3300> snap reserve -A vol0
usage:
snap list [-A | -V] [-n] [-b] [-l] [[-q] [<vol-name>] | -o [<qtree-path>]]
snap create [-A | -V] <vol-name> <snapshot-name>
snap delete [-A | -V] <vol-name> <snapshot-name> |
snap delete [-A | -V] -a [-f] [-q] <vol-name>
snap delta [-A | -V] [<vol-name> [<snapshot-name>] [<snapshot-name>]]
snap rename [-A | -V] <vol-name> <old-snapshot-name> <new-snapshot-name>
snap sched [-A | -V] [<vol-name> [weeks [days [hours[@<list>]]]]]
snap reclaimable <vol-name> snapshot-name ...
snap reserve [-A | -V] [<vol-name> [percent]]
snap restore [-A | -V] [-f] [-t vol | file] [-s <snapshot-name>] [-r <restore-as-path>] <vol-name> | <restore-from-path>
snap autodelete <vol-name> [on | off | show | reset | help] |
snap autodelete <vol-name> <option> <value>...

So, to reclaim that 186GB, is this what I should do?

snap reserve vol0 0

eric_barlier (accepted solution)

Sorry, my mistake; it was a typo:

snap sched -A aggr0

snap reserve -A aggr0

If you decide to claim back the 186GB, you can run:

df -Ag aggr0

snap reserve -A aggr0 0

df -Ag aggr0                     > see that free space increased.

To fix your issue, you can increase the vol0 size. > vol size vol0 <new_size>

Eric
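For reference, vol size in 7-mode also accepts relative sizes, so a sketch of growing vol0 (the amount is illustrative):

    vol size vol0 +100g        (grow vol0 by 100GB)
    df -h /vol/vol0            (confirm the new size and free space)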


fajarpri2

Thank you very much, Eric, for the workaround. I really appreciate your very fast help.

GBU.

fajarpri2

Before setting snapshot reserve to zero:

n3300> df -Ag aggr0
Aggregate                total       used      avail capacity 
aggr0                   3538GB      922GB     2615GB      26% 
aggr0/.snapshot          186GB        0GB      186GB       0%

Afterwards:

n3300> df -Ag aggr0          
Aggregate                total       used      avail capacity 
aggr0                   3724GB      922GB     2801GB      25% 
aggr0/.snapshot            0GB        0GB        0GB     ---% 
n3300> Mon Nov 23 12:42:02 GMT [wafl.snap.autoDelete:info]: Deleting snapshot 'hourly.0' in aggregate 'aggr0' to recover storage

But I still cannot create the igroup:

n3300> igroup create -i -t vmware ig_sgvms12 iqn.1998-01.com.vmware:sgvms12
Mon Nov 23 12:43:57 GMT [lun.igroup.tmpFileWriteFailed:error]: Failed to write to the temporary igroup metafile (Initiator group change not stored on disk; unable to write new file).
igroup create: Initiator group change not stored on disk; unable to write new file

I guess I must increase the vol0 size? Why can't I use the 186GB that was just freed?

eric_barlier

"I guess I must increase the vol0 size? "

yes of course. you need to decide how big your vol0 must be. and if your LUN is by chance used by VMWARE you

can move it easily without downtime.

Eric

fajarpri2

Before I try to increase the vol0 size, I don't understand this... the total LUN size on vol0 is still below the vol0 size. Why does the write still fail? Is it because fractional reserve is still active?

Yes, the LUNs are used by VMware. Can you please point me to any URL/docs on how to move them?

n3300> lun show
    /vol/vol0/qtvms1a/lun0        60g (64424509440)   (r/w, online, mapped)
    /vol/vol0/qtvms3a/lun0        90g (96636764160)   (r/w, online, mapped)
    /vol/vol0/qtvms4a/lun0       180g (193273528320)  (r/w, online, mapped)
    /vol/vol0/qtvms5a/lun0       120g (128849018880)  (r/w, online, mapped)
    /vol/vol1/qtatlantic/lun0  500.1g (536952700928)  (r/w, online, mapped)
    /vol/vol1/qtatlantic/lun1  500.1g (536952700928)  (r/w, online, mapped)
    /vol/vol1/qtatlantic/lun2  500.1g (536952700928)  (r/w, online, mapped)
    /vol/vol1/qtatlantic/lun3  296.0g (317875814400)  (r/w, online, mapped)
    /vol/vol2/qtsgcvs0/lun0    249.0g (267368005632)  (r/w, online, mapped)
    /vol/vol3/qtsgvms12/lun0   499.0g (535799267328)  (r/w, online)

n3300> aggr show_space -h
Aggregate 'aggr0'

    Total space    WAFL reserve    Snap reserve    Usable space       BSR NVLOG
         4138GB           413GB             0KB          3724GB             0KB

Space allocated to volumes in the aggregate

Volume                          Allocated            Used       Guarantee
vol0                                922GB           251GB          volume
vol2                                252GB           107GB          volume

Aggregate                       Allocated            Used           Avail
Total space                        1175GB           359GB          2801GB
Snap reserve                          0KB             0KB             0KB
WAFL reserve                        413GB           187MB           413GB


Aggregate 'aggr1'

    Total space    WAFL reserve    Snap reserve    Usable space       BSR NVLOG
         4138GB           413GB           186GB          3538GB             0KB

Space allocated to volumes in the aggregate

Volume                          Allocated            Used       Guarantee
vol1                               1821GB          1519GB          volume
vol3                                500GB           712KB          volume

Aggregate                       Allocated            Used           Avail
Total space                        2321GB          1519GB          1236GB
Snap reserve                        186GB            11GB           174GB
WAFL reserve                        413GB           206MB           413GB

n3300> vol status -v
         Volume State      Status            Options                     
           vol0 online     raid4, flex       root, diskroot, nosnap=on,  
                                             nosnapdir=off, minra=off,
                                             no_atime_update=off, nvfail=off,
                                             ignore_inconsistent=off,
                                             snapmirrored=off,
                                             create_ucode=off,
                                             convert_ucode=off,
                                             maxdirsize=9175,
                                             schedsnapname=ordinal,
                                             fs_size_fixed=off,
                                             guarantee=volume, svo_enable=off,
                                             svo_checksum=off,
                                             svo_allow_rman=off,
                                             svo_reject_errors=off,
                                             no_i2p=off,
                                             fractional_reserve=100,
                                             extent=off,
                                             try_first=volume_grow
        Containing aggregate: 'aggr0'

                Plex /aggr0/plex0: online, normal, active
                    RAID group /aggr0/plex0/rg0: normal

n3300> aggr status aggr0 -v
           Aggr State      Status            Options
          aggr0 online     raid4, aggr       root, diskroot, nosnap=on,
                                             raidtype=raid4, raidsize=7,
                                             ignore_inconsistent=off,
                                             snapmirrored=off,
                                             resyncsnaptime=60,
                                             fs_size_fixed=off,
                                             snapshot_autodelete=on,
                                             lost_write_protect=on

        Volumes: vol0, vol2

                Plex /aggr0/plex0: online, normal, active
                    RAID group /aggr0/plex0/rg0: normal

aborzenkov

fajarpri2 wrote:

Before I try to increase the vol0 size, I don't understand this... the total LUN size on vol0 is still below the vol0 size.

Please show df -r output.

fajarpri2

Hi, after almost one month of running fine, vol0 is now full again...

This is the output of df -r... what is that reserved column?

n3300> df -r
Filesystem              kbytes       used      avail   reserved  Mounted on
/vol/vol0/           756670496  756670496          0  312219028  /vol/vol0/
/vol/vol0/.snapshot          0       4556          0          0  /vol/vol0/.snapshot
/vol/vol1/          1887436800 1887436800          0          0  /vol/vol1/
Fri Dec 11 21:33:50 GMT [wafl.vol.full:notice]: file system on volume vol0 is full
/vol/vol1/.snapshot          0          0          0          0  /vol/vol1/.snapshot
/vol/vol2/           262144000  261670372     473628          0  /vol/vol2/
/vol/vol2/.snapshot          0          0          0          0  /vol/vol2/.snapshot
/vol/vol3/           524288000  524288000          0          0  /vol/vol3/
/vol/vol3/.snapshot          0          0          0          0  /vol/vol3/.snapshot
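A hedged reading of that reserved column, given fractional_reserve=100 on vol0: it looks like the overwrite reserve held for the space-reserved LUNs, roughly matching the LUN data written so far. The rough arithmetic:

    LUN reservations in vol0:  60g + 90g + 180g + 120g = 450GB
    Overwrite reserve:         ~298GB (the 312219028 KB reserved column)
    Sum:                       ~748GB, which exceeds the ~722GB volume size

so the volume can report full even though the LUN sizes alone are below the volume size.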

fajarpri2

Also, why is vol0 full again? What is causing it? Is it some log files that I can delete?

I have a CIFS license; how do I set it up so that I can browse the NAS and delete those growing files (if any)?

Thank you.

fajarpri2

I found this: http://filers.blogspot.com/2006/09/what-is-space-reservation.html

But I don't quite understand it.

Is it safe to lower fractional_reserve if I don't use snapshots at all?

I'm worried, because I still don't know what is causing vol0 to be full. I think fractional_reserve protects the LUNs in vol0 from going offline.
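For reference, fractional reserve is a per-volume option in 7-mode, so a sketch, assuming you are certain the volume will never hold snapshots (try it on a non-critical volume first):

    vol options vol0 fractional_reserve 0
    df -r /vol/vol0            (the reserved column should drop)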

fajarpri2

Some more info:

n3300> snap list
Volume vol0
working...

  %/used       %/total  date          name
----------  ----------  ------------  --------
  0% ( 0%)    0% ( 0%)  Sep 27 16:00  hourly.0      
  0% ( 0%)    0% ( 0%)  Sep 27 12:00  hourly.1      
  0% ( 0%)    0% ( 0%)  Sep 27 08:00  hourly.2      
  0% ( 0%)    0% ( 0%)  Sep 27 00:00  nightly.0     
  0% ( 0%)    0% ( 0%)  Sep 26 20:00  hourly.3      
  0% ( 0%)    0% ( 0%)  Sep 26 16:00  hourly.4     

Volume vol1
working...

No snapshots exist.

Volume vol2
working...

No snapshots exist.

Volume vol3
working...

No snapshots exist.

fajarpri2

I've just increased the vol0 size by 1GB, and it got full again within 2 minutes.

What is happening?
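A possible explanation, continuing the fractional-reserve arithmetic above (hedged, as the exact accounting depends on the ONTAP version): with fractional_reserve=100, roughly

    1GB of fresh writes into a space-reserved LUN -> ~1GB of additional overwrite reserve

so a 1GB volume increase can be consumed within minutes while the VMware LUNs are actively writing.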

fajarpri2

Anybody? Please?
