Network and Storage Protocols

Help interpreting available space

fajarpri2
13,600 Views

Hi all,

Can someone please help me interpret the available space?

n3300> aggr show_space -h
Aggregate 'aggr0'

    Total space    WAFL reserve    Snap reserve    Usable space       BSR NVLOG
         4138GB           413GB           186GB          3538GB             0KB

Space allocated to volumes in the aggregate

Volume                          Allocated            Used       Guarantee
vol0                                873GB           202GB          volume
vol2                                252GB           106GB          volume

Aggregate                       Allocated            Used           Avail
Total space                        1126GB           309GB          2615GB
Snap reserve                        186GB            40MB           186GB
WAFL reserve                        413GB            42MB           413GB

--------------

n3300> lun show
    /vol/vol0/qtvms1a/lun0        60g (64424509440)   (r/w, online, mapped)
    /vol/vol0/qtvms4a/lun0       180g (193273528320)  (r/w, online, mapped)
    /vol/vol0/qtvms5a/lun0        60g (64424509440)   (r/w, online, mapped)

---------------

n3300> df -h
Filesystem               total       used      avail capacity  Mounted on
/vol/vol0/               671GB      495GB      176GB      74%  /vol/vol0/
/vol/vol0/.snapshot        0KB     4324KB        0KB     ---%  /vol/vol0/.snapshot
/vol/vol2/               250GB      249GB      486MB     100%  /vol/vol2/
/vol/vol2/.snapshot        0MB        0MB        0MB     ---%  /vol/vol2/.snapshot

---------------

1. Aggr0 has a total usable space of about 3.5TB, correct?

2. How much space is available in vol0? I think it should be around 600GB, so why does the df -h command show only around 176GB?

Thank you.

1 ACCEPTED SOLUTION

eric_barlier
12,192 Views

Sorry, my mistake; it was a typo:

snap sched -A aggr0

snap reserve -A aggr0

If you decide to claim back the 186GB, you can run:

df -Ag aggr0

snap reserve -A aggr0 0

df -Ag aggr0                     > see that free space increased.

To fix your issue you can increase the vol0 size: vol size vol0 new_size
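
As a sanity check, the 3538GB usable figure in the first aggr show_space output follows from Data ONTAP's default reserves (a 10% WAFL reserve and a 5% aggregate snap reserve). A minimal sketch of the arithmetic, assuming those default percentages:

```python
# Sketch of how ONTAP 7-mode derives usable aggregate space from the raw total,
# assuming the default 10% WAFL reserve and 5% aggregate snap reserve.

def aggr_usable(total_gb, wafl_pct=10, snap_pct=5):
    """Return (wafl_reserve, snap_reserve, usable) in whole GB."""
    wafl = total_gb * wafl_pct // 100            # 10% of raw space for WAFL
    snap = (total_gb - wafl) * snap_pct // 100   # 5% of what remains
    return wafl, snap, total_gb - wafl - snap

wafl, snap, usable = aggr_usable(4138)
print(wafl, snap, usable)   # 413 186 3539 (the filer reports 3538GB; KB-level rounding)
```

With the aggregate snap reserve set to 0, that 186GB goes back into the usable pool.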

Eric


34 REPLIES

eric_barlier
5,989 Views

"I guess I must increase the vol0 size? "

Yes, of course. You need to decide how big your vol0 must be. And if your LUN happens to be used by VMware, you can move it easily without downtime.

Eric

fajarpri2
5,989 Views

Before I try to increase the vol0 size, there's something I don't understand: the total LUN size on vol0 is still below the vol0 size. Why do writes still fail? Is it because Fractional Reserve is still active?

Yes, the LUN is used by VMware. Can you please point me to any URL/docs on how to move it?

n3300> lun show
    /vol/vol0/qtvms1a/lun0        60g (64424509440)   (r/w, online, mapped)
    /vol/vol0/qtvms3a/lun0        90g (96636764160)   (r/w, online, mapped)
    /vol/vol0/qtvms4a/lun0       180g (193273528320)  (r/w, online, mapped)
    /vol/vol0/qtvms5a/lun0       120g (128849018880)  (r/w, online, mapped)
    /vol/vol1/qtatlantic/lun0  500.1g (536952700928)  (r/w, online, mapped)
    /vol/vol1/qtatlantic/lun1  500.1g (536952700928)  (r/w, online, mapped)
    /vol/vol1/qtatlantic/lun2  500.1g (536952700928)  (r/w, online, mapped)
    /vol/vol1/qtatlantic/lun3  296.0g (317875814400)  (r/w, online, mapped)
    /vol/vol2/qtsgcvs0/lun0    249.0g (267368005632)  (r/w, online, mapped)
    /vol/vol3/qtsgvms12/lun0   499.0g (535799267328)  (r/w, online)

n3300> aggr show_space -h
Aggregate 'aggr0'

    Total space    WAFL reserve    Snap reserve    Usable space       BSR NVLOG
         4138GB           413GB             0KB          3724GB             0KB

Space allocated to volumes in the aggregate

Volume                          Allocated            Used       Guarantee
vol0                                922GB           251GB          volume
vol2                                252GB           107GB          volume

Aggregate                       Allocated            Used           Avail
Total space                        1175GB           359GB          2801GB
Snap reserve                          0KB             0KB             0KB
WAFL reserve                        413GB           187MB           413GB


Aggregate 'aggr1'

    Total space    WAFL reserve    Snap reserve    Usable space       BSR NVLOG
         4138GB           413GB           186GB          3538GB             0KB

Space allocated to volumes in the aggregate

Volume                          Allocated            Used       Guarantee
vol1                               1821GB          1519GB          volume
vol3                                500GB           712KB          volume

Aggregate                       Allocated            Used           Avail
Total space                        2321GB          1519GB          1236GB
Snap reserve                        186GB            11GB           174GB
WAFL reserve                        413GB           206MB           413GB

n3300> vol status -v
         Volume State      Status            Options                     
           vol0 online     raid4, flex       root, diskroot, nosnap=on,  
                                             nosnapdir=off, minra=off,
                                             no_atime_update=off, nvfail=off,
                                             ignore_inconsistent=off,
                                             snapmirrored=off,
                                             create_ucode=off,
                                             convert_ucode=off,
                                             maxdirsize=9175,
                                             schedsnapname=ordinal,
                                             fs_size_fixed=off,
                                             guarantee=volume, svo_enable=off,
                                             svo_checksum=off,
                                             svo_allow_rman=off,
                                             svo_reject_errors=off,
                                             no_i2p=off,
                                             fractional_reserve=100,
                                             extent=off,
                                             try_first=volume_grow
        Containing aggregate: 'aggr0'

                Plex /aggr0/plex0: online, normal, active
                    RAID group /aggr0/plex0/rg0: normal

n3300> aggr status aggr0 -v
           Aggr State      Status            Options
          aggr0 online     raid4, aggr       root, diskroot, nosnap=on,
                                             raidtype=raid4, raidsize=7,
                                             ignore_inconsistent=off,
                                             snapmirrored=off,
                                             resyncsnaptime=60,
                                             fs_size_fixed=off,
                                             snapshot_autodelete=on,
                                             lost_write_protect=on

        Volumes: vol0, vol2

                Plex /aggr0/plex0: online, normal, active
                    RAID group /aggr0/plex0/rg0: normal

eric_barlier
5,716 Views

"Before I try to increase the vol0 size, there's something I don't understand: the total LUN size on vol0 is still *below* the vol0 size. Why do writes still fail? Is it because *Fractional Reserve* is still active?"

Yes, and especially if you have snapshots as well.
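
To illustrate with rough numbers: with fractional_reserve=100 and snapshots present, the volume must hold the full reservation for every space-reserved LUN plus an overwrite reserve equal to the data already written inside them. A back-of-the-envelope sketch (an illustration only, not ONTAP's exact accounting; the 300GB written figure is an assumption):

```python
# Rough model of space demand in a volume holding space-reserved LUNs,
# assuming fractional_reserve protects overwrites while snapshots exist.
# This is an illustration, not ONTAP's exact accounting.

def space_needed_gb(lun_sizes_gb, written_gb, fractional_reserve=100):
    """Space the volume must provide: full reservation for every LUN,
    plus fractional_reserve% of the written blocks for overwrite protection."""
    reservation = sum(lun_sizes_gb)                      # space-reserved LUNs
    overwrite = written_gb * fractional_reserve // 100   # fractional reserve
    return reservation + overwrite

# vol0's LUNs: 60 + 90 + 180 + 120 = 450GB
demand = space_needed_gb([60, 90, 180, 120], written_gb=300)
print(demand)  # 750 -> exceeds the ~671GB vol0 even though the LUNs total only 450GB
```

So the volume can fill up even while the nominal LUN sizes stay below the volume size.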

"Yes, the LUN is used by VMware. Can you please point me to any URL/docs on how to move it?"

This is a VMware thing I have no documentation on, but I believe the procedure is simple:

1. Create a new volume and LUN to move into.

2. Storage vMotion into the new LUN.

You can do this without downtime or impact; it's done here all the time.

Eric

fajarpri2
5,716 Views

Thank you, Eric. My issue is resolved for now by increasing the vol0 size.

n3300> df -A aggr0
Aggregate               kbytes       used      avail capacity 
aggr0               3905442432  967482016 2937960416      25% 
aggr0/.snapshot              0          0          0     ---%

n3300> vol size vol0
vol size: Flexible volume 'vol0' has size 704241696k.

n3300> df vol0
Filesystem              kbytes       used      avail capacity  Mounted on
/vol/vol0/           704241696  704241696          0     100%  /vol/vol0/
/vol/vol0/.snapshot          0       4432          0     ---%  /vol/vol0/.snapshot

Increasing the size:

n3300> vol size vol0 +50g
vol size: Flexible volume 'vol0' size set to 756670496k.

The igroup command now works:

n3300> igroup create -i -t vmware ig_sgvms12 iqn.1998-01.com.vmware:sgvms12

n3300> df -A aggr0
Aggregate               kbytes       used      avail capacity 
aggr0               3905442432 1019910256 2885532176      26% 
aggr0/.snapshot              0          0          0     ---%

n3300> vol size vol0
vol size: Flexible volume 'vol0' has size 756670496k.
n3300> df vol0
Filesystem              kbytes       used      avail capacity  Mounted on
/vol/vol0/           756670496  728144768   28525728      96%  /vol/vol0/
/vol/vol0/.snapshot          0       4432          0     ---%  /vol/vol0/.snapshot
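
The +50g increment checks out exactly against the reported sizes, since ONTAP counts 1g as 1024 × 1024 KB:

```python
# Verify the vol size arithmetic above: 50g in ONTAP units is 50 * 1024 * 1024 KB.
KB_PER_GB = 1024 * 1024

old_kb = 704241696            # "Flexible volume 'vol0' has size 704241696k"
new_kb = old_kb + 50 * KB_PER_GB
print(new_kb)                 # 756670496, matching "size set to 756670496k"
```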

aborzenkov
5,989 Views

fajarpri2 wrote:

Before I try to increase the vol0 size, there's something I don't understand: the total LUN size on vol0 is still below the vol0 size.

Please show the df -r output.

fajarpri2
5,989 Views

Hi, after almost one month of running OK, vol0 is now full again...

This is the output of df -r... what is that reserved column?

n3300> df -r
Filesystem              kbytes       used      avail   reserved  Mounted on
/vol/vol0/           756670496  756670496          0  312219028  /vol/vol0/
/vol/vol0/.snapshot          0       4556          0          0  /vol/vol0/.snapshot
/vol/vol1/          1887436800 1887436800          0          0  /vol/vol1/
Fri Dec 11 21:33:50 GMT [wafl.vol.full:notice]: file system on volume vol0 is full
/vol/vol1/.snapshot          0          0          0          0  /vol/vol1/.snapshot
/vol/vol2/           262144000  261670372     473628          0  /vol/vol2/
/vol/vol2/.snapshot          0          0          0          0  /vol/vol2/.snapshot
/vol/vol3/           524288000  524288000          0          0  /vol/vol3/
/vol/vol3/.snapshot          0          0          0          0  /vol/vol3/.snapshot
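
For anyone else puzzling over this output: the reserved column is space held back for the space-reserved LUNs (plus the fractional reserve, once snapshots exist), and it is counted inside used. A sketch of that reading, using vol0's numbers above:

```python
# Interpret df -r for vol0: 'used' includes the 'reserved' column, so the
# space actually consumed by written data is used - reserved.
# (A sketch of the interpretation, not ONTAP's internal accounting.)
KB_PER_GB = 1024 * 1024

total_kb = 756670496
used_kb = 756670496
reserved_kb = 312219028

data_kb = used_kb - reserved_kb
print(data_kb // KB_PER_GB)   # ~423GB of real data; the rest is reservation
```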

fajarpri2
5,988 Views

Also, why is vol0 full again? What is causing it? Are there some log files that I can delete?

I have a CIFS license; how do I set it up so that I can browse the NAS and delete those growing files (if any)?

Thank you.

fajarpri2
5,988 Views

I found this http://filers.blogspot.com/2006/09/what-is-space-reservation.html

But I don't quite understand it.

Is it safe to lower fractional_reserve if I don't use snapshots at all?

I'm worried, because I still don't know what is causing vol0 to be full. I think fractional_reserve protects the LUNs in vol0 from going offline.

fajarpri2
5,989 Views

Some more info:

n3300> snap list
Volume vol0
working...

  %/used       %/total  date          name
----------  ----------  ------------  --------
  0% ( 0%)    0% ( 0%)  Sep 27 16:00  hourly.0      
  0% ( 0%)    0% ( 0%)  Sep 27 12:00  hourly.1      
  0% ( 0%)    0% ( 0%)  Sep 27 08:00  hourly.2      
  0% ( 0%)    0% ( 0%)  Sep 27 00:00  nightly.0     
  0% ( 0%)    0% ( 0%)  Sep 26 20:00  hourly.3      
  0% ( 0%)    0% ( 0%)  Sep 26 16:00  hourly.4     

Volume vol1
working...

No snapshots exist.

Volume vol2
working...

No snapshots exist.

Volume vol3
working...

No snapshots exist.

fajarpri2
5,733 Views

I've just increased the vol0 size by 1GB, and it got full again within 2 minutes.

What is happening?

fajarpri2
5,734 Views

Anybody? Please?

mkopenski
4,420 Views

It is safe to set fractional_reserve to 0 and the volume guarantee to none. With that set, if you turn on A-SIS (NetApp deduplication) you should see a substantial increase in space savings.
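
To put rough numbers on that: the overwrite reserve scales linearly with the fractional_reserve percentage, so dropping it from 100 to 0 releases the whole reserve. A sketch, assuming the reserve is fractional_reserve% of the written LUN data (the 312219028KB figure is borrowed from the earlier df -r output as an approximation):

```python
# Sketch: the overwrite reserve is fractional_reserve% of written LUN data,
# so fractional_reserve=0 releases it entirely. Illustrative numbers only.

def overwrite_reserve_kb(written_kb, fractional_reserve):
    """fractional_reserve% of written LUN data held back for overwrites."""
    return written_kb * fractional_reserve // 100

written = 312219028   # roughly the 'reserved' figure seen earlier in df -r
print(overwrite_reserve_kb(written, 100))  # 312219028 KB held back
print(overwrite_reserve_kb(written, 0))    # 0 -> space returned to the volume
```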

fajarpri2
4,418 Views

Thanks for replying. What is A-SIS? Also, why does vol0 keep getting full?

fajarpri2
4,420 Views

Hi guys, how are you?

OK, I know it's against best practice to put LUNs in vol0, and I have learned my lesson.

I have moved almost all LUNs out of vol0 and now it has quite a lot of free space.

df -rh

Filesystem               total       used      avail   reserved  Mounted on
/vol/vol0/               803GB      357GB      446GB      176GB  /vol/vol0/
/vol/vol0/.snapshot        0KB     4888KB        0KB        0KB  /vol/vol0/.snapshot

My question:

1. Can I safely reduce the size of vol0?

2. By how much can I reduce it?

Thank you.
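
On question 2, a conservative rule of thumb: the new size must stay above used (which already includes the 176GB reserve) plus whatever growth headroom you want. A hypothetical sketch using the df -rh figures above (headroom_gb is an arbitrary assumption, not a recommendation):

```python
# Sketch of a safe shrink target for vol0, using the df -rh figures above.
# 'used' (357GB) already includes the 176GB reserve; keep some headroom on top.
# Illustrative only: check snapshots and LUN growth before shrinking for real.

total_gb = 803
used_gb = 357       # includes the 176GB reserved
headroom_gb = 100   # arbitrary safety margin (an assumption; pick your own)

min_safe_size = used_gb + headroom_gb
max_reduction = total_gb - min_safe_size
print(min_safe_size, max_reduction)   # 457 346
```

In other words, with these numbers you could reduce vol0 by up to a few hundred GB while staying well clear of the space already committed.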
