ONTAP Discussions

using or reserving 99% of space and 0% of inodes, using 99% of reserve

javierb
5,703 Views

Hello all

I have a customer who has come across this alert message after resizing a volume:

/vol/MKDATA is full (using or reserving 99% of space and 0% of inodes, using 99% of reserve)

Apparently he has no problem operating, and the behaviour of that volume and of the overall controller has not changed.

After digging through NOW, dlists, Majordomo and so on, I have found nothing but a similar question posted with no answer.

Any help, tip, hint or advice is welcome.

Javier Barea

3 REPLIES

radek_kubka

Hi Javier,

Well, the first thing on my mind is: if the message says volume is full, maybe it is just full?

Other than that:

- are there any LUNs in this volume?

- what's the fractional_reserve set to? (vol status -v)

- what's the output of the df -r command?

Regards,

Radek

javierb

Radek

Well, as you said, the volume is almost full.

FASADM02> vol status MKDATA -v
         Volume State           Status            Options
         MKDATA online          raid_dp, flex     nosnap=off, nosnapdir=off,
                                                  ...
                                                  no_i2p=off,
                                                  fractional_reserve=0,
                                                  ...
                Containing aggregate: 'aggr0'

                Plex /aggr0/plex0: online, normal, active
                    RAID group /aggr0/plex0/rg0: normal

        Snapshot autodelete settings for MKDATA:
                                        state=on
                                        commitment=disrupt
                                        trigger=volume
                                        target_free_space=6%
                                        delete_order=oldest_first
                                        defer_delete=user_created
                                        prefix=(not specified)
                                        destroy_list=none
        Volume autosize settings:
                                        state=off

FASADM02> lun show -v /vol/MKDATA/q_MKDATA/MKDATA.lun
        /vol/MKDATA/q_MKDATA/MKDATA.lun  285.0g (306047877120)  (r/w, online, mapped)
                Comment: "Lun para MKDATA"
                Serial#: P3TdOoULbtXt
                Share: none
                Space Reservation: enabled
                Multiprotocol Type: solaris
                Maps: solaris1=0

FASADM02> df -r MKDATA
Filesystem              kbytes       used      avail   reserved  Mounted on
/vol/MKDATA/         304160444  299632756    4527688          0  /vol/MKDATA/
/vol/MKDATA/.snapshot    3072324     328196    2744128          0  /vol/MKDATA/.snapshot

There is a snapshot reserve of 3% for this volume, although fractional_reserve is 0%, which goes quite a bit against the best practices for a volume holding a LUN.

So I infer that the volume is almost full and that is the reason for the alert message: 299632756 KB used out of 304160444 KB is about 98.5%, which matches the 99% reported in the alert.

Regards

radek_kubka

OK, a few things:

Setting fractional reserve to 0% is actually the best practice these days. The caveat is that a volume containing LUNs should be set to autogrow and, ideally, its snapshots to autodelete. See Chris' blog for a thorough explanation:

http://communities.netapp.com/groups/chris-kranz-hardware-pro/blog/2009/03/05/fractional-reservation--lun-overwrite
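For reference, both caveat settings can be turned on from the 7-mode CLI. A sketch, where the 400g maximum and 20g increment are just illustrative values, not sizing recommendations:

FASADM02> vol autosize MKDATA -m 400g -i 20g on
FASADM02> snap autodelete MKDATA on

(From your vol status output, snapshot autodelete is already state=on for this volume; it is autosize that is off.)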

The volume you are dealing with is simply too small for the LUN inside it. You have a number of options:

- IF there is less data in the LUN than its nominal size, change the LUN space reservation to disabled - only blocks actually used within the LUN will then count as used within the volume

- grow the volume (provided there is free space in the containing aggregate)

- grow the volume and change its space guarantee to none
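As a rough sketch, each option maps to a 7-mode command, using the volume and LUN names from your output (the +20g growth amount is only an example):

FASADM02> lun set reservation /vol/MKDATA/q_MKDATA/MKDATA.lun disable

FASADM02> vol size MKDATA +20g

FASADM02> vol size MKDATA +20g
FASADM02> vol options MKDATA guarantee none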

I'd personally keep everything thinly provisioned (i.e. both volume & LUN), set the volume to autosize, and then focus on closely watching how much free space is left in the aggregate.
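To keep an eye on the containing aggregate afterwards, something along these lines shows its used and available space in gigabytes:

FASADM02> df -Ag aggr0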

Regards,
Radek
