wafl_reclaim_threshold-*

Hi,

I am using FAS3170 and FAS6080 systems running ONTAP 8.0.1P3. I want to change the volume autogrow trigger threshold to 80% regardless of volume size, i.e. for tiny, small, medium, large, and extra-large volumes. Can anybody help me achieve this on the filer? Also, please let me know if there is any risk in making this change on the filer.

Thanks in advance.

Regards,

Binod

Re: wafl_reclaim_threshold-*

You can change it on the filer by logging in and entering priv set diag mode. First, verify your current thresholds before changing them. In most cases the defaults make sense, but it varies case by case with your environment.

Let me explain the risk before providing the command.

First of all, you need to understand the rate at which data is being written to your volumes. With a 1 TB volume and a 98% threshold, you have roughly 20 GB of headroom before autogrow triggers. If you change to a smaller percentage, autogrow fires earlier, so your volumes consume the aggregate's free space more quickly and you could run out of space.
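To make the arithmetic above concrete, here is a minimal sketch (my own illustration, not an ONTAP command) of how the trigger point and remaining headroom shift when you lower the threshold from 98% to 80%:

```python
def autogrow_trigger_gb(volume_gb, threshold_pct):
    """Used capacity (GB) at which volume autogrow would kick in."""
    return volume_gb * threshold_pct / 100.0

def headroom_gb(volume_gb, threshold_pct):
    """Free space (GB) still in the volume when the trigger fires."""
    return volume_gb - autogrow_trigger_gb(volume_gb, threshold_pct)

# A 98% trigger on a 1 TB (1000 GB) volume leaves ~20 GB of headroom;
# dropping to 80% makes the volume start growing 180 GB earlier.
print(autogrow_trigger_gb(1000, 98))  # 980.0
print(headroom_gb(1000, 98))          # 20.0
print(autogrow_trigger_gb(1000, 80))  # 800.0
```

The earlier the trigger, the sooner every autogrow-enabled volume starts pulling free space from the aggregate, which is why the overcommit check below matters.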

Also consider the overcommit on your aggregate. If you have already overcommitted, for example with thin-provisioned iSCSI LUNs, use caution.

Enough warnings... Here is how you change the settings.

To print the existing values, type:

priv set diag;print flag wafl_reclaim_threshold

To set new values, BASED UPON YOUR ENVIRONMENT (and only after you have done the ground work above):

priv set diag;set flag wafl_reclaim_threshold_t <value> (tiny volumes <20GB)

priv set diag;set flag wafl_reclaim_threshold_s <value> (small volumes >20GB & <100GB)

priv set diag;set flag wafl_reclaim_threshold_m <value> (medium volumes >100GB & <500GB)

priv set diag;set flag wafl_reclaim_threshold_l <value> (large volumes >500GB & <1TB)

priv set diag;set flag wafl_reclaim_threshold_xl <value> (extra-large volumes >1TB)
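If it helps to see the size buckets from the list above in one place, here is a small sketch that maps a volume size to the matching flag suffix (purely illustrative; it assumes GB units and treats 1 TB as 1000 GB):

```python
def reclaim_flag_for(volume_gb):
    """Pick the wafl_reclaim_threshold flag for a volume, per the buckets above."""
    if volume_gb < 20:
        return "wafl_reclaim_threshold_t"   # tiny: <20 GB
    elif volume_gb < 100:
        return "wafl_reclaim_threshold_s"   # small: 20-100 GB
    elif volume_gb < 500:
        return "wafl_reclaim_threshold_m"   # medium: 100-500 GB
    elif volume_gb < 1000:
        return "wafl_reclaim_threshold_l"   # large: 500 GB - 1 TB
    return "wafl_reclaim_threshold_xl"      # extra-large: >1 TB

print(reclaim_flag_for(50))    # wafl_reclaim_threshold_s
print(reclaim_flag_for(2000))  # wafl_reclaim_threshold_xl
```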

Re: wafl_reclaim_threshold-*

Hi,

Thanks for the reply with the detailed explanation. Now I am clear about the wafl_reclaim_threshold_* parameters. Could you please explain more about the aggregate overcommit parameter, with some examples, and its impact?

Regards,

Binod

Re: wafl_reclaim_threshold-*

Overcommit comes from the idea that file systems are typically under-used when allocated to an OS, and traditionally the storage had no visibility into whether the space had actually been consumed. Traditionally, once you allocated the space, it was gone from the back-end forever.

With thin provisioning, storage blocks are only allocated when the host writes data to them, instead of giving up a big chunk up front in the form of CIFS/NFS shares or iSCSI LUNs.

Another feature, dedupe, finds duplicate blocks in a volume and keeps a single reference. The best example is running 50 VMs with a Windows OS: since most of the OS files are identical, they all refer to a common set of blocks.

Both of these let you allocate more space than physically exists, but proper monitoring needs to be established: once the aggregate reaches a certain used percentage (maybe 70%), additional procurement or rebalancing will be needed.
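As a quick worked example of the two numbers to watch (my own sketch, not a filer command; the 70% alert level is the example figure from above):

```python
def overcommit_ratio(aggregate_gb, provisioned_gb):
    """Provisioned-to-physical ratio; > 1.0 means the aggregate is overcommitted."""
    return provisioned_gb / aggregate_gb

def needs_procurement(aggregate_gb, used_gb, alert_pct=70):
    """True once aggregate usage crosses the alert threshold."""
    return used_gb / aggregate_gb * 100 >= alert_pct

# A 10 TB aggregate carrying 15 TB of thin-provisioned volumes is 1.5x overcommitted.
print(overcommit_ratio(10000, 15000))      # 1.5
print(needs_procurement(10000, 7500))      # True  (75% used)
print(needs_procurement(10000, 5000))      # False (50% used)
```

The higher the overcommit ratio, the sooner that alert fires in practice, which is why lowering the autogrow trigger on an already-overcommitted aggregate needs caution.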

Again, each environment is different, and the proper thresholds should be set accordingly. This also depends on your organization's procurement process and the lead time needed for additional hardware.