2013-01-14 02:37 AM
I recently noticed that I get a lot of messages about autogrow for some volumes.
I am aware that the autogrow limit has been reached.
However, there is 30% free space in the volume and I still get a message about the autogrow failing.
Why does it keep trying to grow while there is enough free space?
It's also a bit annoying that this message repeats over and over again.
Mon Jan 14 11:30:53 CET [wafl.vol.autoSize.fail:info]: Unable to grow volume 'vol19' to recover space: Volume cannot be grown beyond maximum growth limit
filer1> vol status -v vol19
         Volume State           Status            Options
          vol19 online          raid_dp, flex     nosnap=on, nosnapdir=on,
                                sis               minra=off, no_atime_update=off,
Volume UUID: xxxxxxxxxxxxxxxxxxx
Containing aggregate: 'aggr1'
Plex /aggr1/plex0: online, normal, active
RAID group /aggr1/plex0/rg0: normal
RAID group /aggr1/plex0/rg1: normal
Snapshot autodelete settings for vol19:
Volume autosize settings:
2013-01-15 05:29 AM
What is the current size of the volume?
Can you post a "df -k /vol/vol19" output?
I had a similar case where the volume size had been manually increased beyond the maximum defined in autosize. This caused a flood of "Unable to grow..." messages in syslog.
I even opened a support case because I was convinced that autosize should not do anything while there is more than 20% free space. It turned out that this is by design: if the volume is already over the autosize maximum, the warnings are generated regardless of the volume's space usage. Future ONTAP versions should generate the warnings less frequently, but the only fix is to set the autosize maximum to be larger than the current volume size.
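To verify this, you can compare the current volume size with the autosize maximum and then raise the maximum. A rough 7-Mode sketch (the volume name matches the thread; all sizes and the output lines are illustrative examples, not taken from this system):

```shell
# Show the current size of the volume (illustrative output).
filer1> vol size vol19
vol size: Flexible volume 'vol19' has size 600g.

# Show the autosize settings; here the maximum (500g) is below
# the current size (600g), which triggers the repeated
# wafl.vol.autoSize.fail messages.
filer1> vol autosize vol19
Volume autosize is currently ON for volume 'vol19'.
The volume is set to grow to a maximum of 500g, in increments of 25g.

# Raise the autosize maximum above the current volume size
# (700g and 25g are example values) to stop the warnings.
filer1> vol autosize vol19 -m 700g -i 25g on
```

Make sure the containing aggregate actually has room for the new maximum before raising it, or the autogrow will still fail for lack of aggregate space.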
2013-01-17 06:25 AM
The size is indeed bigger than the maximum defined in autosize, so that explains the messages.
It is still a strange design choice to generate these messages even when no expansion of the volume is needed.
On the other hand, it does alert you that if ONTAP ever needs to autogrow the volume, the grow will fail and the volume might run out of space.
Thanks for your response on this!