Request to grow volume failed, volume size is greater than the maximum size

I just added about 4TB of space (5 new disks) to an existing 64-bit aggregate, yet I do not seem to be able to grow the volume on that aggregate to take up the new space. df for the volume also shows several TB available:

> df -Ah

Aggregate                total       used      avail capacity
aggr1                     43TB       37TB     6235GB      86%
aggr1/.snapshot            0GB       51GB        0GB     ---%
aggr0                   3532GB     2912GB      619GB      82%
aggr0/.snapshot          185GB       15GB      170GB       8%

> df -h vol5

Filesystem               total       used      avail capacity  Mounted on
/vol/vol5/                44TB       36TB     6235GB      86%  /vol/vol5/
snap reserve             920GB      449GB      471GB      49%  /vol/vol5/..

But "vol size" says I only have 57.9GB available (this vol is the only vol on the aggr):

> vol size vol5 +3000g
vol size: Request to grow volume 'vol5' failed because the resulting volume size is greater than the maximum size. Select a growth of at most +57.9GB.

I have never seen this happen before. What is going on here? And why is the total size of the volume larger than that of the aggregate?

I did grow the aggr by increasing the raid group size by 1 and adding a new disk to each raid group. Was that a big mistake?

Data ONTAP version 8.1.2.

Thanks in advance,

w

Re: Request to grow volume failed, volume size is greater than the maximum size

You didn't mention your hardware type. Maybe you're hitting the maximum volume size for your model? Check on the support site.

Re: Request to grow volume failed, volume size is greater than the maximum size

You do not say which filer model you have, which Data ONTAP version you are running, or which aggregate the volume resides in. Not to mention that "df -h" is pretty unsuitable for real analysis; it is meant as a quick human-readable overview.

"aggr show_space" would give a better view of actual aggregate space consumption, and "vol size" would show the exact configured size (not arbitrarily rounded up or down).
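For example (illustrative output, not from the poster's system; "vol5" stands in for the real volume name), "vol size" with no size argument reports the exact configured size where df rounds to the nearest TB:

> vol size vol5
vol size: Flexible volume 'vol5' has size 44900g.

"aggr show_space" also accepts a unit flag (-k, -m, -g, or -t) to force all columns into one unit, which makes the Allocated/Used/Avail figures directly comparable:

> aggr show_space -g aggr1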

Re: Request to grow volume failed, volume size is greater than the maximum size

This is a FAS3140 running ONTAP 8.1.2. The only information I could find on this legacy model listed a 16TB aggr/vol limit (probably for ONTAP 7), but assuming it's the same as the low-end 3200s, there may be a 50TB limit on aggrs and vols. I had no idea that ONTAP would let me waste disks by allowing me to add them to an aggr without being able to use them. Right now I have 17 disks per raid group, 117 ½TB disks in the system.

The numbers still don't add up; according to aggr show_space I should have some room to grow vol5:

> aggr show_space -h aggr1

Aggregate 'aggr1'

    Total space    WAFL reserve    Snap reserve    Usable space       BSR NVLOG           A-SIS          Smtape
           48TB          4960GB             0KB            43TB             0KB             0KB             0KB

Space allocated to volumes in the aggregate

Volume                          Allocated            Used       Guarantee
vol5                                 37TB            37TB            none

Aggregate                       Allocated            Used           Avail
Total space                          37TB            37TB          6201GB
Snap reserve                          0KB            45GB             0KB
WAFL reserve                       4960GB           506GB          4453GB

Although df shows:

/vol/vol5/                44TB       36TB     6201GB      86%  /vol/vol5/
snap reserve             920GB      458GB      462GB      50%  /vol/vol5/.

I am going to call support on this one. Looks like I have to destroy this aggr and recreate two aggrs to replace it.


Re: Request to grow volume failed, volume size is greater than the maximum size

You have 43TB of usable space in the aggregate; you cannot grow the volume beyond that size. Could you clarify what your question is?

Re: Request to grow volume failed, volume size is greater than the maximum size

NetApp says Data ONTAP 8.1.2 on a FAS3140 has:

- 75TB limit on aggr size

- 50TB limit on volume size

- 16TB limit (legacy) on LUN size

I think I misread the free space in aggr1's "df -A" output as a side effect of not having a space guarantee enabled for the volume. I almost always see the space guarantee (what I called space reservation) turned on, so that a volume grown to fill 100% of an aggregate makes the aggregate show 100% full, even if there is little or no data in the volume itself. So there doesn't seem to be a problem with my aggr after all, just my misinterpretation of the df output.
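The effect of the guarantee on "df -A" can be sketched with round, made-up numbers (not this filer's actual figures): take an aggregate with 43TB usable holding one 40TB volume with 37TB written. With guarantee=volume, the aggregate's Avail is usable space minus the volume's full configured size (43 − 40 = 3TB) from the moment the volume is created; with guarantee=none, it is usable space minus blocks actually used (43 − 37 = 6TB), so the aggregate can look comfortably free even when the volume's configured size already covers nearly all of it. The current setting shows up in the volume's option list:

> vol options vol5
nosnap=off, nosnapdir=off, ..., guarantee=none, ...

(most options elided here; "guarantee" is the one to check).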

Thanks to all who replied!