I know this will all go away when we upgrade to 64-bit aggregates, but this was very frustrating:
We have an existing aggregate of 2 x DS4243 shelves (48 disks x 266 GB right-sized).
Data ONTAP shows the existing capacity as 9.8 TB (via df -Ah).
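Quick math on those 48 disks, assuming the default layout of three 16-disk RAID-DP groups (that layout is my guess, adjust for your actual raid group size):

# Raw data capacity of the existing 48-disk aggregate.
# Assumption (mine): 3 RAID-DP groups of 16 disks = 6 parity disks,
# binary units throughout.
DISKS = 48
PARITY_DISKS = 6
RIGHT_SIZED_GB = 266

data_tb = (DISKS - PARITY_DISKS) * RIGHT_SIZED_GB / 1024
print(f"raw data capacity: {data_tb:.2f} TB")   # ~10.91 TB

# Note: df -Ah reports only 9.8 TB -- keep that ~1.1 TB gap in mind.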
We went to add one more shelf (24 x 266 GB) and found the resulting size slightly over the 16 TB limit:
aggr add aggr2 -d 3d.02.0 3d.02.1 3d.02.2 3d.02.3 3d.02.4 3d.02.5 3d.02.6 3d.02.7 3d.02.8 3d.02.9 3d.02.10 3d.02.11 3d.02.12 3d.02.13 3d.02.14 3d.02.15 3d.02.16 3d.02.17 3d.02.18 3d.02.19 3d.02.20 3d.02.21
Note: preparing to add 20 data disks and 2 parity disks.
Continue? ([y]es, [n]o, or [p]review RAID layout) y
Aggregate size 16.08 TB exceeds limit 16.00 TB
File system size 16.08 TB exceeds maximum 15.99 TB
aggr add: Can not add specified disks to the aggregate because the aggregate size limit for this system type would be exceeded.
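Doing the arithmetic on where that 16.08 TB figure comes from (assuming, as the "20 data disks and 2 parity disks" note suggests, that only data disks count toward file-system size, and that ONTAP is using binary units -- both assumptions on my part):

# Where does the 16.08 TB in the error message come from?
NEW_DATA_DISKS = 20                 # 22 disks added = 20 data + 2 parity
RIGHT_SIZED_GB = 266

new_data_tb = NEW_DATA_DISKS * RIGHT_SIZED_GB / 1024    # ~5.20 TB
implied_existing_tb = 16.08 - new_data_tb               # ~10.88 TB
print(f"new data disks add : {new_data_tb:.2f} TB")
print(f"implied existing FS: {implied_existing_tb:.2f} TB")

# ~10.88 TB is right in line with the ~10.91 TB raw data figure above,
# and well above the 9.8 TB df shows -- so the limit check and df are
# clearly counting different things.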
OK, fine, we'll add 21 of the 24 disks instead (overkill on the spares for now).
Now, that is not the most frustrating part (remember, we will get this capacity back when we go 64-bit).
The frustrating part is that when we added the 21 disks, we did not end up with ~15.8 TB (as you'd expect: 16.08 - 0.26 = 15.82 TB).
No, we ended up with only 14 TB (note the aggregate snapshot reserve is disabled/zero):
df -Ah
Aggregate          total     used    avail  capacity
aggr2               14TB   8324GB   6257GB       57%
aggr2/.snapshot      0TB      0TB      0TB      ---%
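My best guess at the gap: the WAFL reserve, commonly documented as ~10% of the aggregate, is deducted after the 16 TB limit check rather than before it. A quick sanity check under that assumption:

# Does a ~10% WAFL reserve explain the 15.82 TB -> 14 TB gap?
# Assumption (mine): a flat 10% reserve taken off the top AFTER the
# 16 TB limit check; sizes in binary TB.
WAFL_RESERVE = 0.10
fs_size_tb = 16.08 - 266 / 1024        # 21 disks instead of 22: ~15.82 TB

usable_tb = fs_size_tb * (1 - WAFL_RESERVE)
print(f"pre-reserve size : {fs_size_tb:.2f} TB")     # ~15.82 TB
print(f"after 10% reserve: {usable_tb:.2f} TB")      # ~14.24 TB

# Cross-checks against the df -Ah output above:
df_total_gb = 8324 + 6257                            # used + avail
print(f"df total         : {df_total_gb / 1024:.2f} TB")   # ~14.24 TB
# And the existing aggregate: 10.88 TB * 0.9 = 9.79 TB -- the 9.8 TB
# df reported all along.

If that's right, it makes the complaint concrete: the 16 TB cap is enforced against the pre-reserve size, while df only ever shows the post-reserve size.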
Can someone explain why ONTAP appears to keep two sets of books for these calculations?
If there is overhead, it should be included in the final usable-capacity calculation so these discrepancies are eliminated.
thanks
Fletcher
http://vmadmin.info