Aggregate creation - how to maximize usable space
2017-03-29 06:51 AM
Hi, apologies, but I'm new to NetApp storage systems.
My company has a NetApp storage system in a HA pair;
2 x FAS3220 7-mode 8.1.3
To future-proof the system we're adding another DS2246 shelf with 24 x 900GB 10K SAS disks, and we will have to create a new aggregate from this new shelf.
My problem is calculating the usable space I'll end up with from the new aggregate, as I'm having trouble understanding how the overhead is calculated.
The method used to create the previous aggregates was as follows;
Assign new disks to controller
Create new aggregate from hot spares: 24 x 900GB disks, two reserved as hot spares, leaving 22 disks
aggr create new_aggr_sas01 -r 22 -B 64 -t raid_dp 22@762
This has led me to question how the aggregates were created, as I don't believe we got the most out of the shelves. I understand -r 22 is a RAID group size of 22 disks, -B 64 means a 64-bit aggregate, and -t raid_dp is the RAID type; it's the last part that vexes me: number of disks @ disk size in gigabytes? I believe the figure was derived from the usable capacity minus 10% for ONTAP overhead:
1TB SATA usable capacity is 847,555 MB - 10% ≈ 762GB
Firstly, I believe this was wrong, as the disks were 900GB SAS, so it should have been:
900GB SAS usable capacity is 857,000 MB - 10% ≈ 771GB
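Taking the quoted capacities at face value (assumed here to be right-sized figures in MB), the arithmetic behind the 762 and 771 figures works out as a short sketch:

```python
# Reproduces the derivation of the @size figures described above.
# Assumed: right-sized capacities of 847,555 MB (1TB SATA) and
# 857,000 MB (900GB SAS), minus a 10% deduction, truncated to whole GB.

sata_right_sized_mb = 847_555
sas_right_sized_mb = 857_000

sata_at_size = int(sata_right_sized_mb * 0.9 / 1000)   # the "@762" actually used
sas_at_size = int(sas_right_sized_mb * 0.9 / 1000)     # the "@771" it arguably should have been

print(sata_at_size, sas_at_size)
```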
Over 22 disks we've lost 198GB; not a great loss, but should the figure have been used in the first place? I'm getting a bit confused by the overhead. Most examples online for creating an aggregate don't include the 'number of disks @ disk size in gigabytes' part at all; should it have been used, or should the command simply have been:
aggr create new_aggr_sas01 -r 22 -B 64 -t raid_dp
It just seems to me that WAFL is reducing each disk to 847GB usable capacity, we're reducing it a further 10% to 762GB (though it should have been 771GB?) by using the 'number of disks @ disk size' part, and then when the aggregate is created we lose another 10% to the WAFL reserve:
ournap01> aggr show_space new_aggr_sas01 -g

 Total space    WAFL reserve    Snap reserve    Usable space    BSR NVLOG    A-SIS    Smtape
     16737GB          1673GB             0GB         15064GB          0GB     71GB       0GB

Space allocated to volumes in the aggregate

Volume          Allocated    Used      Guarantee

Aggregate       Allocated    Used      Avail
Total space       10156GB    4396GB    4835GB
Snap reserve          0GB       0GB       0GB
WAFL reserve       1673GB     177GB    1496GB
So 22 x 900 = 19,800GB - 15,064GB = a loss of nearly 5TB to overhead? How would you have created the aggregates? Appreciate your help!
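As a rough sanity check, the show_space figures above can be reproduced to within a GB or so with a short sketch, assuming each 900GB disk is right-sized to 857,000 MiB, ONTAP reports these "GB" values in GiB, and the WAFL reserve is 10% of total space:

```python
# Rough reconciliation of the aggr show_space output above.
# Assumptions: 900GB SAS disks are right-sized to 857,000 MiB each,
# ONTAP's "GB" in show_space output is really GiB, and the WAFL
# reserve is 10% of the aggregate's total space.

RIGHT_SIZED_MIB = 857_000           # right-sized capacity per 900GB disk
disks = 22
parity = 2                          # RAID-DP: one parity + one dparity
data_disks = disks - parity         # 20 data disks

per_disk_gib = RIGHT_SIZED_MIB / 1024
total_gib = data_disks * per_disk_gib     # ~16,738 GiB ("Total space")
wafl_reserve = total_gib * 0.10           # ~1,674 GiB ("WAFL reserve")
usable = total_gib - wafl_reserve         # ~15,064 GiB ("Usable space")

print(f"Total space : {total_gib:,.0f} GiB")
print(f"WAFL reserve: {wafl_reserve:,.0f} GiB")
print(f"Usable      : {usable:,.0f} GiB")
```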
Re: Aggregate creation - how to maximize usable space
2017-07-04 07:40 AM
Hi, I realise this is a little old, but just to confirm: the command you used was correct.
While the disk size you used to create the aggregate was indeed the right-sized capacity for 1TB disks, since the 900GB disks are within the limits of that figure* it will still have used the correct 900GB drives. Importantly, the @size value does not set the size of the aggregate; it only tells ONTAP which disks to select within those parameters - the resulting aggregate is sized according to the right-sized capacity of the disks actually used.
For future reference, the figure to use in the @size argument can be obtained from the sysconfig -r output.
Also, to confirm: 900GB is the marketed size of the disks, but they are physically 858,483 MiB and, as you stated, right-sized down to 857,000 MiB. The 10% WAFL reserve is required for filesystem operations.
You are therefore looking at 2 disks for parity (RAID-DP), the 10% WAFL reserve, and the difference between the physical and right-sized capacities, which together make up the "overhead".
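A back-of-the-envelope version of that breakdown, keeping everything in decimal (marketed) GB so the units aren't mixed (figures assumed from the thread: 22 x 900GB disks right-sized to 857,000 MiB, 2 parity disks, 10% WAFL reserve), also shows that part of the apparent "nearly 5TB" gap is simply decimal GB versus GiB:

```python
# Where the "nearly 5TB" goes, in decimal (marketed) GB throughout.
# Assumed figures from the thread: 22 x 900GB disks, right-sized to
# 857,000 MiB each, 2 parity disks (RAID-DP), 10% WAFL reserve.

MIB = 1024 * 1024                       # bytes per MiB
GB = 1_000_000_000                      # decimal (marketed) gigabyte

disks = 22
marketed_gb = 900
right_sized_gb = 857_000 * MIB / GB     # ~898.6 decimal GB per disk

raw = disks * marketed_gb               # 19,800 GB as marketed
right_sizing_loss = disks * (marketed_gb - right_sized_gb)
parity_loss = 2 * right_sized_gb        # two RAID-DP parity disks
data = (disks - 2) * right_sized_gb     # 20 data disks
wafl_loss = data * 0.10                 # 10% WAFL reserve
usable = data - wafl_loss               # decimal GB

print(f"Right-sizing : {right_sizing_loss:7.0f} GB")
print(f"Parity disks : {parity_loss:7.0f} GB")
print(f"WAFL reserve : {wafl_loss:7.0f} GB")
print(f"Usable       : {usable:7.0f} GB  (= {usable * GB / 1024**3:,.0f} GiB)")
```

The usable figure comes out at roughly 16,175 decimal GB, which is the same 15,064 "GB" (really GiB) that show_space reports; the remaining gap down from 19,800GB is the parity, WAFL reserve, and right-sizing components above.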
I appreciate it's late, but I hope it helps.
* Disks that are within 10% of the specified size will be selected for use in the aggregate. - https://library.netapp.com/ecmdocs/ECMP1196461/html/cmdref/man1/na_aggr.1.html