
Aggregate creation - how to maximize usable space

Hi, apologies, but I'm new to NetApp storage systems.

 

My company has a NetApp storage system in an HA pair:

 

2 x FAS3220 7-mode 8.1.3

 

To future-proof the system we're adding another DS2246 shelf with 24 x 900GB 10K SAS disks. We will have to create a new aggregate from this new shelf.

 

My problem is calculating the usable space I'll end up with from the new aggregate, as I'm having trouble understanding how the overhead is calculated.

 

The method used to create the previous aggregates was as follows:

 

Assign new disks to controller

 

Create new aggregate from hot spares: 24 x 900GB disks, two required as hot spares, leaving 22 disks

 

aggr create new_aggr_sas01  -r 22 -B 64 -t raid_dp 22@762

 

This has led me to question how the aggregates were created, as I don't believe we got the most out of the shelves. I understand -r 22 is a RAID group size of 22 disks, -B 64 creates a 64-bit aggregate, and -t raid_dp is the RAID type. It's the last part that vexes me: number of disks @ disk size in gigabytes. I believe the figure is derived from the usable capacity minus 10% ONTAP overhead:

 

1TB SATA usable capacity is 847,555 MB – 10% = 762GB

 

Firstly, I believe this was wrong, as the disks are 900GB SAS, so it should have been:

 

900GB SAS usable capacity is 857,000 MB – 10% = 771GB
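That arithmetic is easy to check. A minimal sketch, using the MB figures quoted above and the decimal MB-to-GB conversion the original 762/771 numbers appear to assume:

```python
# Right-sized capacity minus the assumed 10% overhead, as described above.
# Inputs are the right-sized sizes in MB quoted in the post; the /1000
# matches the decimal conversion implied by the 762GB and 771GB figures.
def at_size_gb(rightsized_mb: int) -> int:
    return int(rightsized_mb * 0.9 / 1000)

sata_1tb = at_size_gb(847_555)   # 762 - the figure actually used in aggr create
sas_900 = at_size_gb(857_000)    # 771 - what it arguably should have been
print(sata_1tb, sas_900)
print((sas_900 - sata_1tb) * 22)  # 198 - the GB "lost" across 22 disks
```

Which reproduces the 198GB difference over 22 disks mentioned below.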

 

Over 22 disks we've lost 198GB. Not a great loss, but should that figure have been used in the first place? I'm getting a bit confused by the overhead: most online examples of creating an aggregate don't include the number-of-disks @ disk-size part at all. Should it have been included, or should the command simply have been:

 

aggr create new_aggr_sas01  -r 22 -B 64 -t raid_dp

 

It just seems to me that WAFL right-sizing already reduces each disk to 847GB usable capacity, we reduced that by a further 10% to 762GB (which should have been 771GB?) by using the 'number of disks @ disk-size' part, and then when the aggregate is created we lose another 10% to the WAFL reserve:
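If those reductions really did stack, the per-disk usable space would shrink twice. A sketch of that worry, using the per-disk figures from this post (this is the hypothetical double-counting scenario, not a claim about what ONTAP actually does):

```python
# Hypothetical double reduction: right-sized 847GB per disk, minus 10%
# for the @disk-size figure, minus another 10% WAFL reserve at
# aggregate creation time.
rightsized = 847                      # GB after right-sizing (figure from the post)
after_at_size = rightsized * 0.9      # ~762GB - the @762 figure used
after_wafl = after_at_size * 0.9      # another 10% off for WAFL reserve
print(round(after_at_size), round(after_wafl))  # 762 686
```

If both 10% cuts applied per disk, each 900GB disk would net only ~686GB, which is the scenario the question is trying to rule in or out.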

 

ournap01> aggr show_space new_aggr_sas01  -g

Aggregate 'new_aggr_sas01'

 

    Total space    WAFL reserve    Snap reserve    Usable space       BSR NVLOG           A-SIS          Smtape

        16737GB          1673GB             0GB         15064GB             0GB            71GB             0GB

 

Space allocated to volumes in the aggregate

 

Volume                          Allocated            Used       Guarantee

 

Aggregate                       Allocated            Used           Avail

Total space                       10156GB          4396GB          4835GB

Snap reserve                          0GB             0GB             0GB

WAFL reserve                       1673GB           177GB          1496GB
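The show_space figures are at least internally consistent. A quick check, assuming the WAFL reserve is simply 10% of total space, truncated:

```python
# Figures in GB, taken directly from the show_space output above.
total, wafl_reserve, snap_reserve, usable = 16737, 1673, 0, 15064

assert int(total * 0.10) == wafl_reserve            # reserve is 10% of total
assert total - wafl_reserve - snap_reserve == usable  # usable = total - reserves
print("consistent")
```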

 

So 22 x 900GB = 19,800GB, minus the 15,064GB usable, is a loss of nearly 5TB to overhead? How would you have created the aggregates? I'd appreciate your help!
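For what it's worth, the gap seems to decompose into parity disks, right-sizing plus base-2 reporting, and the WAFL reserve. A rough reconciliation, under assumptions I can't fully confirm (the 857,000 MB right-sized figure quoted earlier in the post, 2 parity disks in the single 22-disk RAID-DP group, ONTAP reporting binary GB, and a 10% WAFL reserve):

```python
# Rough reconciliation of 22 x 900GB marketing capacity vs the 15064GB
# usable reported by show_space. All of the following are assumptions:
# - 2 of the 22 disks are RAID-DP parity (-r 22, one RAID group)
# - right-sized capacity of a 900GB SAS disk is 857,000 MB (from the post)
# - ONTAP reports binary GB (1 GB = 1024 MB)
# - WAFL reserve is 10% of the aggregate's total space

data_disks = 22 - 2                       # parity disks hold no data
per_disk_gb = 857_000 / 1024              # ~836.9 binary GB per data disk
total_space = data_disks * per_disk_gb    # ~16738GB; show_space reports 16737GB
usable = total_space * 0.9                # minus the 10% WAFL reserve
print(round(total_space), int(usable))    # ~16738 and ~15064
print(22 * 900 - int(usable))             # ~4736GB: the "nearly 5TB"
```

Under those assumptions the reported numbers fall out almost exactly, and the ~4.7TB splits into 1,800GB of parity, roughly 1,260GB of right-sizing plus decimal-vs-binary GB, and ~1,670GB of WAFL reserve.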