FAS2020 - 12 disks, Aggregates?

Hi, I would like some advice on the best way to configure aggregates. I have two FAS2020s; one will be at a co-location site running async SnapMirror.

Both filers have dual heads and 12 disks each. When configuring the first one, I've seen that out of the box it comes with an aggregate consisting of 4 disks plus 2 spares. The other disks are labelled as "partner" disks. My question is: should I delete the aggregate and create a new aggregate encompassing all the disks, so 10 disks + 2 spares?

If I delete the one and only aggregate, do I need to re-install any software on the filer afterwards?

Sorry for the basic questions, newbie of note!

FAS2020 - 12 disks, Aggregates?

Each FAS2020A has 2 controllers, and each controller requires its own root volume and therefore an aggregate to contain it. The minimum RAID4 aggregate size is 2 disks (1 data and 1 parity), plus you would keep 1 disk as a hot spare.

Your system has been built with 6 disks assigned to each controller and each controller has a 4 disk aggregate and 2 hot spares as you already noticed.

As your system is an active/active cluster, why not share your data, and therefore the load, between the controllers? If either controller fails for whatever reason, the other controller will take over and serve its data, as both controllers are connected to each other's disks (as you saw with the disks labelled as "partner").

Another option, if one controller requires more space than the other: in that case you would reboot the controller requiring less space, hit Ctrl-C when prompted for the Special Boot Options Menu, and choose option 4a to initialise the disks and create a new flexible root volume. This will create a 3-disk RAID-DP aggregate containing the root volume, and you can then use software-based disk assignment to assign disks to the other controller and expand its aggregate. If you use this method, you will need to reinstall Data ONTAP on the controller with the new aggregate; make sure you install the same version as is in use on the partner controller.
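As a rough sketch of the disk reassignment step, assuming Data ONTAP 7-mode console syntax with software disk ownership (the disk name and hostname below are made up for illustration, not taken from your system):

```shell
# On the controller giving up disks (7-mode console, software ownership assumed):
disk show -n                     # list unowned/unassigned disks
disk assign 0c.00.11 -o filer2   # assign a disk to the partner; disk name and owner are hypothetical

# On the controller receiving the disks:
aggr status -s                   # confirm the disks now show up as spares
aggr add aggr0 2                 # grow the existing aggregate by two of those spares
```

Exact disk names and aggregate names will differ on your system, so check `disk show` and `aggr status` output before assigning anything.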

TBH this is all going to sound quite confusing if you're a newbie. I'd recommend having a good read through the documentation so you have an understanding of how filers manage their disks/aggregates/volumes.

Re: FAS2020 - 12 disks, Aggregates?

thanks, that has cleared up quite a bit.

Re: FAS2020 - 12 disks, Aggregates?

Just bear in mind that in a dual-controller setup each controller will require disks to boot from, so you can’t assign all 12 disks to one aggregate…it is a big challenge on the smaller systems, as you end up losing disks…

If I really need the space in one aggregate on one controller, I end up going for 10 disks on one, in RAID-DP with a hot spare…then the other controller, which basically does nothing other than be there, gets two disks in a RAID4 config…

However, if you can spare the capacity, the suggestion of 3 disks on one is probably better: either RAID4 with a spare, or RAID-DP to give you some resilience in the event of failure…not best practice, but sometimes real-world practice has to take over!

FAS2020 - 12 disks, Aggregates?


There are a number of threads already on this subject, with very thorough answers & hints:



Re: FAS2020 - 12 disks, Aggregates?

Great, all making much more sense now, thanks for the links. The config I'm leaning towards is as follows: four data disks per controller, without hot spares.

C1: 6 disks (4D + 2P) (no hotspare)

C2: 6 disks (4D + 2P) (no hotspare)
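Assuming each controller's factory aggr0 is already a 4-disk RAID-DP aggregate (as it typically is out of the box), one way to reach that 4D + 2P layout is simply to absorb the two spares into aggr0. A minimal sketch in 7-mode console syntax:

```shell
aggr status -s        # confirm the two spares are present
aggr add aggr0 2      # add both spares to aggr0 -- this leaves no hot spare
aggr status -r aggr0  # verify the layout: 4 data + 2 parity (parity/dparity)
```

If your aggr0 turns out to be RAID4 rather than RAID-DP, the disk counts work out differently, so verify with `aggr status -r` first.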

If my drives are 500GB, then what is going to be my actual "usable" space?

FAS2020 - 12 disks, Aggregates?

When you take right-sizing and WAFL overhead into account, I'd guess you'll have approx. 1.6TB of usable space per controller, minus any aggregate/volume snap reserve configured.
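That estimate can be sanity-checked with a back-of-the-envelope calculation. The right-sized capacity and WAFL reserve below are assumed round figures, not exact values for your drives, so expect the real number to land somewhere in the same ballpark rather than match precisely:

```python
# Rough usable-space estimate for 4 data disks per controller.
# Assumptions: a "500GB" SATA drive right-sizes to roughly 423GB in
# Data ONTAP, and WAFL reserves about 10% of the aggregate.
DATA_DISKS = 4
RIGHT_SIZED_GB = 423     # assumed right-sized capacity of a 500GB drive
WAFL_RESERVE = 0.10      # assumed WAFL overhead

raw_gb = DATA_DISKS * RIGHT_SIZED_GB
usable_gb = raw_gb * (1 - WAFL_RESERVE)
print(f"raw (right-sized): {raw_gb} GB, usable after WAFL: {usable_gb:.0f} GB")
```

Any aggregate or volume snap reserve comes off that figure as well, so the space actually available to data can be noticeably lower.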

Running without hot spares is contrary to NetApp recommendations, but I can see your point as you don't have a huge number of drives to play with. Make sure your monitoring is on the ball so you can replace any failed disks quickly; multiple failures per RAID group are rare, but they do happen...