RAID Groups – Aggregates - FlexVols


I am new to netapp so apologies if these are basic questions.

With RAID DP you create multiple RAID groups of 14 disks, an aggregate is created on top of these groups, and then flexvols are created within this aggregate?

If I wanted to build an array with 100TB usable capacity (an example), would I create many RAID DP groups (max 14/16 disks?) with many aggregates (max size 16TB?), then create lots of flexvols within these aggregates and give hosts access to these flexvols?

What is the purpose of the root aggregate and how and when should these be created? # of disks?

Do you need a hot spare per tray or can you have global?

Also, can anyone point me in the direction of good documentation, particularly with respect to RAID, aggregates, flexvols and how they are put together with large arrays.

Many thanks for any help.

Re: RAID Groups – Aggregates - FlexVols

I'll answer a few of your questions...

Yes: you have disks, then RAID groups (14 disks for SATA, 16 for FC by default; this can be tweaked), then aggregates, then volumes, then data (or LUNs, then data).
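As a rough sketch of that layering, the stack can be modelled as nested containers (all names here are made up for illustration, not real ONTAP objects):

```python
# Disks -> RAID groups -> aggregate -> FlexVols -> data/LUNs
aggregate = {
    "name": "aggr1",                          # hypothetical name
    "raid_groups": [
        {"type": "raid_dp", "disks": 16},     # FC default: 14 data + 2 parity
        {"type": "raid_dp", "disks": 16},
    ],
    "flexvols": ["vol_nfs1", "vol_lun1"],     # volumes carved from the aggregate
}

# The aggregate's raw disk count is just the sum over its RAID groups
total_disks = sum(rg["disks"] for rg in aggregate["raid_groups"])
print(total_disks)  # 32
```

The point of the model: FlexVols never touch disks directly; they only see the aggregate, which is why you can grow and shuffle volumes without re-laying-out RAID.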

If you want 100TB usable then yes, you do need to carve it up into separate FlexVols in order to provide it all. ONTAP 8 introduces 64-bit aggregates, so the 16TB cap is raised according to the hardware you use.
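To put a number on the carving-up, here is a minimal sketch assuming the ONTAP 7 16TB (32-bit) aggregate cap; the exact limit depends on version and hardware:

```python
import math

USABLE_TB = 100      # target usable capacity from the question
AGGR_CAP_TB = 16     # 32-bit aggregate limit in ONTAP 7

# Minimum number of aggregates needed just to address 100 TB
aggrs_needed = math.ceil(USABLE_TB / AGGR_CAP_TB)
print(aggrs_needed)  # 7 aggregates of <= 16 TB each
```

On 64-bit aggregates in ONTAP 8 the cap is much higher, so the same capacity could sit in far fewer (or one) aggregate.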

The root aggregate is simply whatever aggregate contains vol0, the system volume that contains the main OS. If you have a system as large as you are describing, you would separate out the root aggregate to have its own disks. If you have a smaller system where disk is at a premium (2000 series), it can be within a normal data aggregate.

Remember that dedupe (A-SIS), clones and thin provisioning can allow you to allocate and address more storage than you technically have.

Your hot spares are global per controller, per disk type. So in a cluster you need a minimum of 2 hot spares per node. If you have SATA and FC disks, you'll need 2 of each. There is a table that shows how many spares to keep according to the number of disks you have in total, but I can't recall exactly where (hopefully another member will be able to help you out).

Have a look around and look at the TRs. There are a couple of best practice guides. If you have a NOW account, search the product documentation libraries; these will show you the recommended maximums and minimums of a configuration.

Re: RAID Groups – Aggregates - FlexVols

You understand the storage stack pretty well.

If you're running ONTAP 7, you do have an aggregate limit of 16TB, so you'd have to deal with that.  Starting in ONTAP 8 (which is released now), that limit is raised (how high depends on your model).

As far as RAID group size goes, the default is typically 16 (14+2).  This can be raised, especially if you have SAS/FC disks (not so much on SATA), but you have to balance rebuild times against RAID overhead.

While RAID-DP protects you during the rebuild of up to 2 disks in a RG, there is a performance penalty while the rebuild is going on, so getting done with it is a good thing.  My advice is to stick with the default unless there's a good reason to change it.
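The trade-off is easy to see in numbers: RAID-DP always costs 2 parity disks per RAID group, so bigger groups waste proportionally less on parity but take longer to rebuild. A quick sketch:

```python
# RAID-DP parity overhead as a fraction of the RAID group size,
# for a few candidate group sizes (data disks + 2 parity disks).
for rg_size in (8, 16, 28):
    overhead = 2 / rg_size    # fraction of the group lost to parity
    print(f"rg_size={rg_size}: {overhead:.1%} parity overhead")
# rg_size=8:  25.0%  -- fast rebuilds, expensive in disks
# rg_size=16: 12.5%  -- the usual default sweet spot
# rg_size=28:  7.1%  -- cheap in disks, long rebuilds
```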

The root aggregate is just the aggregate that holds your root volume.  Some people want it separate; I don't think that's really necessary (and, yes, I understand the arguments, so there's no need to try and convince me).  Ultimately, it's up to you.  Personally, for the very small chance of a dedicated root volume having a benefit, I don't find it worth the lost efficiency.  But that's just me and my 10+ years at NetApp talking.  Feel free to follow your gut on that one.

Hot spares are global.  There's no way to dedicate a spare to a shelf.  That being said, you want at least one hot spare of any given drive type and size on your system.  As you increase the # of those disks, it's probably not a bad idea to keep a couple more.  There's no set formula.  I've seen people use ratios like 1 spare for every 56 or even 84 disks.  Again, it depends on your environment.  I will say it's not a bad idea, if possible, to have 2 spares of each type if you want Maintenance Center to work, which is kind of cool.  But on smaller configs, that's probably not practical.

I hope that gets you started.

Re: RAID Groups – Aggregates - FlexVols

Adam, just a small correction: the minimum recommended number of hot spares is 2 per disk type. This is so you get the disk maintenance centre, which allows repairing of software failures on disks. Also, there is a defined recommendation for the number of hot spares per number of disk shelves. I'll have to try to dig out the table for you...

Re: RAID Groups – Aggregates - FlexVols

Sorry to double post...

It's in the Storage Best Practice and Resilience Guide - Page 13

nsitps1976: This is probably a good doc for you to have a read through also.

Re: RAID Groups – Aggregates - FlexVols

This is very helpful information, thank you both.

Can I ask one more question? Disregarding thin provisioning, dedupe etc., if a customer wanted, say, 20TB usable (single disk type for simplicity, 600GB SAS), how would you build a NetApp system? Is the below correct?

- Create sufficient number of RGs to allow 20TB usable:

- 600GB - 10% (overhead) = 540GB

- 20TB / 540GB = 38 data disks

- 38 / 12 disks ≈ 3.2, which gives me 3 RGs of 12 data disks rounded down (I could add a few more disks to account for the remainder if needed)

- 3 RGs need an extra 6 disks for double parity (2 per RG)

- 2 or 4 spare disks, 1 or 2 per controller

- Model of array and number of trays to accommodate this number of disks
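The arithmetic in the list above can be sketched as follows (using the same assumed 10% overhead figure; as the replies note, the real overheads are larger):

```python
import math

DISK_GB = 600
OVERHEAD = 0.10                              # assumed ~10% (real figure is larger)
usable_per_disk = DISK_GB * (1 - OVERHEAD)   # 540 GB

target_gb = 20 * 1000                        # 20 TB usable, decimal TB
data_disks = math.ceil(target_gb / usable_per_disk)
print(data_disks)                            # 38 data disks

full_rgs = data_disks // 12                  # 3 full RGs of 12 data disks
leftover = data_disks % 12                   # 2 disks still to place somewhere
parity = full_rgs * 2                        # RAID-DP: 2 parity disks per RG
print(full_rgs, leftover, parity)            # 3 2 6
```

So with these assumptions: 38 data + 6 parity + spares, before accounting for the extra overheads discussed below.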


Re: RAID Groups – Aggregates - FlexVols

First thing to clarify: remember that each controller in a cluster has its own storage, so if you split 20TB across a cluster, you'd probably have 10TB on each controller. Each controller would have its own aggregates, RAID groups, hot spares, and so on. So when you mention having 3 RAID groups, you may actually configure this with 4 and split the disks down the middle, or have a bias and give one controller 2 RGs and the other 1.

The overheads are a little more than that. Right-sizing of a disk usually takes 5-10% (depending on the type), then there's 10% overhead for WAFL and 5% overhead for aggregate snapshots (this allows for recovering volumes and other things). So your 600GB SAS disk would be closer to 450GB usable (that's not an exact figure, I haven't got the calc in front of me). So you're looking at around 45 data disks, meaning 2 shelves of DS4243 (24 disks per shelf) would be a little tight. You may want 3 shelves to accommodate snapshots, clones and additional growth.
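A sketch of those stacked overheads, with assumed percentages (right-sizing varies by disk type, so this is only a ballpark, not an exact ONTAP calculation):

```python
import math

# Usable capacity per 600 GB SAS disk after stacked overheads.
# All percentages below are assumptions, not exact ONTAP figures.
DISK_GB = 600
right_sized = DISK_GB * 0.90       # assume ~10% right-sizing loss
after_wafl  = right_sized * 0.90   # 10% WAFL overhead
usable      = after_wafl * 0.95    # 5% aggregate snapshot reserve
print(round(usable))               # ~462 GB, in the ballpark of the ~450 quoted

data_disks = math.ceil(20_000 / usable)
print(data_disks)                  # 44 disks, close to the ~45 in the reply
```

Note how the losses multiply rather than add: three modest-sounding percentages turn 600GB into roughly three-quarters of that.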

Any controller model in the NetApp FAS range will provide this for you, clustered or otherwise.

NetApp internal teams and any reseller have access to sizing tools which would allow someone to easily spec this up for you in much more detail. These aren't released to the public or end-users as they require a little training to know the limitations and considerations of sizing, and NetApp obviously don't want to be held liable for something being sized wrong (you hear lots of stories of that happening in the industry!).

You can also optimise the RAID groups, and we tend to do this on larger systems or when you are going to max out an aggregate. For instance, if you end up with 2 RAID groups, one of 16 and one of 14, it would be better to have a RAID group size of 15. On some occasions you can have the default RAID group size of 16 but be left with 5 disks at the end, and you don't really want a RAID group that small. So you can tweak the RAID group sizing to even out all the RAID groups. This is probably a bit further down the line than what you are asking, but I think it's useful to keep in mind.
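That evening-out idea can be sketched as: take the fewest groups that fit under the maximum size, then divide the disks as evenly as possible. The helper below is hypothetical, not an ONTAP tool:

```python
import math

def even_raid_groups(total_disks, max_rg_size=16):
    """Split disks into the fewest RAID groups of near-equal size."""
    n_groups = math.ceil(total_disks / max_rg_size)
    base = total_disks // n_groups
    extra = total_disks % n_groups      # this many groups get one extra disk
    return [base + 1] * extra + [base] * (n_groups - extra)

print(even_raid_groups(30))   # [15, 15]     instead of 16 + 14
print(even_raid_groups(37))   # [13, 12, 12] instead of 16 + 16 + 5
```

Evenly sized groups keep per-group rebuild exposure and performance consistent, which is the reason behind the 15+15 recommendation above.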

Re: RAID Groups – Aggregates - FlexVols

I forgot to add:

- Create 2 x 10TB aggrs or one 20TB aggr, assuming the system can handle this

- Create flexvols on this/these aggrs

Re: RAID Groups – Aggregates - FlexVols

A cluster is basically an array with 2 controllers, right? Active/active?

How and when are disks, RGs and aggregates assigned to controllers?

Also, on which controller should the root be created, or does this not matter?

Re: RAID Groups – Aggregates - FlexVols

Not really. The cluster is 2 arrays, active/active. They can control each other's disks at the point of a failover, but the rest of the time they are seen and addressed as 2 independent storage arrays (with some slight exceptions for FC-attached hosts). It's sort of similar to how you might configure an active/active Microsoft cluster: each can fail over to the other, but each has its own storage and acts independently when not failing over.

Because of this, each needs its own root volume (and as such a root aggregate). Don't get caught up in the idea of a root aggregate too much; for a lot of systems this is simply a root volume within the normal data aggregate.

You'll really want to assign the disks and create all the aggregates right at the start, before any data hits the system. Obviously this is all during the planning and scoping stages and should be defined before you even take delivery of the NetApp. You can, if absolutely necessary, swap spare disks between the controllers; it's all software ownership, so you don't need to worry about how they are cabled to each controller, so long as it's cabled correctly! However, remember that an aggregate cannot be shrunk, so if you have mis-assigned the disks, you may need to remove the aggregate and start again. So planning is key!