2010-08-22 05:46 AM
I know the benefits of having a single aggregate and maximizing the number of disks for capacity and performance. Now I reckon you can create multiple raid groups per aggregate, depending on the RG size and the number of disks in the aggregate (ndisks@size).
2010-08-22 09:01 AM
Re 1: Typically when your aggregate includes more drives than the recommended RG size.
Re 2: Each RG has its own 'full' set of drives, i.e. some data drives, one parity drive per RG & one dual-parity drive per RG (provided this is a RAID-DP setup). Only hot spares are shared amongst multiple RGs (& in fact across multiple aggregates).
Re 3: If you create an aggregate & then create a volume (even a tiny one), it will get spread across all spindles in the aggregate, i.e. it will span multiple RGs (if there is more than one RG in the aggregate).
2010-08-22 09:07 AM
My two cents...
What reason would you want to create more than 1 raid group in an aggr?
To grow an aggregate, we often don't have a choice once we hit the maximum raid group size: to get more aggregate I/O (more data drives) we have to create multiple raid groups to get more spindles into the aggregate. There are also resiliency and rebuild-time reasons not to run raid groups at their maximum size, since smaller groups rebuild faster (less of an issue now with rapid RAID recovery).

We try to create raid groups that are all the same size, or within 1 drive of each other, for better performance. For example, if you create a maximum-size 1TB SATA RAID-DP aggregate on 7.3 with 23 drives using "aggr create aggr_1t 23", it will create two raid groups: rg0=12D+2P, rg1=7D+2P. We often run "aggr create aggr_1t -r 12 23" instead, which creates rg0=10D+2P, rg1=9D+2P. That is the same number of data and parity drives, but a better-performing layout with more evenly sized raid groups.

So I don't like using the default rg size for aggregates; I figure out the even layout first and then set the raid size accordingly. The default SATA raid group size is 14 drives (12D+2P) with a max of 16, and the FC/SAS default is 16 (14D+2P) with a max of 28.
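The layout rule described above (fill each raid group to the raid-group size, with the last group taking the remainder) can be sketched in a few lines of Python. This is an illustration only, not NetApp code, and the helper name is made up:

```python
# Sketch (not ONTAP code): how disks get carved into raid groups.
# Each group is filled up to rg_size; the last group gets the remainder.

def raid_groups(total_disks, rg_size, parity_per_rg=2):
    """Return a list of (data, parity) tuples, one per raid group (RAID-DP = 2 parity)."""
    groups = []
    remaining = total_disks
    while remaining > 0:
        size = min(rg_size, remaining)
        groups.append((size - parity_per_rg, parity_per_rg))
        remaining -= size
    return groups

# 23 SATA drives at the default rg size of 14 -> uneven groups
print(raid_groups(23, 14))   # [(12, 2), (7, 2)]
# same 23 drives with "aggr create aggr_1t -r 12 23" -> near-even groups
print(raid_groups(23, 12))   # [(10, 2), (9, 2)]
```

Both layouts consume the same 4 parity drives; only the evenness of the data distribution differs.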
If there are multiple RGs, do they each have their own dedicated parity disks (e.g. 2 disks each for RAID-DP), or do all RGs in the aggr share one set of parity disks?
Each raid group is independent for RAID calculations. Writes, however, go across all disks in the aggregate: when a volume writes (at a CP event), it writes to all drives. If we had 2 raid groups of 5D+2P or a single raid group of 10D+2P, we'd have the same number of data drives (10) for spindle I/O, and writes would go across all 10 data drives. While there is some CPU cost per raid group, we don't see a noticeable performance hit from having multiple raid groups. A Flexible Volume is written across all drives in the aggregate.
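The equivalence being claimed here is simple arithmetic, sketched below (illustrative only; the two layout lists are just the examples from this post):

```python
# Sketch: both layouts expose the same number of data spindles, so a CP
# writes across 10 data drives either way (RAID-DP = 2 parity per group).
two_groups = [(5, 2), (5, 2)]     # 2 x (5D+2P)
one_group  = [(10, 2)]            # 1 x (10D+2P)

def data_drives(groups):
    return sum(d for d, p in groups)

def parity_drives(groups):
    return sum(p for d, p in groups)

assert data_drives(two_groups) == data_drives(one_group) == 10
# the extra raid group does cost 2 more parity drives, though:
print(parity_drives(two_groups) - parity_drives(one_group))  # 2
```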
If there are multiple raid groups in the aggr and I create multiple volumes, I can see which raid groups they go on by checking vol status. But does it fill up (stripe) the first raid group and then, when it's full, jump to the next raid group and start striping there?
It writes across all raid groups for better performance and does not tie a volume to a single raid group. An exception: if you have a 75% full aggregate, then add a new raid group and create a new flexvol, the new writes will land on the new drives (though there is no way to tell exactly which drives a flexvol resides on, other than knowing it can use all drives in the aggregate). You can look at perfstat or statit to see the performance impact of having fewer data drives (it may not be high enough to be an issue), or run "reallocate -p" to physically reallocate all flexvols evenly across the aggregate.
I like to figure out the optimal raid group size based on the max aggr size and then let ONTAP create as many raid groups as it needs. I also try to max out aggregate sizes before creating a new aggr, unless it's an edge case where I need to separate I/O across multiple aggregates, or there is a requirement for separate containers for resiliency or security.
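The "figure out the optimal raid group size first" step can be sketched as a small search: try every legal rg size up to the platform maximum and keep the one whose groups are most even, preferring fewer, larger groups (fewer parity drives) on ties. This is a hypothetical helper, not an ONTAP command:

```python
# Sketch (hypothetical, not ONTAP): choose the rg size that gives the most
# evenly sized raid groups for a given disk count, as the post recommends.

def best_rg_size(total_disks, max_rg_size, parity_per_rg=2):
    """Return (rg_size, group_sizes) minimizing the size spread between groups."""
    best = None
    for r in range(parity_per_rg + 1, max_rg_size + 1):
        sizes = []
        remaining = total_disks
        while remaining > 0:
            s = min(r, remaining)
            sizes.append(s)
            remaining -= s
        spread = max(sizes) - min(sizes)
        # prefer: smallest spread, then fewest groups, then largest rg size
        key = (spread, len(sizes), -r)
        if best is None or key < best[0]:
            best = (key, r, sizes)
    return best[1], best[2]

# 23 drives, FC/SAS-style max of 16: recovers the "-r 12" layout above
print(best_rg_size(23, 16))   # (12, [12, 11])
```

For the 23-drive example this reproduces the hand-picked "-r 12" layout (12 + 11 drives, i.e. 10D+2P and 9D+2P).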