ONTAP Hardware

Efficiently allocating new disks between controllers, and RAID Group size?

strattonfinance
5,459 Views

Hi all,

We're in the process of adding some extra disks to our FAS2050A and I'm having a tough time figuring out the "best" way to allocate these disks (and hand-in-hand with this, choose the "best" RAID group sizes).

Current system is a 2050A with 20 x 144GB disks - 16 data/parity + 1 spare presently allocated to controller #1 (currently our "active" controller), 2 data/parity + 1 spare to controller #2 (currently acting only as a "passive" controller).

Additional disks are 28 x 144GB in two DS14MK4 shelves.

As noted above, I'm trying to plan how we spread these disks across the two controllers, taking into account both our current setup and also possible future expansion.

A few possible configs I've come up with - all RGs would use RAID-DP:

Option 1:

Controller #1:

Data: 16 disks (internal shelf) in one 16-disk RG

Spare: 2 disks (internal shelf)

Controller #2:

Data: 28 disks (DS14MK4 shelves) in two 14-disk RGs or one 28-disk RG

Spare: 2 disks (internal shelf)

Option 2:


Controller #1:

Data: 18 disks (internal shelf) in one 18-disk RG

Spare: 2 disks (internal shelf)

Controller #2:

Data: 26 disks (DS14MK4 shelves) in two 13-disk RGs or one 26-disk RG

Spare: 2 disks (DS14MK4 shelves)

Some questions arising out of this:

1) Any other configurations I should consider?

2) Thoughts on RG size for controller #2 - 13/14 disks is below the default, 26/28 is approaching the maximum. Obviously the choice is a trade-off of performance & efficiency vs reliability, but can anyone offer some insight into the "best" choice here?

3) Can someone clarify something for me to do with RG sizes and performance please? If we were comparing performance of 2 x 14 disk RGs vs 1 x 28 disk RG, would the former (ignoring parity disks) only give us the performance of a 14-disk stripe (the size of each RG), or would it give us the performance of a 28-disk stripe (the size of the two RGs combined)?

4) The reason for considering option 1, where controller #1 only uses 16 data/parity disks and controller #2 has its spares in the internal shelf, is to do with future expansion.

Assuming we were to add another 28 disks in the future (2 more DS14MK4s) and wanted to add these disks to existing aggregates, then I believe option #1 would be better. By adding a whole shelf to controller #1 and reclaiming the two internal-shelf spares assigned to controller #2, we could create a 2 x 16-disk RG aggregate with 2 internal-shelf spares on controller #1, giving us balanced RG sizes and following the best-practice recommendation of assigning whole shelves to a single controller. Likewise, the other additional shelf could be added to controller #2 and would provide 2 replacement spares plus room to expand controller #2's RGs to 2 x 20, or 1 x 28 and 1 x 12. We would be mixing SAS and FC in one of the controller #1 RGs though... I know we can do this, but I'm not sure if it has performance implications?

On the flip side, I think option 2 is a better design for us now - it maintains the best-practice recommendation of assigning whole shelves to a single controller with the present setup, and does a slightly better job of balancing the number of disks per controller. It does make further expansion - e.g. adding another 28 disks - trickier though, as we'd need to either go for unbalanced RGs on controller #1 (18 + 14) or break the assign-whole-shelves-to-a-single-controller best-practice recommendation (2 x 18-disk RGs on controller #1, with the second RG taking a whole shelf plus part of another).

I'm guessing the "right" choice here probably comes down to our specific environment + timeframe for additional expansion, but does anyone have any other thoughts on the matter? Or am I over-thinking this, with there being very little real-world difference between the two options?

Thanks all, really appreciate any advice that is forthcoming.

Cheers,

Matt

7 REPLIES

ekashpureff

Matt -

Option #3 ?

Ctrlr1/Ctrlr2:

24 disks each.

22 disks each in 22 disk raid groups.

Disks split evenly across all shelves.

Use the 'disk replace' command to juggle out the existing disks to the new shelves.

Consider the hot spot created when adding the few disks to the existing aggr - maybe do a 'reallocate'.


I hope this response has been helpful to you.

At your service,


Eugene E. Kashpureff
ekashp@kashpureff.org
Fastlane NetApp Instructor and Independent Consultant
http://www.fastlaneus.com/ http://www.linkedin.com/in/eugenekashpureff

(P.S. I appreciate points for helpful or correct answers.)

strattonfinance

Hi Eugene,

> Option #3 ?

> Ctrlr1/Ctrlr2:

> 24 disks each.

> 22 disks each in 22 disk raid groups.

> Disks split evenly across all shelves.

> Use the 'disk replace' command to juggle out the existing disks to the new shelves.

So what you are proposing is that we split all shelves evenly between each controller - i.e., half of internal shelf, half of FC shelf #1 and half of FC shelf #2 on controller #1, and the other half of each of these shelves on controller #2?

If so, isn't this violating one of the best practice recommendations re allocating whole shelves to a single controller?

> Consider the hot spot created when adding the few disks to the existing aggr - maybe do a 'reallocate'.

Yes, we will do a reallocate if we add any disks to the existing aggregate on controller #1 - thanks though.

Thanks for the input.

Cheers,

Matt

shane_bradley

I wouldn't be too fussed about where the disks are physically located; as soon as you start getting failures they will start moving around anyway. What are you trying to achieve? Given your shelf count, you're not going to buy any more redundancy.

As for RG sizes, I would stick to smaller ones if you can (i.e. 12-18 disks). It seems to be the sweet spot from what I've seen.

The RAID group layout and the disk assignment are dependent on how you want to use the filers. I would try to lay out my data/disks equally between both controllers; it allows you to better utilise your storage investment.

Option 2 seems fine. I would probably be tempted to use 2 x 13-disk RGs rather than 1 x 26, but that's just me. I also wouldn't be too fussed about which disks were being used where; assuming the SAS and FC are both 15k drives, you'll be fine.

strattonfinance

Hi Shane,

> I wouldn't be too fussed about where the disks are physically located; as soon as you start getting failures they will start moving around anyway. What are you trying to achieve? Given your shelf count, you're not going to buy any more redundancy.

I was under the impression that allocating whole shelves to a controller (so all disks in each shelf are owned by only one controller) was best practice, and was simply trying to follow that. Is that incorrect? Or just not an "important" best practice?

> As for RG sizes, I would stick to smaller ones if you can (i.e. 12-18 disks). It seems to be the sweet spot from what I've seen.

> The RAID group layout and the disk assignment are dependent on how you want to use the filers. I would try to lay out my data/disks equally between both controllers; it allows you to better utilise your storage investment.

> Option 2 seems fine. I would probably be tempted to use 2 x 13-disk RGs rather than 1 x 26, but that's just me. I also wouldn't be too fussed about which disks were being used where; assuming the SAS and FC are both 15k drives, you'll be fine.

OK, so even going so far as to mix SAS and FC (both are 15k RPM) in the same RG is also OK? I know NetApp allows it, but wasn't sure if it had any performance (or other) implications.

Also, are you possibly able to clarify the following from my OP for me?

> Can someone clarify something for me to do with RG sizes and performance please? If we were comparing performance of 2 x 14-disk RGs vs 1 x 28-disk RG, would the former (ignoring parity disks) only give us the performance of a 14-disk stripe (the size of each RG), or would it give us the performance of a 28-disk stripe (the size of the two RGs combined)?

Thanks for your input, greatly appreciated.

Cheers,

Matt

shane_bradley

Allocating full shelves is probably best practice, and if you can I would suggest you do, but I've seen and done half shelves before with no discernible issues. If best practice says do it X way, then always try to do it that way.

FC and SAS drives are effectively the same physical spindle in most cases, so I doubt there would be any problems.

With your RG question: in one you've got 2 x 12-disk data stripes and in the other you've got 1 x 26-disk data stripe, so theoretically the single stripe should have the greater throughput. There is a lot more to the discussion than that, though - failure domains is one part of it; the chance of a 3-disk failure among 28 disks is higher than the chance of 3 disks failing within either of the 14-disk groups.
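The failure-domain point can be made concrete with a back-of-the-envelope calculation. This is only a rough sketch under a deliberately naive model (three simultaneous, equally likely disk failures; it ignores rebuild windows, hot spares, and the fact that real failures are spread over time):

```python
from math import comb

# Naive model: 3 of the 28 disks fail "at once", all triples equally likely.
TOTAL = 28

# One 28-disk RAID-DP group: any 3 concurrent failures exceed the
# double-parity protection, so data loss is certain in this model.
p_one_rg = comb(28, 3) / comb(TOTAL, 3)

# Two 14-disk groups: data loss only occurs if all 3 failures land
# in the same 14-disk group (2 groups to choose from).
p_two_rgs = 2 * comb(14, 3) / comb(TOTAL, 3)

print(f"1 x 28: {p_one_rg:.3f}")   # 1.000
print(f"2 x 14: {p_two_rgs:.3f}")  # 0.222
```

Roughly a 4-5x difference in this toy model, which is the intuition behind preferring smaller RAID groups even though either layout survives any double failure.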

strattonfinance

Hi Shane,

> Allocating full shelves is probably best practice, and if you can I would suggest you do, but I've seen and done half shelves before with no discernible issues. If best practice says do it X way, then always try to do it that way.

> FC and SAS drives are effectively the same physical spindle in most cases, so I doubt there would be any problems.

OK, thanks for the clarifications.

> With your RG question: in one you've got 2 x 12-disk data stripes and in the other you've got 1 x 26-disk data stripe, so theoretically the single stripe should have the greater throughput. There is a lot more to the discussion than that, though - failure domains is one part of it; the chance of a 3-disk failure among 28 disks is higher than the chance of 3 disks failing within either of the 14-disk groups.

Thanks for the extra info, but I'm still not clear on the performance aspects of this - sorry. With, e.g., 2 x 13 disk stripes, are we losing half of our performance compared to 1 x 26 disk stripe? Or will both options deliver essentially the same performance?

Thanks,
Matt

shane_bradley

With the 1 x 28-disk RAID group you'd have more available bandwidth (so faster), given you'd have 24 data drives in the 2 x 14-disk RGs vs 26 in the 1 x 28-disk RG - i.e. 2 more drives to spread I/O across.

Personally I like to keep them around 16; it just seems the most effective use of disk while giving you decent reliability.

So basically, theoretically the 1 x 28 RG should be faster, but I doubt it would be noticeably faster - and not enough to justify the increased risk.
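The drive counts above can be checked with a quick back-of-the-envelope calculation. This is just a sketch: it assumes RAID-DP's two parity disks (row + diagonal parity) per RAID group, and `data_spindles` is an illustrative helper, not an ONTAP command:

```python
# RAID-DP reserves 2 parity disks per RAID group.
RAID_DP_PARITY = 2

def data_spindles(rg_sizes):
    """Total data disks across a list of RAID group sizes."""
    return sum(size - RAID_DP_PARITY for size in rg_sizes)

for label, layout in {
    "2 x 14": [14, 14],
    "1 x 28": [28],
    "2 x 13": [13, 13],
    "1 x 26": [26],
}.items():
    print(f"{label}: {data_spindles(layout)} data spindles")
# 2 x 14: 24, 1 x 28: 26, 2 x 13: 22, 1 x 26: 24
```

So the single large group nets two extra data spindles in both the 14/28 and 13/26 comparisons - a modest bandwidth edge, as noted above, not anything like a 2x difference.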
