Network and Storage Protocols

RAID group size

stevecoopat

We currently have a FAS270c. It will be running ONTAP 7.2.6.1 by the time we add the new disks. Our current RAID-DP group is 15 disks, with 1 global spare. We just purchased 4 more disks. In my research I found that I can increase the RAID group size to as many as 28 disks. So my question is this:

Are there any downsides or snafus to running our aggregate on a single RAID group of 19 (or 28, for that matter) disks?

We do not run databases or Exchange on this filer; it's used solely for CIFS shares.

Thanks for any help on this subject!!

9 Replies

BrendonHiggins

Hi, welcome to the community.

You have not yet said what size disks you have installed.

This thread has loads of good information on RAID group sizes.

http://communities.netapp.com/message/6078

Hope it helps

Bren

stevecoopat

300GB drives

Thank you for the link, I will check it out.

amiller_1

There's no inherent issue with taking an FC RAID group up to 20 disks. With larger RAID groups it's basically a trade-off between space utilization (better with larger RGs) and rebuild times (longer with larger RGs).

I'm generally quite comfortable with FC RAID groups of up to 20 disks, as the rebuild time is still pretty good. What I might even recommend is adding just 3 disks and going up to 2 hot spares, as that will enable Maintenance Center; see tip #2 here for details on what that is.

http://partners.netapp.com/go/techontap/matl/storage_resiliency.html
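
For context, here is a minimal back-of-the-envelope sketch of the trade-off described above, comparing growing the existing 15-disk RAID-DP group to 19 disks against adding only 3 disks and keeping a second hot spare. The ~272GB right-sized capacity assumed for a 300GB FC drive and the rough rebuild rule of thumb are illustrative assumptions, not exact ONTAP figures.

# Back-of-the-envelope comparison for the 16 existing disks (15-disk RAID-DP
# group + 1 spare) plus the 4 new 300GB FC disks. The ~272GB right-sized
# capacity per 300GB drive is an assumption; WAFL/snap reserves are ignored.

RIGHT_SIZED_GB = 272   # assumed usable capacity of a 300GB FC disk
RAID_DP_PARITY = 2     # RAID-DP: one parity disk + one double-parity disk

def summarize(label, rg_disks, spares):
    data_disks = rg_disks - RAID_DP_PARITY
    usable_tb = data_disks * RIGHT_SIZED_GB / 1000
    # More disks in the group means more data to read during a reconstruction,
    # so a bigger RG also means a longer rebuild (crude rule of thumb).
    print(f"{label}: {rg_disks}-disk RG, {spares} spare(s), "
          f"{data_disks} data disks, ~{usable_tb:.1f} TB before reserves")

summarize("Option A - raidsize 19, add all 4 disks", rg_disks=19, spares=1)
summarize("Option B - add 3 disks, keep 2 spares", rg_disks=18, spares=2)
# Option B gives up one data disk (~0.27 TB) in exchange for the second hot
# spare that Maintenance Center needs.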

radek_kubka
"What I might even recommend is adding just 3 disks and going up to 2 hot spares, as that will enable Maintenance Center"

Hi Andrew,

In practical terms: do you think it is worth the hassle (on a small system) of convincing people to accept even less usable capacity for the sake of Maintenance Center?

Say we look at a FAS2050A with internal drives only: 10 drives per head, minus 2 for parity, minus 2 hot spares = 6 data disks left (12 in total across both heads).

If we run that scenario through Synergy with 20x 450GB SAS drives and leave the default snap reserve, we get:

Marketing Raw - 9TB

Net Usable - 3.8TB (base 10) or 3.4TB (base 2, i.e. what will genuinely be seen from the host side)

Tough sell... (a rough sketch of the arithmetic is below)

Regards,

Radek
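
For what it's worth, here is a rough reconstruction of how a capacity tool arrives at numbers in that ballpark. The right-sized capacity assumed for a 450GB SAS drive and the reserve percentages below are assumptions for illustration; Synergy applies the exact ONTAP right-sizing tables and defaults.

# Rough usable-capacity walk-through for a FAS2050A with 20x 450GB SAS
# internal drives, 2 parity + 2 hot spares per head (6 data disks per head).
# Right-sizing and reserve percentages below are assumptions for illustration.

DATA_DISKS = 12            # 6 per head x 2 heads
RIGHT_SIZED_GIB = 418      # assumed right-sized capacity of a 450GB drive
WAFL_RESERVE = 0.10        # WAFL filesystem reserve
AGGR_SNAP_RESERVE = 0.05   # assumed default aggregate snap reserve
VOL_SNAP_RESERVE = 0.20    # default volume snap reserve left in place

raw_marketing_tb = 20 * 450 / 1000          # ~9 TB "marketing raw"
usable_gib = DATA_DISKS * RIGHT_SIZED_GIB
usable_gib *= (1 - WAFL_RESERVE)
usable_gib *= (1 - AGGR_SNAP_RESERVE)
usable_gib *= (1 - VOL_SNAP_RESERVE)

print(f"Marketing raw: ~{raw_marketing_tb:.0f} TB")
print(f"Net usable:    ~{usable_gib / 1024:.1f} TiB "
      f"(~{usable_gib * 1.0737 / 1000:.1f} TB base 10)")
# Lands in the same ballpark as the Synergy figures quoted above (~3.4 TB).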

amiller_1

So....it really depends on the customer.

If they're not too technical or there are more important things to discuss, I'll just recommend 1 hot spare when there's only 1 shelf or just 2020/2040 internal disks (i.e. 12 or 14 disks). Once a 2nd shelf is added, I'll usually recommend 2 hot spares, noting that they can always add the 2nd hot spare later.

But.....if they are more technical and/or there's benefit in helping them understand all the "NetApp-y goodness under the covers" I'll wander into Maintenance Center a bit.

Side note: I personally try to stay away from internal-disk-only 20x0 HA configurations.....the usable space calcs just end up being really painful to explain/justify. Unless the customer has a specific need for HA, I'll often go toward a single head with 4-hour support instead. Sharing hot spares between heads (dynamic software disk ownership would seem to be the most likely way to do it) would be one way to help there a bit.

radek_kubka

"Sharing hot spares between heads (dynamic software disk ownership would seem to be the most likely way to do it) would be one way to help there a bit."

Just double-checking: this actually is not doable, is it?

I've always scratched my head over what's stopping the NetApp folks from implementing this. The increased efficiency on smaller systems would be substantial.

Yes, probably everyone prefers to be busy pushing the spindle count beyond 2,000 disks on a FAS6080, rather than being bothered with how to squeeze another TB or two out of a poor FAS2050! 😉

Regards,
Radek

amiller_1

Nope....not currently doable (sharing hot spares between heads). It would be wonderful for the smaller systems though; it would help lower the number of questions along the lines of "this is all the space I have?"

And...I'm not sure why it's not implemented (I don't doubt there are technical difficulties, but for the channel where I work and the 20x0 systems, it would help a LOT).

stevecoopat

Thank you, Andrew. I will read through that doc as well.

amiller_1

And.....quite welcome.
