ONTAP Hardware

RAID group size recommendation

janjacobbakker

Hi Folks,

I'm trying to design a storage solution.

A FAS3020 with 3 shelves fully populated with 42x 300GB FC disks.

By default the RAID group size is 16 (14 data + 2 parity); the max RAID group size is 28.

Does anyone have some best practice information? The NOW site doesn't have much info on that.

Kind regards

71 Replies

JDMARINOALR

Hi There...

Wondering if anyone has updated data with 900GB SAS drives. I am looking to create a 23-disk aggregate on a 3210 running 8.0.3. Given the aggregate max (50TB), this should be a good config. I don't really want to waste 4 disks in this config to parity.

thanks!

radek_kubka

Hi John,

There are no massive changes regarding RG size recommendations:

Theoretically it is possible to have a RAID-DP aggregate with 23x (or even 28x) 900GB drives in one RG; however, best practice suggests keeping the RG size no bigger than 20.

Regards,
Radek

ventap1111

I'll try to do your question justice! I was looking at this from two aspects: performance and long-term capacity. While the system does indeed have 42 disks today, tomorrow it may need additional capacity. So, by choosing a 15-disk RAID group, I'm not only assuring myself the most efficient RG design, I'm also committing to the maximum amount of space.

ERIC_TSYS

Slightly off topic but nevertheless important: I have read in this thread, and heard in several conversations with techies, the recommendation to end up with 1 spare disk whilst using RAID-DP.

I don't get it: why cater for RAID-DP if you're not going to cater for enough spares?

I have seen/heard this statement so many times now that I am concerned I've missed out on something. Can someone explain the logic please?

Cheers,

Eric

Hi Eric,

What are you not sure about?

"Good practice" is 2 spare drives, per drive type, per controller… however, technically it's not a necessity… the smaller the setup, the more this practice is compromised in favour of capacity over "ultra" resilience…

But if you're not sure, drop us a note back and I'll advise if I can…

Hi Eric,

The idea is that the system has NBD or 4-hour parts delivery, so in the event that a second disk fails, a spare disk will have arrived at your door.

Best practice is to have 2 or more spares, but for smaller systems and clients they generally leave 1 spare, to get the most usable storage from the disks they have available, without compromising data protection.

Just a question: are you based at TSYS in Cyprus?

ERIC_TSYS

Hi Paul,

I just struggle with the logic of having 2 parity disks and not having enough disks to cater for the protection you've put in place.

I know that for smaller systems it may come to this, but in this case there are heaps of disks to allow for 2 spares.

I agree that 4 hours or NBD CAN be good enough, but if a disk failed I'd like to have it rebuilt and still have 1 spare disk whilst the new disk is on its way. Too many times I've seen disks fail out of hours, delayed deliveries, and disks fail on weekends when nobody is there to fix the issue. In cases like these you're exposed with 1 spare disk.

Cheers,

Eric (based in the UK, I wish I was in Cyprus!)

radek_kubka
I just struggle with the logic of having 2 parity disks and not having enough disks to cater for the protection you've put in place.

Well, you can reverse the logic as well: why 'lose' 2 disks for parity and then another 2 for spares (per controller, per disk type)? What's the likelihood of simultaneously losing 2 drives in the aggregate and one hot spare?

As mentioned already in this thread, 2 hot spares are required for the so-called Maintenance Center (http://media.netapp.com/documents/tr-3437.pdf, p10), but on smaller systems it may be deemed a luxury.
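For reference, a minimal 7-Mode sketch of the relevant knobs (option names as I recall them from that era, so verify against your release's documentation):

  options disk.maint_center.enable on   # enable Maintenance Center (needs at least 2 hot spares)
  aggr status -s                        # check which hot spares are currently available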

Regards,

Radek

ERIC_TSYS

I agree it can be seen as a luxury on a smaller system, but then again, smaller system or not: I'd suggest to you that the data is what is important, not the system it sits on.

Your most important data could be sitting on a 10GB file system rather than a 10TB file system. Hence which FAS model the data is sitting on is a bit of a moot point for me.

Also, I don't see RAID protection as a cost; it's an investment against loss of your most important asset.

Lastly, going back to my example: a disk fails on a Friday night, you've got 1 spare disk, it rebuilds. Now you have no spare over the weekend. How does that sit with you?

I know where it sits with me: for the sake of 1 disk it's not worth the risk.

I guess from your answers it's a matter of opinion/risk assessment rather than a technical issue. That's fine, no worries.

Cheers,

Eric

Hi Eric,

You're spot on… it's a balance between capacity/performance needs versus risk aversion…

And as you say, for the cost of one disk it may not be worth the risk… however some people, especially at the FAS2020 end of the scale, need the capacity just as much…

But you're right, it's a matter of opinion really and, as you said, based on risk assessment…

The lovely flexibility of NetApp ☺

Sadly Paul is also based in the UK… it's the other guy who's in Cyprus!!!

jwhite

Hello,

NetApp revised the best practice sparing policy last year to make it more logical and applicable to the range of configurations we see in our customer base.  There is only a single configuration in which we recommend using only a single spare drive - and that is a FAS2000 series (aka "entry" level systems) that is using only internal drives (no external storage attached).  As some have pointed out already, the number of spares to keep on hand varies depending on what you are concerned about with the configuration.  Here is the updated spares policy - the "official" NetApp best practice sparing policy:

------------------------------------------------------------------------------------------------------------------

HOW MANY HOT SPARES SHOULD I KEEP IN MY STORAGE CONFIGURATION?

Recommendations for spares vary by configuration and situation. In the past, NetApp has based spares recommendations strictly on the number of drives attached to a system. This is certainly an important factor, but it's not the only consideration. NetApp storage systems are deployed in a wide range of configurations. This warrants defining more than a single approach to determining the appropriate number of spares to maintain in your storage configuration.

Depending on the requirements of your storage configuration, you can choose to tune your spares policy toward one of the following approaches:

  • Minimum spares: In configurations where drive capacity utilization is a key concern, you might want to use only the minimum number of spares. This option allows you to survive the most basic failures. If multiple failures occur, manual intervention might be necessary to ensure continued data integrity.
  • Balanced spares: This configuration is the middle ground between minimum and maximum. It assumes that you will not encounter the worst-case scenario and provides sufficient spares to handle most failure scenarios.
  • Maximum spares: This option makes sure that enough spares are on hand to handle a failure situation that demands the maximum number of spares the system could consume at a single time. The term maximum doesn't mean the system can't operate with more than this recommended number of spares; you can always add additional hot spares within the spindle limits as you deem appropriate.

In the table below, consider each "approach" as the starting number of spares that is then modified by the "special considerations" as appropriate.

For RAID-DP configurations, consult the following table for the recommended number of spares.

Recommended Number of Spares

  Minimum:  two per controller
  Balanced: four per controller
  Maximum:  six per controller

Special Considerations

  • Entry platforms: Entry-level platforms using only internal drives can be reduced to using a minimum of one hot spare.
  • RAID groups: Systems containing only a single RAID group do not warrant maintaining more than two hot spares for the system.
  • Maintenance Center: Maintenance Center requires a minimum of two spares to be present in the system.
  • >48-hour lead time: For remotely located systems, there is an increased chance that they might encounter multiple failures and completed reconstructions before manual intervention can occur. Spares recommendations should be doubled for these systems.
  • >1,200 drives: For systems using more than 1,200 drives, an additional two hot spares should be added to the recommendations for all three approaches.
  • <300 drives: For systems using less than 300 drives, you can reduce spares recommendations for a balanced or maximum approach by two.

Selecting any one of the three approaches (minimum, balanced, or maximum) is considered to be the best practice recommendation within the scope of your system requirements. The majority of storage architects will probably choose the balanced approach, although customers who are extremely sensitive to data integrity might warrant taking a maximum spares approach. Given that entry platforms use small numbers of drives, a minimum spares approach would be reasonable for those configurations.

Additional notes about hot spares:

  • Spares recommendations are for each drive type installed in the system.
  • Larger capacity drives can serve as spares for smaller capacity drives (they will be downsized).
  • Slower drives replacing faster drives of the same type affect RAID group and aggregate performance. For example, if a 10k rpm SAS drive (DS2246) replaces a 15k rpm SAS drive (DS4243), this results in a suboptimal configuration.
  • Although FC and SAS drives are equivalent from a performance perspective, the resiliency features of the storage shelves in which they are offered are very different. By default, Data ONTAP uses FC and SAS drives interchangeably. This can be prevented by setting the RAID option raid.disktype.enable on (see the example below).
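A one-line 7-Mode sketch of that option, for reference (check the options man page for your release):

  options raid.disktype.enable on   # stop Data ONTAP treating FC and SAS as interchangeable disk types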

NetApp does not discourage administrators from keeping cold spares on hand. NetApp recommends removing a failed drive from a system as soon as possible, and keeping cold spares on hand can speed the replacement process for those failed drives. However, cold spares are not a replacement for keeping hot spares installed in a system.

Cold spares can replace a failed part (speeding the return/replace process), but hot spares serve a different purpose: to respond in real time to drive failures by providing a target drive for RAID reconstruction or rapid RAID recovery actions. It's hard to imagine an administrator running into a lab to plug in a cold spare when a drive fails. Cold spares are also at greater risk of being “dead on replacement,” as drives are subjected to the increased possibility of physical damage when not installed in a system. For example, handling damage from electrostatic discharge can occur when retrieving a drive to install in a system.

Given the different purpose of cold spares versus hot spares, you should never consider cold spares as a substitute for maintaining hot spares in your storage configuration.

The RAID option raid.min_spare_count can be used to specify the minimum number of spares that should be available in the system. This is effective for Maintenance Center users, because when set to the value 2 it notifies the administrator if the system falls out of Maintenance Center compliance. NetApp recommends setting this value to the resulting number of spares that you should maintain for your system (based on this spares policy) so that the system notifies you when you have fallen below the recommended number of spares.
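As a hedged illustration of that last paragraph on a 7-Mode console (the value 2 matches the Maintenance Center case mentioned above; use the number this policy yields for your system):

  options raid.min_spare_count 2   # notify when available spares fall below 2
  aggr status -s                   # confirm which hot spares the system currently sees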

aflores_ibm

I would add some details to the information provided above. You may set raid.min_spare_count to 0, 1, 2 or more, but if you do so, I'd recommend changing the following as well: raid.timeout. This option is usually set to 24, which represents the number of hours the system will run before preemptively shutting itself down once it no longer meets the RAID/disk options set.

In other words, if your number of available spares (aggr status -s | vol status -s) goes below the number of required spares, then you will have your system running in degraded mode until you meet the stated requirements. If you are unable to satisfy those requirements before the time limit has passed, the system will auto-shutdown to prevent any potential data loss.

That being said, you should calculate your own requirements based on:

  • Type of disks
  • Type of RAID
  • Size of RAID
  • Data risk assessment:
      Can the system suffer a shutdown without impact to business?
        Yes: for how long? At what time of the day/night?
        No: critical system
  • What type of hardware warranty and support exists or needs to be set up: 24/7 with 4-hour response, or 8am-5pm business days only (might still be critical)

This is only a high-level overview. At this point, the risks would have to be identified and a series of contingencies provided for review and approval by the stakeholders, based on the initial requirements they stated.

Hope this helps as well.

Regards,

Allain Flores
Storage Consultant
Enterprise Storage Management - CDC
IBM Global Services


aborzenkov
In other words, if your number of available spares (aggr status -s | vol status -s) goes below the number of required spares, then you will have your system running in degraded mode until you meet the stated requirements. If you are unable to satisfy those requirements before the time limit has passed, the system will auto-shutdown to prevent any potential data loss.

Sorry, but this is incorrect. Degraded mode means a RAID group running without protection (i.e., a single disk missing in RAID4 or two disks missing in RAID-DP). The number of spare disks does not contribute to degraded status, and the system will not shut down if the number of spares is low.

aflores_ibm

Sorry, I got sidetracked on projects.

Clarification on raid.timeout from the command manual:

raid.timeout
Sets the time, in hours, that the system will run after a single disk failure in a RAID4 group or a two-disk failure in a RAID-DP group has caused the system to go into degraded mode or double degraded mode, respectively. The default is 24, the minimum acceptable value is 0, and the largest acceptable value is 4,294,967,295. If the raid.timeout option is specified when the system is in degraded mode or in double degraded mode, the timeout is set to the value specified and the timeout is restarted. If the value specified is 0, automatic system shutdown is disabled.

I'd draw attention to the last sentence in regards to the automatic system shutdown...
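For illustration, this is how those settings look on a 7-Mode console (a sketch, with values taken from the manual text above):

  options raid.timeout 24   # default: run 24 hours in (double) degraded mode before automatic shutdown
  options raid.timeout 0    # 0 disables the automatic shutdown entirely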

Regards,

Allain Flores

Storage Consultant

Enterprise Storage Management - CDC

IBM Global Services


jwhite

That is very true --- although the system will nag you about being below the minimum spare count, it will not shut down the system because you don't have enough spares.  Degraded Mode describes a system that has one or more failed drives and reflects the fact that system resources are being used to repair the drive (be it a Rapid RAID Recovery or a RAID reconstruction).  Degraded Aggregate describes an aggregate that contains one or more failed drives.  Degraded RAID group describes a RAID group that contains one or more failed drives.  That is the common usage of "Degraded" as it pertains to the storage subsystem today.
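For anyone wanting to check these states on a live system, a small 7-Mode sketch (commands as documented for that release family; output abridged here):

  sysconfig -r     # per-RAID-group status (normal, degraded, reconstructing) plus failed and spare disks
  aggr status -r   # the same RAID-level detail, per aggregate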

infinitiguy

Are there any documents that talk about RAID group size with 64-bit aggregates and ONTAP 8?

We have a pair of 3160s running 7.3.4 that I'm planning on upgrading to ONTAP 8 and using 64-bit aggregates.  There is nothing on these filers now, so I have no data to maintain or worry about during the upgrade process.

We have 5 shelves x 14 disks of 750GB SATA and 3 shelves of 1TB SATA.  We're going to grow the 5 shelves to 6 and the 3 shelves to 6 throughout this upgrade process (once data starts moving).

What kind of limits will I be looking at, and with the much higher aggregate ceiling, what does that do to recommended/suggested RAID groups?

Also, I'm pretty new to NetApp in terms of doing disk layout/aggregate creation.  Do the RAID groups determine the parity disk count (does each RAID group have 2 parity)?  What about spare disks: can they be added to any RAID group in the loop?

jwhite

Hello,

There has been a lot of work put into aggregate and RG sizing recommendations at NetApp.  The documents that cover this information are currently NDA --- if you are covered by an NDA with NetApp you can request these documents from your account team (reference TR3838, the Storage Subsystem Configuration Guide or the Storage Subsystem Technical FAQ --- both cover the RAID group sizing policy).  For those who are not covered by NDA with NetApp --- since the RG sizing policy itself is not confidential I will paste the text below (from the FAQ --- hence the question/answer format).  This is the official NetApp position on RG sizing.

<policy>

SHOULD ALL AGGREGATES ALWAYS USE THE DEFAULT RAID GROUP SIZE?

The previous approach to RAID group and aggregate sizing was to use the default RAID group size. This no longer applies, because the breadth of storage configurations being addressed by NetApp products is more comprehensive than when the original sizing approach was determined. Sizing was also not such a big problem with only 32-bit aggregates, which are limited to 16TB. You can fit only so many drives and RAID groups into 16TB. The introduction of 64-bit aggregates delivers the capability for aggregates to contain a great number of drives and many more RAID groups than was possible before. This compounds the opportunity for future expansion as new versions of Data ONTAP® support larger and larger aggregates.

Aggregates do a very good job of masking the traditional performance implications that are associated with RAID group size. The primary point of this policy is not bound to performance concerns but rather to establishing a consistent approach to aggregate and RAID group sizing that:

  • Facilitates ease of aggregate and RAID group expansion
  • Establishes consistency across the RAID groups in the aggregate
  • Reduces parity tax to help maximize “usable” storage
  • Reduces CPU overhead associated with implementing additional RAID groups that might not be necessary
  • Considers both the time it takes to complete corrective actions and how that relates to actual reliability data available for the drives

These recommendations apply to aggregate and RAID group sizing for RAID-DP®. RAID-DP is the recommended RAID type to use for all NetApp storage configurations. In Data ONTAP 8.0.1, the maximum SATA RAID group size for RAID-DP has increased from 16 to 20.

For HDD (SATA, FC, and SAS) the recommended sizing approach is to establish a RAID group size within the range of 12 (10+2) to 20 (18+2) that achieves an even RAID group layout (all RAID groups contain the same number of drives). If multiple RAID group sizes achieve an even RAID group layout, NetApp recommends using the higher RAID group size value within the range. If drive deficiencies are unavoidable, as is sometimes the case, NetApp recommends that the aggregate should not be deficient by more than a number of drives equal to one less than the number of RAID groups; otherwise you would just pick the next lowest RAID group size. Drive deficiencies should be distributed across RAID groups so that no single RAID group is deficient by more than a single drive.

Given the added reliability of SAS and Fibre Channel (FC) drives, it might sometimes be justified to use a RAID group size that is as large as 24 (22+2) if this aligns better with physical drive count and storage shelf layout.

SSD is slightly different. The default RAID group size for SSD is 23 (21+2), and the maximum size is 28. For SSD aggregates and RAID groups, NetApp recommends using the largest RAID group size in the range of 20 (18+2) to 28 (26+2) that affords the most even RAID group layout, as with the HDD sizing approach.

</policy>
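To make the even-layout rule concrete, here is a worked example of my own (an illustration, not taken from the FAQ): suppose a controller has 96 drives available for one aggregate after spares are set aside. Within the 12-20 range, the RG sizes that divide 96 evenly are 12 (eight RGs) and 16 (six RGs), so the policy picks the higher value, 16. On a 7-Mode system that could be expressed as:

  aggr create aggr1 -t raid_dp -r 16 96   # six RAID groups of 16 (14 data + 2 parity)
  aggr options aggr1 raidsize 16          # or adjust the RG size of an existing aggregate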

In addition to the policy, we publish tables for maximum size aggregates (which are also not confidential but are in the NDA docs):

<64-bit aggregates>

HOW MANY DRIVES CAN BE USED IN A MAXIMUM SIZE 64-BIT AGGREGATE?

64-bit aggregates are supported with Data ONTAP 8.0 and later. Each platform has different maximum aggregate capacities for 64-bit aggregates. The following recommendations are based on attempting to provide the optimal RAID group layout, as explained in the answer to the question “Should all aggregates always use the default RAID group size?” earlier.

The column descriptions for the following tables are as follows:

  • “Data Drives” is the number of data drives that fit within the maximum aggregate capacity (based on usable drive capacity).
  • “RG Size” is the recommended RAID group size to use for the configuration.
  • “Number of RGs” is the resulting number of RAID groups the aggregate will contain.
  • “Drive Def.” is the number of drives by which the configuration is deficient from achieving an even RAID group layout.
  • “Data + Parity” is the total number of drives used for the aggregate configuration.

The top entries in the following tables show the recommendations for aggregate configurations that are using the maximum number of data drives. In many cases it is better to reduce data drives by a small number in order to achieve a better RAID group layout, as indicated by the bottom numbers, in parentheses.

64-Bit Aggregate Recommendations for FAS2040

Data ONTAP 8.0.x Maximum Aggregate Capacity 30TB

Capacity  Type  Data Drives  RG Size  Number of RGs  Drive Def.  Data + Parity
100GB     SSD   86 (84)      24 (23)  4 (4)          2 (0)       94 (92)
300GB     FC    115 (112)    15 (18)  9 (7)          2 (0)       133 (126)
450GB     FC    75 (75)      17 (17)  5 (5)          0 (0)       85 (85)
600GB     FC    56 (54)      16 (20)  4 (3)          0 (0)       64 (60)
300GB     SAS   115 (112)    15 (18)  9 (7)          2 (0)       133 (126)
450GB     SAS   75 (75)      17 (17)  5 (5)          0 (0)       85 (85)
600GB     SAS   56 (54)      16 (20)  4 (3)          0 (0)       64 (60)
500GB     SATA  74 (72)      17 (20)  5 (4)          1 (0)       84 (80)
1TB       SATA  37 (36)      15 (20)  3 (2)          2 (0)       43 (40)
2TB       SATA  18 (18)      20 (20)  1 (1)          0 (0)       20 (20)

64-Bit Aggregate Recommendations for FAS/V3040, 3140, 3070, 3160, 3210, and 3240

Data ONTAP 8.0.x Maximum Aggregate Capacity 50TB

Capacity  Type  Data Drives  RG Size  Number of RGs  Drive Def.  Data + Parity
100GB     SSD   86 (84)      24 (23)  4 (4)          2 (0)       94 (92)
300GB     FC    192 (192)    18 (18)  12 (12)        0 (0)       216 (216)
450GB     FC    125 (119)    20 (19)  7 (7)          1 (0)       139 (133)
600GB     FC    93 (90)      18 (20)  6 (5)          3 (0)       105 (100)
300GB     SAS   192 (192)    18 (18)  12 (12)        0 (0)       216 (216)
450GB     SAS   125 (119)    20 (19)  7 (7)          1 (0)       139 (133)
600GB     SAS   93 (90)      18 (20)  6 (5)          3 (0)       105 (100)
500GB     SATA  123 (119)    20 (19)  7 (7)          3 (0)       137 (133)
1TB       SATA  61 (60)      18 (17)  4 (4)          3 (0)       69 (68)
2TB       SATA  30 (30)      17 (17)  2 (2)          0 (0)       34 (34)

64-Bit Aggregate Recommendations for FAS/V3170, 3270, 6030, 6040, and 6210

Data ONTAP 8.0.x Maximum Aggregate Capacity 70TB

Capacity  Type  Data Drives  RG Size  Number of RGs  Drive Def.  Data + Parity
100GB     SSD   86 (84)      24 (23)  4 (4)          2 (0)       94 (92)
300GB     FC    269 (255)    20 (19)  15 (15)        1 (0)       299 (285)
450GB     FC    175 (170)    18 (19)  11 (10)        1 (0)       197 (190)
600GB     FC    131 (126)    14 (20)  11 (7)         1 (0)       153 (140)
300GB     SAS   269 (255)    20 (19)  15 (15)        1 (0)       299 (285)
450GB     SAS   175 (170)    18 (19)  11 (10)        1 (0)       197 (190)
600GB     SAS   131 (126)    14 (20)  11 (7)         1 (0)       153 (140)
500GB     SATA  173 (170)    18 (19)  11 (10)        3 (0)       195 (190)
1TB       SATA  86 (85)      20 (19)  5 (5)          4 (0)       96 (95)
2TB       SATA  43 (36)      13 (20)  4 (2)          1 (0)       51 (40)

64-Bit Aggregate Recommendations for FAS/V6070, 6080, 6240, and 6280

Data ONTAP 8.0.x Maximum Aggregate Capacity 100TB

Capacity  Type  Data Drives  RG Size  Number of RGs  Drive Def.  Data + Parity
100GB     SSD   86 (84)      24 (23)  4 (4)          2 (0)       94 (92)
300GB     FC    385 (384)    13 (18)  35 (24)        0 (0)       455 (432)
450GB     FC    250 (240)    12 (18)  25 (15)        0 (0)       300 (270)
600GB     FC    187 (180)    19 (20)  11 (10)        0 (0)       209 (200)
300GB     SAS   385 (384)    13 (18)  35 (24)        0 (0)       455 (432)
450GB     SAS   250 (240)    12 (18)  25 (15)        0 (0)       300 (270)
600GB     SAS   187 (180)    19 (20)  11 (10)        0 (0)       209 (200)
500GB     SATA  247 (240)    15 (18)  19 (15)        0 (0)       285 (270)
1TB       SATA  123 (119)    20 (19)  7 (7)          3 (0)       137 (133)
2TB       SATA  61 (60)      18 (17)  4 (4)          3 (0)       69 (68)

Notes for the preceding tables:

  • 100GB SSD capacity first supported with Data ONTAP 8.0.1

  • 600GB FC/SAS capacity first supported with Data ONTAP 7.3.2 and 8.0 RC3/GA
  • 2TB SATA capacity first supported with Data ONTAP 7.3.2 and 8.0 RC3/GA

Note that the capacity points for 600GB FC/SAS and 2TB SATA are supported in Data ONTAP 7.3.2; however, 64-bit aggregates are supported only in Data ONTAP 8.0 and later.

</64-bit aggregates>

Again, the two copied and pasted sections above are not confidential --- although the docs they are contained within are NDA-required (for other information contained within).  For aggregates that are not maximum size, you can figure this out using the sizing policy.  If you have resiliency concerns, that is factored into the recommendations --- more can be read (publicly) at http://www.netapp.com/us/library/technical-reports/tr-3437.html.  TR3437 is the Storage Subsystem Resiliency Guide (updated a couple of weeks ago) and has information that will help explain some of the background here.

And lastly, for 32-bit aggregates:

<32-bit aggregates>

HOW MANY DRIVES CAN BE USED IN A MAXIMUM SIZE 32-BIT AGGREGATE?

In Data ONTAP 7.2.x and earlier, parity drives and physical drive size are included in the 16TB limit for 32-bit aggregates.

In Data ONTAP 7.3.x and 8.0.x, only data drives and usable drive capacity are included in the 16TB limit for 32-bit aggregates. The following recommendations are based on attempting to provide the optimal RAID group layout, as explained in the answer to the question “Should all aggregates always use the default RAID group size?” earlier.

The column descriptions for the following tables are as follows:

  • “Data Drives” is the number of data drives that fit within the maximum aggregate capacity (based on usable drive capacity).
  • “RG Size” is the recommended RAID group size to use for the configuration.
  • “Number of RGs” is the resulting number of RAID groups the aggregate will contain.
  • “Drive Def.” is the number of drives deficient the configuration is from a fully even number of RAID groups.
  • “Data + Parity” is the total number of drives used for the aggregate configuration.

The top entries in the following table show the recommendations for aggregate configurations that are using the maximum number of data drives. In many cases it is better to reduce data drives by a small number in order to achieve a better RAID group layout, as indicated by the bottom numbers, in parentheses.

32-Bit Aggregate Recommendations for All Platforms

Data ONTAP 7.3.x and 8.0.x

Capacity  Type  Data Drives  RG Size  Number of RGs  Drive Def.  Data + Parity
100GB     SSD   86 (84)      24 (23)  4 (4)          2 (0)       94 (92)
300GB     FC    61 (60)      18 (17)  4 (4)          3 (0)       69 (68)
450GB     FC    40 (40)      12 (12)  4 (4)          0 (0)       48 (48)
600GB     FC    29 (28)      17 (16)  2 (2)          1 (0)       33 (32)
300GB     SAS   61 (60)      18 (17)  4 (4)          3 (0)       69 (68)
450GB     SAS   40 (40)      12 (12)  4 (4)          0 (0)       48 (48)
600GB     SAS   29 (28)      17 (16)  2 (2)          1 (0)       33 (32)
500GB     SATA  39 (39)      15 (15)  3 (3)          0 (0)       45 (45)
1TB       SATA  19 (18)      12 (20)  2 (1)          1 (0)       23 (20)
2TB       SATA  9 (9)        11 (11)  1 (1)          0 (0)       11 (11)

The preceding table applies to all platforms with one exception, in regard to the FAS2020 platform. In Data ONTAP 7.3.1 and later, the maximum aggregate capacity for the FAS2020 is 16TB, just as with other platforms. In Data ONTAP 7.3, the actual aggregate size limit for the FAS2020 is 6TB, counting only data drives. Finally, in Data ONTAP 7.2 and earlier, the limit for the FAS2020 is 7TB, counting both data and parity drives. NetApp highly recommends that all FAS2020 systems use Data ONTAP 7.3.1 or later in order to avoid confusion when managing your storage configuration.

Notes for the preceding table:

  • 100GB SSD capacity first supported with Data ONTAP 8.0.1
  • 450GB FC/SAS capacity first supported with Data ONTAP 7.2.5.1
  • 600GB FC/SAS capacity first supported with Data ONTAP 7.3.2 and 8.0 RC3/GA
  • 1TB SATA capacity first supported with Data ONTAP 7.2.3
  • 2TB SATA capacity first supported with Data ONTAP 7.3.2 and 8.0 RC3/GA

</32-bit aggregates>

Note that when figuring out how many drives you can use for your aggregates, you need to reduce the total drive count to the number of drives that are available for the aggregate (minus dedicated root aggregate drives and hot spares). For example, 48 drives with a 3-drive dedicated root aggregate and 2 hot spares leaves 43 drives for data aggregates.

Hopefully this helps.

Regards,

Jay White

Technical Marketing Engineer

Storage, RAID, and System Resiliency

SMNKRISTAPS

Hello,

Is there an updated table for 2013?

mheimberg

Year 3013? You're looking a long way forward ...

If you are a partner employee you may download the latest "Technical FAQ: Storage Subsystem" from fieldportal.netapp.com, with all the new drives and sizes.

Markus

arthursc0

OK, so how would I chop this up?

The filer only has 500GB SATA drives in it.

I have 50 spares.

As far as I can see I would create an aggr of 39 disks (the max for SATA):

aggr create aggrname 39

How, or with what command, would I specify the RG size, or should I let ONTAP define this?

regards

Colin.
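Not an official reply, but a hedged sketch of the syntax in question: on 7-Mode the -r flag sets the RAID group size at creation time, and it can also be changed later via the raidsize aggregate option. With 39 disks, a RG size of 13 happens to give an even layout (three groups) inside the recommended 12-20 range:

  aggr create aggrname -t raid_dp -r 13 39   # three RAID-DP groups of 13 (11 data + 2 parity)
  aggr options aggrname raidsize 13          # set/change the RG size on an existing aggregate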
