ONTAP Discussions

RAID groups with mixed disk sizes and lopsided RAID groups

jtinouye

All,

 

I currently have an 8040 with two shelves of 1.2TB SAS disks that are split between the two nodes (cDOT install).  Current RG size is 20 for these disks.  I have a loop which contains 2x 900GB and 5x 600GB.  I'd like to potentially split these up between the two existing aggregates, maybe with the 900s in one aggr and the 600s in the other, also transitioning to a RG size of 23.  All disk speeds are the same.
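For context, before touching anything my plan was to sanity-check the current layout and spares with something like the following (cDOT CLI; the aggregate name is just a placeholder, and exact commands may vary a bit by release):

::> storage aggregate show -fields raidsize
::> storage aggregate show-status -aggregate aggr_node1
::> storage disk show -container-type spare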

 

Per the storage subsystem FAQ I would be outside the "recommended" zone of +/- 2 disks in RG size, but I have also seen KBs point out that it's fine as long as the new RG isn't smaller than half of the current RG size.

 

Probably won't make much of a difference here, but I wanted to reach out to the community to get your thoughts on this config.

 

Thanks!


asulliva

Hello @jtinouye,

 

When you say "2x900GB and 5x600GB" are you referring to number of disks or number of shelves?

 

It's definitely not recommended, and I don't believe it's possible, to create aggregates with disks of different sizes.  There are some instances where a larger disk will be used to replace a smaller failed disk in an aggregate, but this is not a good thing as you are effectively losing the additional capacity.

 

 

Andrew


jtinouye

Hey Andrew,

 

Thank you for the reply.  It is the number of shelves.  The mixed-disk configuration is certainly possible, although my real question is around performance.  We will keep enough spares in play to make sure that we don't have any right-sizing of drives.

 

The storage subsystem FAQ has a recommendation of homogeneous disks, but states that mixed-disk RGs (as long as they're the same speed) are likely OK.

 

http://www.netapp.com/us/media/tr-3838.pdf

asulliva

Interesting, mixed aggregates are honestly something I've never implemented.  It was impressed upon me VERY strongly when I was a customer not to do it, and I've never followed up... I suppose that's my fault.

 

That being said, 3838 does still have a strong recommendation against mixed aggrs.  Do you mind if I ask why you're mixing?

 

First, let me caveat by saying I'm not a RAID guy... with that out of the way, since you'll be creating new RAID groups I don't believe the "+/- 2 disks" rule applies.  That would normally apply if you have new (or existing) RAID groups that are only partially filled.  Presumably you would be adding 2 new RAID groups (for the 900s) and 5 (for the 600s), creating new, full RAID groups in addition to the existing (full?) RAID groups for the 1.2TB drives.
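To make that concrete, and this is just an illustrative sketch with a made-up aggregate name rather than gospel (check the add-disks man page on your ONTAP version, particularly whether -disksize is the right way to pick out the 600GB spares), the shape of it would be to bump the RAID group size first and then add drives in full-RG multiples:

::> storage aggregate modify -aggregate aggr1_node1 -raidsize 23
::> storage aggregate add-disks -aggregate aggr1_node1 -diskcount 23 -disksize 600

That way each add lands as a complete new RAID group rather than partially filling one.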

 

Andrew


jtinouye

I'm with you there on the mixed-disk configurations.  I knew that people did them, and that they were possible... When I was a professional services guy I was giving out the same recommendations and it was never really questioned.

 

For us, it comes down to managing fewer pools and having larger backing aggregates for back-end performance.

 

You are right on the +/- RG recommendation guidelines...  There was a KB that I read a couple of days ago that stated that we could have smaller RAID groups, but they should not be any smaller than half the current RG size for the aggregate.

isaacs

Yep, totally possible, but keep the smaller drives in their own RG.  As long as the capacity of the RGs is close, you should not run into any problems.  The only concern is that a RG that is smaller will fill up faster, possibly reducing write IOPs (since full RGs won't be used when cache is destaged), so using a larger RG size for the smaller drives would be advised.  The guidance is also, as alluded to above, to add at least half the capacity of a single existing RG (if the current RG size is 16, the new RG should be at least 8) to minimize the impact of having so much free space on so few spindles.
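To put rough numbers on that for this config (assuming RAID-DP, so 2 parity disks per RG, and ignoring right-sizing): a 23-disk RG of 600GB drives is 21 data disks x ~600GB, or roughly 12.6TB, while a 20-disk RG of the 1.2TB drives is 18 data disks x ~1.2TB, or roughly 21.6TB.  So the new 600GB RGs come in at a bit over half the capacity of the existing RGs, which keeps you inside that guidance.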

You should also plan on doing a WAFL reallocate to redistribute the data more evenly across the drives.  (Note: don't do this with SSDs, just HDDs.)
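In cDOT that's along the lines of the following, run against each volume on the expanded aggregate once the new RAID groups are in (SVM and volume names are placeholders; check the reallocation man pages for the exact options on your release):

::> volume reallocation start -vserver svm1 -path /vol/vol1
::> volume reallocation show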

 

- Dan

jtinouye

SIDE NOTE:  BTW, I miss the NetApp Communities podcast you guys had going.  Any plans for future podcasts with the same crew?

asulliva

Thanks, happy to hear you enjoyed the Communities Podcast!  We did start again with the Tech ONTAP Podcast, and we're 21 episodes in (as of tomorrow morning) now.  Feel free to send me email (community username @netapp.com) or podcast@netapp.com with thoughts, suggestions, requests, rants, etc.!

 

Andrew


paulstringfellow

Looks like between yourself and Sully, Jeff, you have this nailed.

 

Can't debate any of these things.  Everything suggested technically works; I think some of it comes down to practicality.

 

For me, I like to keep it simple.  I have enough things that are hard work without creating more of them!

 

Keeping aggregates with the same disk sizes and using multiple aggregates makes lots of logical sense for this setup.  You're not getting penalised for lots of spare disks, because you're working along the lines of RAID group sizes being shelf sizes.

 

So for me, keep it simple!

 

Glad you're a fan of the podcast.  I think it's a great resource for tech folk, NetApp customers but also non-NetApp customers; if you listen to some of the recent ones around strategic design, for example, they're NetApp based but easily adaptable.

 

Great resource.  Here's a link to the return notice; you'll find links to the SoundCloud and iTunes subscriptions for it there.

 

And as you catch up... if you listen to Insight Berlin Day 4, you'll get my podcast debut 🙂 (blatant self-promotion there!)

 

http://community.netapp.com/t5/Tech-OnTap-Articles/NetApp-Podcast-Returns-with-More-Tech-More-Information-and-a-New-Voice/ta-p/109596

 

Hope all this has helped.
