ONTAP Hardware

Mixed disk size aggregates

tyrone_owen_1
18,602 Views

Hi

 

FAS8080, ONTAP 8.3.1P1

 

NetApp have expanded an aggregate which previously contained only 900GB disks with 1800GB disks, resulting in an aggregate with mixed disk sizes. This approach is alien to me, and even though a new feature in ONTAP allows mixed disks in an aggregate to maintain their usable capacity, I was still under the impression that mixed disk aggregates were to be avoided and that best practice was an aggregate with a single disk type (excluding hybrid aggs).

 

What are the pros and cons of this setup?

 

The advantage, I suppose, is the spindle count; however, to my mind this will only be effective up to the point where the smaller disks become full, after which the RAID groups containing the larger disks will start getting hot, unless there is some sort of automatic or manual re-balancing (reallocate). Part of the issue I have is that how mixed disk aggregates function doesn't seem to be documented, or at least I can't find it.

 

I'm thinking of backing out this config by splitting the aggregate into two, as I don't know what the consequences will be down the road.

 

Thanks


10 REPLIES

aborzenkov
18,589 Views

a new feature in ONTAP allows mixed disks in an aggregate

This has been possible for as long as I can remember (which is 15+ years), so there is nothing new here. The obvious drawback of this approach is that the large disks are utilized more than the small ones, so some data is striped across a smaller number of disks. You cannot predict which data, and it is impossible to give a blanket statement about the impact. You should probably avoid it for a high-load OLTP application; OTOH, for a simple file server my guess would be that nobody will notice.

tyrone_owen_1
18,577 Views

I didn't realise that historically you could maintain the usable size of larger disks within mixed aggregates; I always thought they were right-sized to the smaller capacity disks. I never really looked into it until now as it has never been an issue for me.

 

Is there anything documented about the behaviour?

nasmanrox
18,560 Views

I agree with Tyrone. Unless it's a hybrid aggr, I do not believe it's recommended to mix the sizes. Your larger disks will only function as the smaller size.

aborzenkov
18,558 Views

I do not believe it's recommended to mix the sizes.

That's true.

 


Your larger disks will only function as the smaller size.

And that's wrong, sorry. That applies within a single raid group, not to the whole aggregate. You can have multiple raid groups with different disk sizes; in each raid group the disks' full capacity will be used.

tyrone_owen_1
18,552 Views

Here are the details of one of the aggregates in question. As you can see, there are two different disk sizes, each showing its own usable size rather than being right-sized to the smallest disk. It doesn't really matter whether this is a new feature or not; I'm still not comfortable with it given the best practice. I was really after pros and cons, or anyone with experience of running this configuration. I'm tempted just to back it out as I have the space to move things around at the moment.

 

storage disk show -aggregate xx_aggr_sas_01 -fields aggregate,raid-group,usable-size,physical-size
disk    aggregate      physical-size raid-group usable-size
------- -------------- ------------- ---------- -----------
1.10.1  xx_aggr_sas_01 838.4GB       rg0        836.9GB
1.10.2  xx_aggr_sas_01 838.4GB       rg0        836.9GB
1.10.3  xx_aggr_sas_01 838.4GB       rg0        836.9GB
1.10.4  xx_aggr_sas_01 838.4GB       rg0        836.9GB
1.10.5  xx_aggr_sas_01 838.4GB       rg0        836.9GB
1.10.6  xx_aggr_sas_01 838.4GB       rg0        836.9GB
1.10.7  xx_aggr_sas_01 838.4GB       rg0        836.9GB
1.10.8  xx_aggr_sas_01 838.4GB       rg0        836.9GB
1.10.9  xx_aggr_sas_01 838.4GB       rg0        836.9GB
1.10.10 xx_aggr_sas_01 838.4GB       rg0        836.9GB
1.10.11 xx_aggr_sas_01 838.4GB       rg0        836.9GB
1.10.12 xx_aggr_sas_01 838.4GB       rg0        836.9GB
1.10.13 xx_aggr_sas_01 838.4GB       rg0        836.9GB
1.10.14 xx_aggr_sas_01 838.4GB       rg0        836.9GB
1.10.15 xx_aggr_sas_01 838.4GB       rg0        836.9GB
1.10.16 xx_aggr_sas_01 838.4GB       rg0        836.9GB
1.10.17 xx_aggr_sas_01 838.4GB       rg0        836.9GB
1.10.18 xx_aggr_sas_01 838.4GB       rg0        836.9GB
1.10.19 xx_aggr_sas_01 838.4GB       rg0        836.9GB
1.10.20 xx_aggr_sas_01 838.4GB       rg1        836.9GB
1.10.21 xx_aggr_sas_01 838.4GB       rg1        836.9GB
1.10.22 xx_aggr_sas_01 838.4GB       rg1        836.9GB
1.10.23 xx_aggr_sas_01 838.4GB       rg1        836.9GB
1.11.1  xx_aggr_sas_01 838.4GB       rg1        836.9GB
1.11.2  xx_aggr_sas_01 838.4GB       rg1        836.9GB
1.11.3  xx_aggr_sas_01 838.4GB       rg1        836.9GB
1.11.4  xx_aggr_sas_01 838.4GB       rg1        836.9GB
1.11.5  xx_aggr_sas_01 838.4GB       rg1        836.9GB
1.11.6  xx_aggr_sas_01 838.4GB       rg1        836.9GB
1.11.7  xx_aggr_sas_01 838.4GB       rg1        836.9GB
1.11.8  xx_aggr_sas_01 838.4GB       rg1        836.9GB
1.11.9  xx_aggr_sas_01 838.4GB       rg1        836.9GB
1.11.10 xx_aggr_sas_01 838.4GB       rg1        836.9GB
1.11.11 xx_aggr_sas_01 838.4GB       rg1        836.9GB
1.11.12 xx_aggr_sas_01 838.4GB       rg1        836.9GB
1.11.13 xx_aggr_sas_01 838.4GB       rg1        836.9GB
1.11.14 xx_aggr_sas_01 838.4GB       rg1        836.9GB
1.11.15 xx_aggr_sas_01 838.4GB       rg1        836.9GB
1.11.16 xx_aggr_sas_01 838.4GB       rg2        836.9GB
1.11.17 xx_aggr_sas_01 838.4GB       rg2        836.9GB
1.11.18 xx_aggr_sas_01 838.4GB       rg2        836.9GB
1.11.19 xx_aggr_sas_01 838.4GB       rg2        836.9GB
1.11.20 xx_aggr_sas_01 838.4GB       rg2        836.9GB
1.11.21 xx_aggr_sas_01 838.4GB       rg2        836.9GB
1.11.23 xx_aggr_sas_01 838.4GB       rg2        836.9GB
1.12.1  xx_aggr_sas_01 838.4GB       rg2        836.9GB
1.12.2  xx_aggr_sas_01 838.4GB       rg2        836.9GB
1.12.3  xx_aggr_sas_01 838.4GB       rg2        836.9GB
1.12.4  xx_aggr_sas_01 838.4GB       rg2        836.9GB
1.12.5  xx_aggr_sas_01 838.4GB       rg2        836.9GB
1.12.6  xx_aggr_sas_01 838.4GB       rg2        836.9GB
1.12.7  xx_aggr_sas_01 838.4GB       rg2        836.9GB
1.12.8  xx_aggr_sas_01 838.4GB       rg2        836.9GB
1.12.9  xx_aggr_sas_01 838.4GB       rg2        836.9GB
1.12.10 xx_aggr_sas_01 838.4GB       rg2        836.9GB
1.12.11 xx_aggr_sas_01 838.4GB       rg2        836.9GB
1.12.12 xx_aggr_sas_01 838.4GB       rg2        836.9GB
1.12.13 xx_aggr_sas_01 838.4GB       rg3        836.9GB
1.12.14 xx_aggr_sas_01 838.4GB       rg3        836.9GB
1.12.15 xx_aggr_sas_01 838.4GB       rg3        836.9GB
1.12.16 xx_aggr_sas_01 838.4GB       rg3        836.9GB
1.12.17 xx_aggr_sas_01 838.4GB       rg3        836.9GB
1.12.18 xx_aggr_sas_01 838.4GB       rg3        836.9GB
1.12.19 xx_aggr_sas_01 838.4GB       rg3        836.9GB
1.12.20 xx_aggr_sas_01 838.4GB       rg3        836.9GB
1.12.21 xx_aggr_sas_01 838.4GB       rg3        836.9GB
1.12.22 xx_aggr_sas_01 838.4GB       rg3        836.9GB
1.12.23 xx_aggr_sas_01 838.4GB       rg3        836.9GB
1.13.0  xx_aggr_sas_01 838.4GB       rg3        836.9GB
1.13.1  xx_aggr_sas_01 838.4GB       rg3        836.9GB
1.13.2  xx_aggr_sas_01 838.4GB       rg3        836.9GB
1.13.3  xx_aggr_sas_01 838.4GB       rg3        836.9GB
1.13.4  xx_aggr_sas_01 838.4GB       rg3        836.9GB
1.13.5  xx_aggr_sas_01 838.4GB       rg3        836.9GB
1.13.6  xx_aggr_sas_01 838.4GB       rg3        836.9GB
1.13.7  xx_aggr_sas_01 838.4GB       rg3        836.9GB
1.13.8  xx_aggr_sas_01 838.4GB       rg4        836.9GB
1.13.9  xx_aggr_sas_01 838.4GB       rg4        836.9GB
1.13.10 xx_aggr_sas_01 838.4GB       rg4        836.9GB
1.13.11 xx_aggr_sas_01 838.4GB       rg4        836.9GB
1.13.12 xx_aggr_sas_01 838.4GB       rg4        836.9GB
1.13.13 xx_aggr_sas_01 838.4GB       rg4        836.9GB
1.13.14 xx_aggr_sas_01 838.4GB       rg4        836.9GB
1.13.15 xx_aggr_sas_01 838.4GB       rg4        836.9GB
1.13.16 xx_aggr_sas_01 838.4GB       rg4        836.9GB
1.13.17 xx_aggr_sas_01 838.4GB       rg4        836.9GB
1.13.18 xx_aggr_sas_01 838.4GB       rg4        836.9GB
1.13.19 xx_aggr_sas_01 838.4GB       rg4        836.9GB
1.13.20 xx_aggr_sas_01 838.4GB       rg4        836.9GB
1.13.21 xx_aggr_sas_01 838.4GB       rg4        836.9GB
1.13.22 xx_aggr_sas_01 838.4GB       rg4        836.9GB
1.13.23 xx_aggr_sas_01 838.4GB       rg4        836.9GB
1.14.1  xx_aggr_sas_01 838.4GB       rg4        836.9GB
1.14.2  xx_aggr_sas_01 838.4GB       rg4        836.9GB
1.14.3  xx_aggr_sas_01 838.4GB       rg4        836.9GB
1.14.6  xx_aggr_sas_01 1.64TB        rg5        1.63TB
1.14.7  xx_aggr_sas_01 1.64TB        rg5        1.63TB
1.14.8  xx_aggr_sas_01 1.64TB        rg5        1.63TB
1.14.9  xx_aggr_sas_01 1.64TB        rg6        1.63TB
1.14.10 xx_aggr_sas_01 1.64TB        rg6        1.63TB
1.14.11 xx_aggr_sas_01 1.64TB        rg6        1.63TB
1.14.12 xx_aggr_sas_01 1.64TB        rg6        1.63TB
1.14.13 xx_aggr_sas_01 1.64TB        rg7        1.63TB
1.14.14 xx_aggr_sas_01 1.64TB        rg7        1.63TB
1.14.15 xx_aggr_sas_01 1.64TB        rg7        1.63TB
1.14.16 xx_aggr_sas_01 1.64TB        rg7        1.63TB
1.14.17 xx_aggr_sas_01 1.64TB        rg8        1.63TB
1.14.18 xx_aggr_sas_01 1.64TB        rg8        1.63TB
1.14.19 xx_aggr_sas_01 1.64TB        rg8        1.63TB
1.14.20 xx_aggr_sas_01 1.64TB        rg8        1.63TB
1.14.21 xx_aggr_sas_01 1.64TB        rg9        1.63TB
1.14.22 xx_aggr_sas_01 1.64TB        rg9        1.63TB
1.14.23 xx_aggr_sas_01 1.64TB        rg9        1.63TB
1.15.0  xx_aggr_sas_01 1.64TB        rg5        1.63TB
1.15.1  xx_aggr_sas_01 1.64TB        rg5        1.63TB
1.15.2  xx_aggr_sas_01 1.64TB        rg5        1.63TB
1.15.3  xx_aggr_sas_01 1.64TB        rg5        1.63TB
1.15.4  xx_aggr_sas_01 1.64TB        rg6        1.63TB
1.15.5  xx_aggr_sas_01 1.64TB        rg6        1.63TB
1.15.6  xx_aggr_sas_01 1.64TB        rg6        1.63TB
1.15.7  xx_aggr_sas_01 1.64TB        rg7        1.63TB
1.15.8  xx_aggr_sas_01 1.64TB        rg7        1.63TB
1.15.9  xx_aggr_sas_01 1.64TB        rg7        1.63TB
1.15.10 xx_aggr_sas_01 1.64TB        rg7        1.63TB
1.15.11 xx_aggr_sas_01 1.64TB        rg8        1.63TB
1.15.12 xx_aggr_sas_01 1.64TB        rg8        1.63TB
1.15.13 xx_aggr_sas_01 1.64TB        rg8        1.63TB
1.15.14 xx_aggr_sas_01 1.64TB        rg8        1.63TB
1.15.15 xx_aggr_sas_01 1.64TB        rg9        1.63TB
1.15.16 xx_aggr_sas_01 1.64TB        rg9        1.63TB
1.15.17 xx_aggr_sas_01 1.64TB        rg9        1.63TB
1.15.18 xx_aggr_sas_01 1.64TB        rg9        1.63TB
1.15.19 xx_aggr_sas_01 1.64TB        rg10       1.63TB
1.15.20 xx_aggr_sas_01 1.64TB        rg10       1.63TB
1.15.21 xx_aggr_sas_01 1.64TB        rg10       1.63TB
1.15.22 xx_aggr_sas_01 1.64TB        rg10       1.63TB
1.15.23 xx_aggr_sas_01 1.64TB        rg10       1.63TB
1.16.0  xx_aggr_sas_01 1.64TB        rg5        1.63TB
1.16.1  xx_aggr_sas_01 1.64TB        rg5        1.63TB
1.16.2  xx_aggr_sas_01 1.64TB        rg5        1.63TB
1.16.3  xx_aggr_sas_01 1.64TB        rg5        1.63TB
1.16.4  xx_aggr_sas_01 1.64TB        rg6        1.63TB
1.16.5  xx_aggr_sas_01 1.64TB        rg6        1.63TB
1.16.6  xx_aggr_sas_01 1.64TB        rg6        1.63TB
1.16.7  xx_aggr_sas_01 1.64TB        rg7        1.63TB
1.16.8  xx_aggr_sas_01 1.64TB        rg7        1.63TB
1.16.9  xx_aggr_sas_01 1.64TB        rg7        1.63TB
1.16.10 xx_aggr_sas_01 1.64TB        rg7        1.63TB
1.16.11 xx_aggr_sas_01 1.64TB        rg8        1.63TB
1.16.12 xx_aggr_sas_01 1.64TB        rg8        1.63TB
1.16.13 xx_aggr_sas_01 1.64TB        rg8        1.63TB
1.16.14 xx_aggr_sas_01 1.64TB        rg8        1.63TB
1.16.15 xx_aggr_sas_01 1.64TB        rg9        1.63TB
1.16.16 xx_aggr_sas_01 1.64TB        rg9        1.63TB
1.16.17 xx_aggr_sas_01 1.64TB        rg9        1.63TB
1.16.18 xx_aggr_sas_01 1.64TB        rg9        1.63TB
1.16.19 xx_aggr_sas_01 1.64TB        rg10       1.63TB
1.16.20 xx_aggr_sas_01 1.64TB        rg10       1.63TB
1.16.21 xx_aggr_sas_01 1.64TB        rg10       1.63TB
1.16.22 xx_aggr_sas_01 1.64TB        rg10       1.63TB
1.17.0  xx_aggr_sas_01 1.64TB        rg5        1.63TB
1.17.1  xx_aggr_sas_01 1.64TB        rg5        1.63TB
1.17.2  xx_aggr_sas_01 1.64TB        rg5        1.63TB
1.17.3  xx_aggr_sas_01 1.64TB        rg5        1.63TB
1.17.4  xx_aggr_sas_01 1.64TB        rg6        1.63TB
1.17.5  xx_aggr_sas_01 1.64TB        rg6        1.63TB
1.17.6  xx_aggr_sas_01 1.64TB        rg6        1.63TB
1.17.7  xx_aggr_sas_01 1.64TB        rg6        1.63TB
1.17.8  xx_aggr_sas_01 1.64TB        rg7        1.63TB
1.17.9  xx_aggr_sas_01 1.64TB        rg7        1.63TB
1.17.10 xx_aggr_sas_01 1.64TB        rg7        1.63TB
1.17.11 xx_aggr_sas_01 1.64TB        rg7        1.63TB
1.17.12 xx_aggr_sas_01 1.64TB        rg8        1.63TB
1.17.13 xx_aggr_sas_01 1.64TB        rg8        1.63TB
1.17.14 xx_aggr_sas_01 1.64TB        rg8        1.63TB
1.17.15 xx_aggr_sas_01 1.64TB        rg8        1.63TB
1.17.16 xx_aggr_sas_01 1.64TB        rg9        1.63TB
1.17.17 xx_aggr_sas_01 1.64TB        rg9        1.63TB
1.17.18 xx_aggr_sas_01 1.64TB        rg9        1.63TB
1.17.19 xx_aggr_sas_01 1.64TB        rg9        1.63TB
1.17.20 xx_aggr_sas_01 1.64TB        rg10       1.63TB
1.17.21 xx_aggr_sas_01 1.64TB        rg10       1.63TB
1.17.22 xx_aggr_sas_01 1.64TB        rg10       1.63TB
1.17.23 xx_aggr_sas_01 1.64TB        rg10       1.63TB
1.18.0  xx_aggr_sas_01 1.64TB        rg5        1.63TB
1.18.1  xx_aggr_sas_01 1.64TB        rg5        1.63TB
1.18.2  xx_aggr_sas_01 1.64TB        rg5        1.63TB
1.18.3  xx_aggr_sas_01 1.64TB        rg5        1.63TB
1.18.4  xx_aggr_sas_01 1.64TB        rg6        1.63TB
1.18.5  xx_aggr_sas_01 1.64TB        rg6        1.63TB
1.18.6  xx_aggr_sas_01 1.64TB        rg6        1.63TB
1.18.7  xx_aggr_sas_01 1.64TB        rg6        1.63TB
1.18.8  xx_aggr_sas_01 1.64TB        rg7        1.63TB
1.18.9  xx_aggr_sas_01 1.64TB        rg7        1.63TB
1.18.10 xx_aggr_sas_01 1.64TB        rg7        1.63TB
1.18.11 xx_aggr_sas_01 1.64TB        rg7        1.63TB
1.18.12 xx_aggr_sas_01 1.64TB        rg8        1.63TB
1.18.13 xx_aggr_sas_01 1.64TB        rg8        1.63TB
1.18.14 xx_aggr_sas_01 1.64TB        rg8        1.63TB
1.18.15 xx_aggr_sas_01 1.64TB        rg9        1.63TB
1.18.16 xx_aggr_sas_01 1.64TB        rg9        1.63TB
1.18.17 xx_aggr_sas_01 1.64TB        rg9        1.63TB
1.18.18 xx_aggr_sas_01 1.64TB        rg9        1.63TB
1.18.19 xx_aggr_sas_01 1.64TB        rg10       1.63TB
1.18.20 xx_aggr_sas_01 1.64TB        rg10       1.63TB
1.18.21 xx_aggr_sas_01 1.64TB        rg10       1.63TB
1.18.22 xx_aggr_sas_01 1.64TB        rg10       1.63TB
1.18.23 xx_aggr_sas_01 1.64TB        rg10       1.63TB
1.19.0  xx_aggr_sas_01 1.64TB        rg5        1.63TB
1.19.1  xx_aggr_sas_01 1.64TB        rg5        1.63TB
1.19.2  xx_aggr_sas_01 1.64TB        rg5        1.63TB
1.19.3  xx_aggr_sas_01 1.64TB        rg6        1.63TB
1.19.4  xx_aggr_sas_01 1.64TB        rg6        1.63TB
1.19.5  xx_aggr_sas_01 1.64TB        rg6        1.63TB
1.19.6  xx_aggr_sas_01 1.64TB        rg6        1.63TB
1.19.7  xx_aggr_sas_01 1.64TB        rg7        1.63TB
1.19.8  xx_aggr_sas_01 1.64TB        rg7        1.63TB
1.19.9  xx_aggr_sas_01 1.64TB        rg7        1.63TB
1.19.10 xx_aggr_sas_01 1.64TB        rg8        1.63TB
1.19.11 xx_aggr_sas_01 1.64TB        rg8        1.63TB
1.19.12 xx_aggr_sas_01 1.64TB        rg8        1.63TB
1.19.13 xx_aggr_sas_01 1.64TB        rg8        1.63TB
1.19.14 xx_aggr_sas_01 1.64TB        rg9        1.63TB
1.19.15 xx_aggr_sas_01 1.64TB        rg9        1.63TB
1.19.16 xx_aggr_sas_01 1.64TB        rg9        1.63TB
1.19.17 xx_aggr_sas_01 1.64TB        rg9        1.63TB
1.19.18 xx_aggr_sas_01 1.64TB        rg10       1.63TB
1.19.19 xx_aggr_sas_01 1.64TB        rg10       1.63TB
1.19.20 xx_aggr_sas_01 1.64TB        rg10       1.63TB
1.19.21 xx_aggr_sas_01 1.64TB        rg10       1.63TB
1.19.22 xx_aggr_sas_01 1.64TB        rg10       1.63TB
231 entries were displayed.
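 

(For anyone who wants to repeat this check, here's a minimal Python sketch of how output like the above can be summarised by disk size. It assumes the table has been saved to a plain-text file, here called disks.txt - a hypothetical name - with the same columns.)

from collections import defaultdict

counts = defaultdict(int)   # usable-size string -> number of disks
groups = defaultdict(set)   # usable-size string -> raid groups using that size

with open("disks.txt") as f:   # hypothetical file holding the listing above
    for line in f:
        parts = line.split()
        # Expected columns: disk, aggregate, physical-size, raid-group, usable-size
        if len(parts) == 5 and parts[1].endswith("_aggr_sas_01"):
            disk, aggr, phys, rg, usable = parts
            counts[usable] += 1
            groups[usable].add(rg)

for size in sorted(counts):
    print(f"{size}: {counts[size]} disks in raid groups {sorted(groups[size])}")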

bobshouseofcards
18,495 Views

As was previously pointed out, you aren't losing any capacity in this setup.  The only time you lose disk capacity from multiple sizes is when multiple sizes are forced together in a raidgroup.  This can happen on creation (generally accidentally) or when a larger disk is pressed into service as a spare replacement for a compatible smaller disk type: e.g. a 900GB SAS disk fails and no 900GB spares are available but an 1800GB SAS spare is, so the 1800GB disk is treated as a 900GB disk and takes over going forward.  The only way to get capacity back in that situation is to manually fail out the "fake capacity" disk, replace it with a disk of the real desired capacity, and then return the larger disk to the spare pool as a full size spare.  So long as you keep adequate spares of the sizes needed, you're good on the capacity front.

 

The key point about mixing disk sizes in an aggregate on raid group boundaries is that performance becomes unpredictable.  Consider this metric: IOPs per GB.  As a basic example, and using round numbers rather than right-sized capacities just to keep the math simple, assume 900GB and 1800GB drives with 15 data drives per RAID group and 200 IOPs available per drive (about right for 10K drives).

 

Raidgroups that are made up of 900GB drives have 15 * 900 = 13500GB capacity and 3000 IOPs capability, or 0.22 IOPs per GB.  As you might expect, raidgroups that are made up of 1800GB drives have 27000GB capacity and yet still the same 3000 IOPs capability, or only 0.11 IOPs per GB.

 

So in a mixed size aggregate, as data is spread across all the disks, you'll see some data accessed at 0.22 IOPs per GB and some at 0.11 IOPs per GB.  It looks like you have roughly 11 raid groups, split roughly 50/50 between the two sizes.  Since the larger disks are twice the size of the smaller ones, you have roughly a 1/3 - 2/3 split in capacity.  Over time, 1/3 of your data will run at 0.22 IOPs per GB and 2/3 will run at 0.11 IOPs per GB, so the addition of the larger disks slows down the entire aggregate on average.
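 

(To make the arithmetic easy to check, here's the same back-of-the-envelope calculation as a short Python sketch. The drive counts, per-drive IOPs and the 1/3 - 2/3 capacity split are the round-number assumptions above, not measured values.)

DATA_DRIVES_PER_RG = 15   # round-number assumption from the example above
IOPS_PER_DRIVE = 200      # roughly right for 10K SAS drives

def iops_per_gb(drive_size_gb):
    # IOPs-per-GB for a raid group built from a single drive size
    capacity_gb = DATA_DRIVES_PER_RG * drive_size_gb
    iops = DATA_DRIVES_PER_RG * IOPS_PER_DRIVE
    return iops / capacity_gb

small = iops_per_gb(900)    # ~0.22 IOPs per GB
large = iops_per_gb(1800)   # ~0.11 IOPs per GB

# Capacity-weighted average for a mixed aggregate that is roughly
# 1/3 small-disk capacity and 2/3 large-disk capacity:
blended = (1/3) * small + (2/3) * large

print(round(small, 2), round(large, 2), round(blended, 2))   # 0.22 0.11 0.15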

 

So, the question: does this matter?  That's up to you to answer.  The addition of the 1800s to the same aggregate slows down the aggregate as a whole.  Since data is written anywhere in the aggregate (WAFL), eventually even data that today resides wholly on the 900GB disks will make its way to the 1800s and potentially slow down.  But is your IO load heavy enough to make this an important consideration?  Or is it more important to have a single management point (the aggregate) rather than, say, two separate aggregates?

 

The other way is two separate aggregates, each with just one disk size.  That has the same issue with performance when measured "as a whole" across both aggregates: the average IOPs capability of the system relative to its capacity still goes down.  But with two aggregates you have fine-tuning control based on where you put specific volumes - the aggregate that can process data faster per GB or the one that is slower per GB.  That may be what you want.  Then again, if you just need a lot of capacity in a single chunk, well, that's what you need and a single aggregate gets it.

 

Also - it appears that the 1800GB disks are in raid groups of a different size than the 900GB disks, if I've counted the disks correctly.  That design has the same performance pitfalls as mixing sizes in an aggregate, though on a much smaller scale.

 

To sum up - what you've created is an aggregate that will perform unpredictably over time depending on where any given data is written.  Measured against total data throughput, it will generally slow down as a whole as capacity utilization increases and/or data spreads over all the disks.

 

Unless there is a really good and well thought out reason to do it this way, I add my vote to not mixing sizes, as a best practice.

 

 

Hope this helps.

 

Bob Greenwald

Lead Storage Engineer | Consilio, LLC

NCIE SAN Clustered, Data Protection

 

 

Kudos and accepted solutions are always appreciated.

tyrone_owen_1
18,482 Views

Bob, thank you for your reply.

 

You've articulated and confirmed the gut feeling I had about mixed disk aggregates in this context. I'm going to back out the configuration.

 

Thanks again

Doston
12,437 Views

What if it's the opposite though?  What if you already have an aggregate with 2 raid groups of all 1.8TB drives and you'd like to add another raid group of all 900GB drives?  Would that impact performance at all?

AlexDawson
12,377 Views

Hi there! Bit of an old thread, so you might not get too many responses here.

 

As I have heard it explained, ONTAP schedules its writes around the size of the smallest disks (larger capacity drives usually have better per-drive performance). If it uses a certain percentage of time to write data and knows it can write "X" GB in a write cycle, the amount of data it can write in any one operation across all of the disks is reduced by the smaller disks. That can make it slower to clear NVRAM checkpoints, which can lead to worse performance, sometimes significantly so.
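 

(A toy model of what I mean, as a minimal Python sketch. The per-disk write budgets per cycle are invented numbers purely for illustration, not ONTAP internals; the disk counts are taken from the aggregate listing earlier in the thread.)

# Assumed GB each disk could absorb per write cycle (illustrative only)
per_cycle_gb = {"900GB": 0.5, "1800GB": 1.0}
disk_counts = {"900GB": 95, "1800GB": 136}   # counts from the listing above

# If each disk could take its own chunk size per cycle:
unconstrained = sum(per_cycle_gb[size] * n for size, n in disk_counts.items())

# If the cycle is scheduled around what the smallest disks can absorb:
constrained = min(per_cycle_gb.values()) * sum(disk_counts.values())

print(unconstrained, constrained)   # 183.5 vs 115.5 GB per cycle in this toy model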

 

I've mixed 8TB and 10TB drives in the same aggregate before and it was fine, but it was a massive aggregate with a sparse workload. I don't think I'd suggest mixing 900GB and 1800GB drives together.

Madhu1
10,557 Views

Awesome explanation, Bob. How are you doing?
