As was previously pointed out, you aren't losing any capacity in this setup. The only time you lose disk capacity with multiple sizes is when disks of different sizes are forced together in a single raid group. That can happen at creation (usually by accident) or when a larger disk is pressed into service as a spare replacement for a compatible smaller disk type: e.g. a 900GB SAS disk fails, no 900GB spares are available but an 1800GB SAS spare is, so the 1800GB disk is treated as a 900GB disk and stays that way going forward. The only way to get the capacity back in that situation is to manually fail the "fake capacity" disk out in favor of a disk of the real desired size, after which the larger disk goes back to being a full-size spare. As long as you keep adequate spares of each size in use, you're fine on the capacity front.
The key issue with mixing disk sizes in an aggregate on raid group boundaries is that performance becomes unpredictable. Consider this metric: IOPS per GB. As a basic example, using round numbers rather than right-sized capacities just to keep the math simple, assume 900GB and 1800GB drives, 15 data drives per raid group, and 200 IOPS available per drive (about right for 10K drives).
Raid groups made up of 900GB drives have 15 * 900 = 13500GB of capacity and 15 * 200 = 3000 IOPS of capability, or 0.22 IOPS per GB. Raid groups made up of 1800GB drives have 27000GB of capacity but still the same 3000 IOPS of capability, or only 0.11 IOPS per GB.
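If you want to sanity-check those figures, here's a quick back-of-the-envelope sketch. The 15 data drives per raid group and 200 IOPS per drive are the same round-number assumptions as above, not measured values:

```python
# Rough IOPS-per-GB for each raid group type (round numbers, not right-sized).
DATA_DRIVES_PER_RG = 15
IOPS_PER_DRIVE = 200   # roughly right for 10K drives

for drive_gb in (900, 1800):
    rg_capacity_gb = DATA_DRIVES_PER_RG * drive_gb    # 13500 or 27000
    rg_iops = DATA_DRIVES_PER_RG * IOPS_PER_DRIVE     # 3000 either way
    print(f"{drive_gb}GB raid group: {rg_iops / rg_capacity_gb:.2f} IOPS per GB")

# 900GB  -> 0.22 IOPS per GB
# 1800GB -> 0.11 IOPS per GB
```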
So in a mixed-size aggregate, as data spreads across all the disks, some data will be served at 0.22 IOPS per GB and some at 0.11 IOPS per GB. It looks like you have roughly 11 raid groups split roughly 50/50 between the two sizes. Since the larger disks are twice the size of the smaller ones, that works out to roughly a 1/3 - 2/3 split in capacity. Over time, about 1/3 of your data will run at 0.22 IOPS per GB and 2/3 at 0.11 IOPS per GB, so adding the larger disks slows down the entire aggregate on average (see the sketch below).
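To put a rough number on that average slowdown, assuming the 1/3 - 2/3 capacity split described above:

```python
# Blended IOPS-per-GB once data is spread across the whole mixed aggregate,
# weighted by the rough 1/3 (900GB raid groups) vs 2/3 (1800GB) capacity split.
blended = (1 / 3) * 0.22 + (2 / 3) * 0.11
print(f"{blended:.2f} IOPS per GB")   # ~0.15, versus 0.22 for an all-900GB aggregate
```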
So, the question: does this matter? That's up to you to answer. Adding the 1800s to the same aggregate slows down the aggregate as a whole. Since data can be written anywhere in the aggregate (WAFL), even data that today resides wholly on the 900GB disks will eventually make its way onto the 1800s and potentially slow down. But is your I/O load heavy enough to make that an important consideration? Or is it more important to have a single management point (the aggregate) rather than, say, two separate aggregates?
The other way to go is two separate aggregates, each with a single disk type. That has the same performance issue when measured "as a whole" across both aggregates: the average IOPS capability of the system relative to its total capacity still goes down. But with two aggregates you have fine-tuning control based on where you place specific volumes, on the aggregate that serves data faster per GB or on the one that is slower per GB. That may be exactly what you want. Then again, if you just need a lot of capacity in a single chunk, that's what you need, and a single aggregate provides it.
Also, if I've counted the disks correctly, it appears the 1800GB disks are in raid groups of a different size than the 900GB disks. That design has the same performance pitfalls as mixing sizes within an aggregate, though on a much smaller scale.
To sum up: what you've created is an aggregate that will perform unpredictably over time depending on where any given data is written. Measured against total data throughput, it will generally slow down as a whole as capacity utilization increases and/or data spreads across all the disks.
Unless there is a really good, well thought out reason to do it this way, I add my vote for not mixing sizes, as a best practice.
Hope this helps.
Bob Greenwald
Lead Storage Engineer | Consilio, LLC
NCIE SAN Clustered, Data Protection
Kudos and accepted solutions are always appreciated.