AFF
I have an AFF8080 that originally had a shelf and a half: 18x 3.8TB SSDs assigned to each node. Currently each node has a partitioned 16-disk root_aggr and a 17-disk data_aggr, leaving 1 spare.
I added a full 24-disk shelf and assigned 12 disks to node 1 and 12 disks to node 2.
Should I add the new disks into the same partitioned RAID group on each node, or create a new non-partitioned RAID group with 11 disks/1 spare in the same aggregate? I just want to expand data_aggr01 and data_aggr02 and not have to create a new aggregate on each node.
disk    | owner       | container-type | container name           | type
2.10.12 | cluster1-01 | shared         | data_aggr01, root_aggr01 | SSD
2.10.13 | cluster1-01 | shared         | data_aggr01, root_aggr01 | SSD
2.10.14 | cluster1-01 | shared         | data_aggr01, root_aggr01 | SSD
2.10.15 | cluster1-01 | shared         | data_aggr01, root_aggr01 | SSD
2.10.16 | cluster1-01 | shared         | data_aggr01, root_aggr01 | SSD
2.10.17 | cluster1-01 | shared         | data_aggr01, root_aggr01 | SSD
2.10.18 | cluster1-01 | shared         | data_aggr01, root_aggr01 | SSD
2.10.19 | cluster1-01 | shared         | data_aggr01, root_aggr01 | SSD
2.10.20 | cluster1-01 | shared         | -                        | SSD
2.10.21 | cluster1-01 | shared         | data_aggr01, root_aggr01 | SSD
2.10.22 | cluster1-01 | shared         | data_aggr01              | SSD
2.10.23 | cluster1-01 | shared         | data_aggr01, root_aggr01 | SSD
5.0.6   | cluster1-01 | shared         | data_aggr01, root_aggr01 | SSD
5.0.7   | cluster1-01 | shared         | data_aggr01, root_aggr01 | SSD
5.0.8   | cluster1-01 | shared         | data_aggr01, root_aggr01 | SSD
5.0.9   | cluster1-01 | shared         | data_aggr01, root_aggr01 | SSD
5.0.10  | cluster1-01 | shared         | data_aggr01, root_aggr01 | SSD
5.0.11  | cluster1-01 | shared         | data_aggr01, root_aggr01 | SSD
5.1.0   | cluster1-01 | spare          | Pool0                    | SSD
5.1.1   | cluster1-01 | spare          | Pool0                    | SSD
5.1.2   | cluster1-01 | spare          | Pool0                    | SSD
5.1.3   | cluster1-01 | spare          | Pool0                    | SSD
5.1.4   | cluster1-01 | spare          | Pool0                    | SSD
5.1.5   | cluster1-01 | spare          | Pool0                    | SSD
5.1.6   | cluster1-01 | spare          | Pool0                    | SSD
5.1.7   | cluster1-01 | spare          | Pool0                    | SSD
5.1.8   | cluster1-01 | spare          | Pool0                    | SSD
5.1.9   | cluster1-01 | spare          | Pool0                    | SSD
5.1.10  | cluster1-01 | spare          | Pool0                    | SSD
5.1.11  | cluster1-01 | spare          | Pool0                    | SSD
2.10.0  | cluster1-02 | shared         | data_aggr02, root_aggr02 | SSD
2.10.1  | cluster1-02 | shared         | data_aggr02, root_aggr02 | SSD
2.10.2  | cluster1-02 | shared         | data_aggr02, root_aggr02 | SSD
2.10.3  | cluster1-02 | shared         | data_aggr02, root_aggr02 | SSD
2.10.4  | cluster1-02 | shared         | data_aggr02, root_aggr02 | SSD
2.10.5  | cluster1-02 | shared         | data_aggr02, root_aggr02 | SSD
2.10.6  | cluster1-02 | shared         | data_aggr02, root_aggr02 | SSD
2.10.7  | cluster1-02 | shared         | data_aggr02, root_aggr02 | SSD
2.10.8  | cluster1-02 | shared         | data_aggr02, root_aggr02 | SSD
2.10.9  | cluster1-02 | shared         | data_aggr02              | SSD
2.10.10 | cluster1-02 | shared         | data_aggr02, root_aggr02 | SSD
2.10.11 | cluster1-02 | shared         | -                        | SSD
5.0.0   | cluster1-02 | shared         | data_aggr02, root_aggr02 | SSD
5.0.1   | cluster1-02 | shared         | data_aggr02, root_aggr02 | SSD
5.0.2   | cluster1-02 | shared         | data_aggr02, root_aggr02 | SSD
5.0.3   | cluster1-02 | shared         | data_aggr02, root_aggr02 | SSD
5.0.4   | cluster1-02 | shared         | data_aggr02, root_aggr02 | SSD
5.0.5   | cluster1-02 | shared         | data_aggr02, root_aggr02 | SSD
5.1.12  | cluster1-02 | spare          | Pool0                    | SSD
5.1.13  | cluster1-02 | spare          | Pool0                    | SSD
5.1.14  | cluster1-02 | spare          | Pool0                    | SSD
5.1.15  | cluster1-02 | spare          | Pool0                    | SSD
5.1.16  | cluster1-02 | spare          | Pool0                    | SSD
5.1.17  | cluster1-02 | spare          | Pool0                    | SSD
5.1.18  | cluster1-02 | spare          | Pool0                    | SSD
5.1.19  | cluster1-02 | spare          | Pool0                    | SSD
5.1.20  | cluster1-02 | spare          | Pool0                    | SSD
5.1.21  | cluster1-02 | spare          | Pool0                    | SSD
5.1.22  | cluster1-02 | spare          | Pool0                    | SSD
5.1.23  | cluster1-02 | spare          | Pool0                    | SSD
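For reference, a listing like the one above can be produced with something along these lines (a sketch assuming ONTAP 9's storage disk show and its standard field names):

cluster1::> storage disk show -fields owner,container-type,container-name,type

The original disks show up as shared (root-data partitioned), while the 24 disks from the new shelf show up as whole-disk spares in Pool0.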
It's a tough call - we never recommend mixing disk sizes inside a RAID group, but it is possible, and I also know how much a 3.8TB SSD costs, so I can see why you would want to.
See these two docs for how to do it - basically you replace the parity drive partitions with full drives, add the full drives to the RAID group, and then add the zeroed former parity drives back in as data drives. My take is that if you need instructions from a BURT on how to assign disks, it's not a good idea for production.
Our recommendation would be to add a new RAID group for the unpartitioned 3.8TB drives, with its own double-parity disks, to the existing aggregate. That gives about 77TB per node versus 83TB in a single RAID group.
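In CLI terms that would look something like the sketch below - the syntax assumes ONTAP 9's storage aggregate add-disks, -raidgroup new asks for a separate RAID group built from the whole disks, and -simulate (where your release supports it) previews the layout without committing anything, so treat this as a starting point to verify rather than a definitive procedure:

cluster1::> storage aggregate add-disks -aggregate data_aggr01 -diskcount 11 -raidgroup new -simulate true
cluster1::> storage aggregate add-disks -aggregate data_aggr01 -diskcount 11 -raidgroup new
cluster1::> storage aggregate show-status -aggregate data_aggr01

Using -diskcount 11 leaves the 12th new disk on each node as a spare, and show-status should then list the existing shared RAID group plus the new one. Repeat the add for data_aggr02 on cluster1-02.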
Remembering the hierarchy - disks are grouped into RAID groups, RAID groups into aggregates, and volumes and LUNs live in aggregates.
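You can see that hierarchy from the CLI as well (assuming standard ONTAP 9 commands and the aggregate names in this thread):

cluster1::> storage aggregate show-status -aggregate data_aggr01
cluster1::> volume show -aggregate data_aggr01
cluster1::> lun show -volume *

show-status lists the disks and RAID groups inside the aggregate, volume show lists the volumes living in it, and lun show lists the LUNs inside those volumes.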
The original SSDs are partitioned, with a small partition used for the root aggregate and the remainder for the data aggregate, while the newly added SSDs are not partitioned - so even though they are the same size SSDs, the capacity each contributes to the data aggregate is different.
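If you want to see that split per disk, one way is the partition-ownership view of storage disk show - treat the exact option name and output as an assumption to check against your ONTAP release:

cluster1::> storage disk show -disk 2.10.12 -partition-ownership

For a shared disk this shows the container, root and data partition ownership; a whole-disk spare from the new shelf has no partitions to report.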
Your initial question, if I understand correctly, was whether you can add the 3.84TB SSDs into a RAID group with the 3.78TB partitions. The answer is yes - but you shouldn't, and unless you do things that are not recommended, the new drives will not have their full capacity used; the capacity given up, however, is not significant.
As I understand it, OnCommand System Manager is presenting the option to add the 3.84TB unpartitioned SSDs into the aggregate as another RAID group, with its own parity disks - this is fine, and what we would recommend.
Yes, that is my suggestion, balancing best practice, risk and capacity.
What if you do care about the capacity difference though (for example, you are using 15.3TB SSDs)?
I have an A700s running ONTAP 9.2P2 with 24 internal 15.3TB SSDs. Each node has a data aggr with 23 partitioned drives. I just added a new shelf with 24x 15.3TB drives, but the spares show up as unpartitioned. If I use the unpartitioned spares, each data aggr grows to 244TB. However, if I could use the same spares as partitioned drives, each data aggr would grow to 269TB (based on Synergy). That's a 25TB difference per node.
So my question is, how do I turn the unpartitioned spares into partitioned spares? I would have thought this would be an easy change, but I can't figure out how to do it.
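For anyone checking the same thing, partitioned and whole-disk spares can be told apart from the CLI - a sketch with the node name as a placeholder, and with output columns that vary a little between releases:

cluster1::> storage aggregate show-spare-disks -owner-name <node>
cluster1::> storage disk show -container-type spare -owner <node>

show-spare-disks breaks each spare down by usable size, which makes it easy to spot which spares are partitioned and which are still whole disks.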