
AFF8080 - Shelf and a half, then Adding a shelf

J-L-B

I have an AFF8080, originally with a shelf and a half: 18 x 3.8TB SSDs assigned to each node. Each node currently has a partitioned 16-disk root_aggr and a 17-disk data_aggr, leaving 1 spare.

 

I added a full 24-disk shelf and assigned 12 disks to node 1 and 12 disks to node 2.

 

Should I add the disks to the same partitioned RAID group on each node, or create a new non-partitioned RAID group (11 disks plus 1 spare) in the same aggregate? I just want to expand data_aggr01 and data_aggr02, not create a new aggregate on each node.

 

disk     owner        container-type  container-name            type
-------  -----------  --------------  ------------------------  ----
2.10.12  cluster1-01  shared          data_aggr01, root_aggr01  SSD
2.10.13  cluster1-01  shared          data_aggr01, root_aggr01  SSD
2.10.14  cluster1-01  shared          data_aggr01, root_aggr01  SSD
2.10.15  cluster1-01  shared          data_aggr01, root_aggr01  SSD
2.10.16  cluster1-01  shared          data_aggr01, root_aggr01  SSD
2.10.17  cluster1-01  shared          data_aggr01, root_aggr01  SSD
2.10.18  cluster1-01  shared          data_aggr01, root_aggr01  SSD
2.10.19  cluster1-01  shared          data_aggr01, root_aggr01  SSD
2.10.20  cluster1-01  shared          -                         SSD
2.10.21  cluster1-01  shared          data_aggr01, root_aggr01  SSD
2.10.22  cluster1-01  shared          data_aggr01               SSD
2.10.23  cluster1-01  shared          data_aggr01, root_aggr01  SSD
5.0.6    cluster1-01  shared          data_aggr01, root_aggr01  SSD
5.0.7    cluster1-01  shared          data_aggr01, root_aggr01  SSD
5.0.8    cluster1-01  shared          data_aggr01, root_aggr01  SSD
5.0.9    cluster1-01  shared          data_aggr01, root_aggr01  SSD
5.0.10   cluster1-01  shared          data_aggr01, root_aggr01  SSD
5.0.11   cluster1-01  shared          data_aggr01, root_aggr01  SSD
5.1.0    cluster1-01  spare           Pool0                     SSD
5.1.1    cluster1-01  spare           Pool0                     SSD
5.1.2    cluster1-01  spare           Pool0                     SSD
5.1.3    cluster1-01  spare           Pool0                     SSD
5.1.4    cluster1-01  spare           Pool0                     SSD
5.1.5    cluster1-01  spare           Pool0                     SSD
5.1.6    cluster1-01  spare           Pool0                     SSD
5.1.7    cluster1-01  spare           Pool0                     SSD
5.1.8    cluster1-01  spare           Pool0                     SSD
5.1.9    cluster1-01  spare           Pool0                     SSD
5.1.10   cluster1-01  spare           Pool0                     SSD
5.1.11   cluster1-01  spare           Pool0                     SSD
2.10.0   cluster1-02  shared          data_aggr02, root_aggr02  SSD
2.10.1   cluster1-02  shared          data_aggr02, root_aggr02  SSD
2.10.2   cluster1-02  shared          data_aggr02, root_aggr02  SSD
2.10.3   cluster1-02  shared          data_aggr02, root_aggr02  SSD
2.10.4   cluster1-02  shared          data_aggr02, root_aggr02  SSD
2.10.5   cluster1-02  shared          data_aggr02, root_aggr02  SSD
2.10.6   cluster1-02  shared          data_aggr02, root_aggr02  SSD
2.10.7   cluster1-02  shared          data_aggr02, root_aggr02  SSD
2.10.8   cluster1-02  shared          data_aggr02, root_aggr02  SSD
2.10.9   cluster1-02  shared          data_aggr02               SSD
2.10.10  cluster1-02  shared          data_aggr02, root_aggr02  SSD
2.10.11  cluster1-02  shared          -                         SSD
5.0.0    cluster1-02  shared          data_aggr02, root_aggr02  SSD
5.0.1    cluster1-02  shared          data_aggr02, root_aggr02  SSD
5.0.2    cluster1-02  shared          data_aggr02, root_aggr02  SSD
5.0.3    cluster1-02  shared          data_aggr02, root_aggr02  SSD
5.0.4    cluster1-02  shared          data_aggr02, root_aggr02  SSD
5.0.5    cluster1-02  shared          data_aggr02, root_aggr02  SSD
5.1.12   cluster1-02  spare           Pool0                     SSD
5.1.13   cluster1-02  spare           Pool0                     SSD
5.1.14   cluster1-02  spare           Pool0                     SSD
5.1.15   cluster1-02  spare           Pool0                     SSD
5.1.16   cluster1-02  spare           Pool0                     SSD
5.1.17   cluster1-02  spare           Pool0                     SSD
5.1.18   cluster1-02  spare           Pool0                     SSD
5.1.19   cluster1-02  spare           Pool0                     SSD
5.1.20   cluster1-02  spare           Pool0                     SSD
5.1.21   cluster1-02  spare           Pool0                     SSD
5.1.22   cluster1-02  spare           Pool0                     SSD
5.1.23   cluster1-02  spare           Pool0                     SSD
1 ACCEPTED SOLUTION

AlexDawson

Yes, that is my suggestion, balancing best practice, risk and capacity.


7 REPLIES

AlexDawson

It's a tough call. We never recommend mixing disk sizes inside a RAID group, but it's possible, and I also know how much a 3.8TB SSD costs, so I can see why you would want to.

 

See these two docs for how to do it: basically you need to replace the parity drive partitions with whole drives, add the whole drives to the RAID group, and then add the zeroed former parity drives back in as data drives. My take is that if you need instructions from a BURT on how to assign disks, it's not a good idea for production.

 

Our recommendation would be to add a new RAID group for the unpartitioned 3.8TB drives, with associated double parity disks, to the existing aggregate. That would give about 77TB per node vs 83TB in a single RAID group.
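For reference, that expansion can also be driven from the CLI. A hedged sketch using the aggregate names from this thread (syntax is from the `storage aggregate add-disks` command; verify the options against your ONTAP release before running):

```
cluster1::> storage aggregate add-disks -aggregate data_aggr01 -diskcount 11 -raidgroup new
cluster1::> storage aggregate add-disks -aggregate data_aggr02 -diskcount 11 -raidgroup new
```

The `-raidgroup new` argument puts the added disks into a fresh RAID group instead of growing an existing one, which is what keeps the unpartitioned drives separate from the shared ones.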

J-L-B
All of my SSDs are 3.8TB. Current and new are all the same, so there wouldn't be any mixing of drives; they are all the same model and firmware as well.

Looking at System Manager, it allows me to expand the data aggrs to 28 disks. It also allows me to add 11 disks plus a spare as a new non-partitioned RAID group. I'm not concerned about the capacity difference.

What's best for performance and efficiency? And what are the limits of a "partitioned" RAID group? Sorry, AFF and ADP are still new tech to me.

AlexDawson

Remember the hierarchy: disks are grouped into RAID groups, RAID groups into aggregates, and volumes and LUNs live in aggregates.

 

The initially added SSDs are partitioned, with a small partition used for the root aggregate and the remainder for the data aggregate, while the newly added SSDs are not partitioned. So while they are the same size SSDs, the capacity each contributes to the data aggregate is different.
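You can see that split from the CLI. A sketch (real commands, output omitted here); each shared disk should show up in both aggregates, with a different usable size in each:

```
cluster1::> storage aggregate show-status -aggregate data_aggr01
cluster1::> storage aggregate show-status -aggregate root_aggr01
```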

 

Your initial question, if I understand correctly, was whether you can add the 3.84TB whole SSDs into a RAID group with the 3.78TB data partitions. The answer is yes, but you shouldn't; and unless you do things that are not recommended, the new drives will not have their full capacity used, though the amount of capacity lost is not significant.
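The capacity trade-off can be sketched with some simplified arithmetic (Python, illustrative only; the 3.78TB partition size and disk counts are from this thread, and real usable sizes differ from these raw numbers):

```python
# Every member of a RAID group is right-sized down to the smallest
# member, and RAID-DP consumes two members per group for parity.
def raid_dp_data_tb(member_sizes_tb, parity=2):
    return (len(member_sizes_tb) - parity) * min(member_sizes_tb)

# Option A: one RAID group mixing 17 x 3.78TB data partitions
# with 11 x 3.84TB whole disks (the "you can, but shouldn't" case).
mixed = raid_dp_data_tb([3.78] * 17 + [3.84] * 11)

# Option B: keep the 11 whole disks in their own RAID group
# (costs two extra parity members, avoids right-sizing them).
separate = raid_dp_data_tb([3.78] * 17) + raid_dp_data_tb([3.84] * 11)

# Right-sizing loss in option A: 11 disks x 0.06TB each.
right_sizing_loss = (3.84 - 3.78) * 11

print(round(mixed, 2), round(separate, 2), round(right_sizing_loss, 2))
```

Note the single mixed group actually yields more raw data capacity (two fewer parity members, and only a trivial right-sizing loss), which is why the argument for the separate RAID group is best practice and supportability rather than capacity.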

 

As I understand it, OnCommand System Manager is presenting the option to add the 3.84TB unpartitioned SSDs into the aggregate as another RAID group, with its own parity disks - this is fine, and what we would recommend.

J-L-B
Quick note: running ONTAP 8.3.2

J-L-B
OK, I was thinking that the option to expand to 28 disks in the RAID group would automatically partition the new disks if they were added to the same RAID group. Sounds like the best bet is to add 11 of the 12 new spares as a non-partitioned RAID group, the end result being data_aggr01 with a 17-disk partitioned (shared) RAID group plus an 11-disk non-partitioned RAID group. If I buy another shelf later, I could add it to the second stack with shelf 10 and then expand that new RAID group out to 22 disks. Am I understanding correctly?

AlexDawson

Yes, that is my suggestion, balancing best practice, risk and capacity.
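For that later expansion, the added disks would go into the existing non-partitioned RAID group rather than a new one. A hedged sketch (the RAID group name rg1 is hypothetical; check `storage aggregate show-status` for the real name, and note that SSD RAID-DP groups top out at 28 disks, so 22 fits):

```
cluster1::> storage aggregate show-status -aggregate data_aggr01
cluster1::> storage aggregate add-disks -aggregate data_aggr01 -diskcount 11 -raidgroup rg1
```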

gtedd

What if you do care about the capacity difference though (for example, you are using 15.3TB SSDs)?

 

I have an A700s running ONTAP 9.2P2 with 24 internal 15.3TB SSDs. Each node has a data aggr with 23 partitioned drives. I just added a new shelf with 24 x 15.3TB drives, but the spares show up as unpartitioned. If I use the unpartitioned spares, each data aggr grows to 244TB; if I could use the same spares as partitioned drives, it would grow to 269TB (based on Synergy). That's a 25TB difference per node.

 

So my question is: how do I turn the unpartitioned spares into partitioned spares? I would have thought this was an easy change, but I can't figure out how to do it.
