Ask The Experts

What is the best practice for a half-populated shelf on an A220?

trds2d2
2,031 Views

Deployed a half-populated A220. When creating the aggregates, 6 of the disks were assigned to node1 and the other 6 to node2. Config Advisor gives me all green. However, I've read in the AFF best-practice documentation for aggregates that disks 0-11 should be assigned to the first node and 12-23 to the second node. My thought is that my A220 should just have one aggregate, with all disks assigned to node1. Am I going to achieve the best performance with how it is currently configured, or would it be better to create one aggregate from the 12 disks with ownership assigned to the first node?
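
For reference, here's roughly how I've been looking at the current layout from the clustershell (treat the field list as approximate for my ONTAP version):

storage disk show -partition-ownership
storage aggregate show -fields node,diskcount,size,availsize

The first command lists which node owns each data partition, and the second shows which node each aggregate lives on and how many disks/partitions it uses.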

1 ACCEPTED SOLUTION

AlexDawson
2,001 Views

Hi There!

 

Performance scales with both storage and CPU. Volume writes are handled by a single thread per volume.

 

By having all the SSDs on one node, you will be "limited" by the CPU of that node.

 

But an AFF A220 has a 12-core CPU in each node.

 

So if you split it instead, you get more parallelism, as workloads are spread across both CPUs.
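
If you want to check whether one controller's CPU is actually the limiting factor, you can watch per-node CPU from the nodeshell, for example (node name is a placeholder):

system node run -node <node1> -command sysstat -x 1

That prints CPU utilisation and op counters once per second; compare the two nodes while the workload is running.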

 

But then each node is limited to 6 SSDs, although 6 SSDs are still very fast.

 

So really it comes down to workload. If it is something that will parallelise well, splitting it may be helpful; but if it's a single monolithic workload, or a small number of workloads, more disks behind it might be better.
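
Just as a rough sketch of the two layouts (aggregate names are made up, and on a partitioned system like this the diskcount/partition handling depends on your ONTAP release, so treat it as illustrative only):

Split across both nodes, roughly what you have now:
storage aggregate create -aggregate data_n1 -node <node1> -diskcount 6
storage aggregate create -aggregate data_n2 -node <node2> -diskcount 6

Everything behind one node (the data partitions would first need to be owned by node1):
storage aggregate create -aggregate data_n1 -node <node1> -diskcount 12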

 

Slot assignment matters for automatic assignment of spare disks on whole disk systems, but with root-data-data partitioned systems like the AFF A220, it's less important. You are always welcome to turn the system off and move disks around physically if you'd prefer them in different slots. There are aggregate IDs on each disk/partition which are read at boot time to assemble the disk layouts.
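
If you're curious how automatic assignment is currently set up on your pair, the per-node policy is visible with:

storage disk option show -fields autoassign,autoassign-policy

That shows whether auto-assign is enabled on each node and whether it assigns per stack, shelf or bay (exact policy values depend on your ONTAP release).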

 

Short answer - it's all good. 

 

Hope this helps, if not technically, then at least with your comfort factor for the system as deployed 🙂
