I have an AFF A300 running ONTAP 9.3P5; it was delivered with a full shelf of 3.8TB SSDs. The system uses root-data-data partitioning. After six months as our VMware storage we need extra headroom, so I have acquired half a shelf of 3.8TB SSDs.
But I can't seem to find a way to get them partitioned in the same manner. I assigned them to node1 via:
cluster1::> storage disk assign -disk 1.2.* -owner node1
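To confirm the assignment, a command like this (a sketch; the available filters and output columns vary by release) should now show the new disks as spares owned by node1:

cluster1::> storage disk show -owner node1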
I see that under advanced mode I get a -data1 option when using tab completion, so I have tried:

cluster1::> storage disk assign -data1 true -all true -node node1 -owner node1
I'm opening a case with NetApp Support, as I'm a bit reserved about doing this on my production system without fully understanding the consequences.
I'm not sure whether there is the ability to partition a half shelf. You might have to open a case. Note that partitions need to be viewed in advanced or diag mode.
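For example, something like this (assuming the clustershell) switches to advanced privilege and lists the partitions:

cluster1::> set -privilege advanced
cluster1::*> storage disk partition show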
Root-data-data partitioning is supported with up to 48 drives, so you should be able to do it.
Assign your physical disks to the controllers and then add them to your aggregates without trying to partition them first; the system should partition them automatically as it adds them to an existing aggregate.
Use the -simulate true argument first to check that the system is going to do it properly and as you expect.
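As a rough sketch (the aggregate name aggr1 and the disk list are placeholders for your own values):

cluster1::> storage aggregate add-disks -aggregate aggr1 -disklist 1.2.0,1.2.1 -simulate true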
Just to make sure:
NetApp recommends having all the RAID groups in your aggregate be the same size, except for the last one, which should be at least half the size of the previous RAID groups. I.e. if you have an aggregate consisting of one RAID group (23 drives: 21 data + 2 parity), then you should add at least 12 SSDs to the next RAID group (23 / 2 = 11.5, rounded up to 12) to make your last RAID group no less than half of the previous one.
And it looks like you should meet those requirements. Just to clarify: do not add only 3 drives to a new RAID group alongside the existing 23.
As I can't seem to get my system to use root-data-data partitioning (ADPv2, I guess?), here is what I propose to do. I'm trying to get validation from NetApp on this:
clus1::> storage aggregate add-disks -aggregate aggr1 -raidgroup new -disklist 6.1.23,6.2.0,6.2.1,6.2.2,6.2.3,6.2.4,6.2.5,6.2.6,6.2.7,6.2.8,6.2.9,6.2.10 -simulate true
Disks would be added to aggregate "aggr1" on node "clus1" in the following manner:
RAID Group rg1, 12 disks (block checksum, raid_dp)
Position Disk Type Size
---------- ------------------------- ---------- ---------------
shared 6.1.23 SSD -
shared 6.2.0 SSD -
shared 6.2.1 SSD 1.72TB
shared 6.2.2 SSD 1.72TB
shared 6.2.3 SSD 1.72TB
shared 6.2.4 SSD 1.72TB
shared 6.2.5 SSD 1.72TB
shared 6.2.6 SSD 1.72TB
shared 6.2.7 SSD 1.72TB
shared 6.2.8 SSD 1.72TB
shared 6.2.9 SSD 1.72TB
shared 6.2.10 SSD 1.72TB
Aggregate capacity available for volume use would be increased by 15.48TB.
The following disks would be partitioned: 6.2.0, 6.2.1, 6.2.2, 6.2.3, 6.2.4, 6.2.5, 6.2.6, 6.2.7, 6.2.8, 6.2.9, 6.2.10.
I have used the solution described above twice, and it works without problems. Keep in mind that the limit is 48 drives. You can use the existing spare drive to partition the new drives, as I did in my example.
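Once the command has been run for real (without -simulate true), the added capacity can be checked with something like:

cluster1::> storage aggregate show -aggregate aggr1 -fields availsize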
Best of luck to you.