ONTAP Discussions

ONTAP 9.x root-data-data partitioning discussion

Arne

Hello NTAP_community

 

I have an AFF A300 running ONTAP 9.3P5; it was delivered with a full shelf of 3.8TB SSDs. The system uses root-data-data partitioning. After 6 months as our VMware storage we need extra headroom, so I have acquired half a shelf of 3.8TB SSDs.

 

But I can't seem to find a way to get them partitioned in the same manner. I assigned them to node1 via:

cluster1::> storage disk assign -disk 1.2.* -owner node1

 

I have tried to use:

cluster1::> storage disk assign -disk 1.2.* -owner node1 -data1 true
Error: invalid argument "-data1"
 
I can't seem to find where to get this fixed. There is one spare disk which has the correct root-data-data partitioning; can this disk be used to force RAID group creation?
 
I would like to spread the load between the two nodes. If I use the new disks unpartitioned, node1 will have approximately twice the load of node2.

JGPSHNTAP

I'm not sure whether it's possible to partition a half shelf. You might have to open a case.

 

But partitions need to be viewed in advanced or diagnostic mode:

 

set d

partition show
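
For reference, the spelled-out form of those shorthand commands (the cluster1 prompt is taken from the original post; the full command path behind "partition show" is storage disk partition show, available at advanced privilege):

cluster1::> set -privilege advanced
cluster1::*> storage disk partition show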

Arne

Thank you!

 

I see that under advanced mode I get the -data1 option when using tab completion:

 

cluster1::> storage disk assign -data1 true -all true -node node1 -owner node1

 

I'm opening a case with NetApp Support; I'm a bit hesitant to do this on my production system without fully understanding all the consequences.

 

/Arne



Damien_Queen

Root-data-data partitioning is supported with up to 48 drives, so you should be able to do it.

 

Assign your physical disks to the controllers and then add them to your aggregates without trying to partition them first. The system should partition them automatically when you add them to an existing aggregate.

Use the -simulate true argument first to check that the system is going to do it properly and as you expect.
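
A minimal sketch of that sequence, reusing the disk names and aggregate name from this thread (the exact disk list is an example; adjust it to your own system):

cluster1::> storage disk assign -disk 1.2.* -owner node1
cluster1::> storage aggregate add-disks -aggregate aggr1 -disklist 1.2.0,1.2.1,1.2.2 -simulate true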

 

Just to make sure:

NetApp recommends keeping all the RAID groups in your aggregate the same size, except for the last one, which should be at least half the size of the previous RAID groups. I.e., if you have an aggregate consisting of one RAID group (23 drives: 21 data + 2 parity), then you should add at least 12 SSDs to the next RAID group so that the last RAID group is no less than half the size of the previous one.

And it looks like you should meet those requirements. Just to be clear: do not add only 3 drives as a new RAID group alongside the existing 23.
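
To see the current RAID group layout and disk counts before deciding how many drives to add, something like this should work (aggr1 is the aggregate name used later in this thread):

cluster1::> storage aggregate show-status -aggregate aggr1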

Arne

As I can't seem to get my system to apply root-data-data partitioning (ADPv2, I guess?) to the new disks on its own, here is what I propose to do.

 

I'm trying to get validation from NetApp on this.

 

clus1::> storage aggregate add-disks -aggregate aggr1 -raidgroup new -disklist 6.1.23,6.2.0,6.2.1,6.2.2,6.2.3,6.2.4,6.2.5,6.2.6,6.2.7,6.2.8,6.2.9,6.2.10 -simulate true

 

Disks would be added to aggregate "aggr1" on node "clus1" in the following manner:

 

First Plex

 

RAID Group rg1, 12 disks (block checksum, raid_dp)
Position   Disk                      Type       Size
---------- ------------------------- ---------- ---------------
shared     6.1.23                    SSD        -
shared     6.2.0                     SSD        -
shared     6.2.1                     SSD        1.72TB
shared     6.2.2                     SSD        1.72TB
shared     6.2.3                     SSD        1.72TB
shared     6.2.4                     SSD        1.72TB
shared     6.2.5                     SSD        1.72TB
shared     6.2.6                     SSD        1.72TB
shared     6.2.7                     SSD        1.72TB
shared     6.2.8                     SSD        1.72TB
shared     6.2.9                     SSD        1.72TB
shared     6.2.10                    SSD        1.72TB

 

Aggregate capacity available for volume use would be increased by 15.48TB.

The following disks would be partitioned: 6.2.0, 6.2.1, 6.2.2, 6.2.3, 6.2.4, 6.2.5, 6.2.6, 6.2.7, 6.2.8, 6.2.9, 6.2.10.

 

clus1::>
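
If the simulated layout looks right, my understanding is that rerunning the same command without -simulate true performs the actual addition:

clus1::> storage aggregate add-disks -aggregate aggr1 -raidgroup new -disklist 6.1.23,6.2.0,6.2.1,6.2.2,6.2.3,6.2.4,6.2.5,6.2.6,6.2.7,6.2.8,6.2.9,6.2.10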

 

Any thoughts?

mohammedimrankhan

Hi Arne,

 

Did you manage to solve this?

I am in a similar situation: I am only adding 2 drives to a shelf that currently has only 12 drives.

Arne

I have used the solution described twice. It works without problems. Keep in mind that the limit is 48 drives. You can use the existing spare drive to partition the new drives, as I did in my example.
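
After the add, it's worth confirming what spare capacity is left; something like this should show the remaining spares and spare partitions:

clus1::> storage aggregate show-spare-disks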

 

Best of luck to you.

 

With regards

Arne


hal

Doesn't this result in what was a spare (6.1.23) becoming a disk within the newly created rg1?  Are you left without a spare?
