ONTAP Hardware

AFF Partitioning in 9.1

coreywanless

I had such good luck with my last post. I'm going for 2 for 2. 😄

 

Current ONTAP 9 documentation states that in an AFF, you should assign disks 0-11 to node 1 and 12-23 to node 2.

 

https://library.netapp.com/ecm/ecm_download_file/ECMLP2495116 (PG 8)

 

 

However, I see that we now have 2 data partitions:

 

xxxxx::*> disk partition show 9.0.1.*
                          Usable  Container     Container
Partition                 Size    Type          Name              Owner
------------------------- ------- ------------- ----------------- -----------------
9.0.1.P1                   1.72TB spare         Pool0             xxxxx
9.0.1.P2                   1.72TB spare         Pool0             xxxxx
9.0.1.P3                  59.72GB aggregate     /xxxrootxxx/plex0/rg0
                                                                  xxxxx

 

 

Should the new best practice be to assign all P1's to node 1 and all P2's to node 2?

 

 

Looking for some guidance.

 

Thanks in advance!

 

22 REPLIES

dirk_ecker

That is correct.

 

Root-Data-Data (R-D2) partitioning divides SSDs into 2 large data partitions and 1 small root partition. One data partition per SSD is assigned to each node in the HA pair.

 

Only one SSD is required for hot spare partitions. This results in more usable capacity.

 

You then create an aggregate with a raid size of 23 (21 data + 2 parity) on each node.
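
For reference, a rough sketch of what that aggregate creation could look like (not part of the original reply; aggregate and node names are placeholders, and the disk count assumes 23 usable data partitions per node after reserving a spare):

::> storage aggregate create -aggregate aggr1_node1 -node node1 -diskcount 23 -maxraidsize 23
::> storage aggregate create -aggregate aggr1_node2 -node node2 -diskcount 23 -maxraidsize 23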

 

I hope this helps!

 

 

andris

The extra data partition in the root-data-data version doesn't really change the disk assignment guidance for the physical (container) disks in an AFF system. We're still saying half of the shelf is owned by node 1 and the other half is owned by node 2.

 

See step 9 in this KB:

https://kb.netapp.com/support/s/article/ka31A00000012lQ/How-to-convert-or-initialize-a-system-for-Root-Data-Data-Partitioning

 

davidrnexon

Hi Andris, the KB article seems to talk about the advanced disk partitioning with 2 partitions, not the newer enhanced ADP 3-partition layout. I've been looking for some more documentation on the 3-partition layout but can't find any links or sections in the 9.1 documentation. Would you have a KB or documentation link for it?

andris

That's the right KB...

 

root-data-data = 3 partitions.

 

Only AFF systems running 9.0+ are supported for root-data-data partitioning.

davidrnexon

Is the best practice for the 3 partitions (root+data+data) to assign them all to the same node, or would you assign 1 data partition to the HA partner, or doesn't it matter?

 

FYI - I'm enquiring about A300s running 9.1

dirk_ecker

The container partition (the disk itself) is assigned to one node. The data partitions are assigned to both nodes, so one node owns the root partition and one data partition, while the other node owns the second data partition.
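
To see how this looks on a running system, the partition ownership of each drive can be listed with a standard ONTAP 9 command like the one below (a sketch; filtering to a single drive with -disk should also work):

::> storage disk show -partition-ownership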

 

You might want to take a look at Understanding root-data partitioning for additional information.

davidrnexon

Hi Dirk,

 

After a re-initialization of the system with 1 shelf the disk assignment is as follows:

 

Disk Slot: 0 - 11 assigned to node 2

Disk Slot: 12 - 23 assigned to node 1

 

Disks 0 - 11, partitions 1,2,3 are assigned to node 2 (partitions 1 and 2 being data and partition 3 being root)

Disks 12 - 23, partitions 1,2,3 are assigned to node 1 (partitions 1 and 2 being data and partition 3 being root)

 

According to the documentation in "Understanding root-data partitioning", it states:

 

Root-data-data partitioning creates one small partition as the root partition and two larger, equally sized partitions for data as shown in the following illustration.

Creating two data partitions enables the same solid-state drive (SSD) to be shared between two nodes and two aggregates.

 

After initialization of the system the 2 data partitions of a disk are not shared amongst nodes.

 

If I remove ownership of 1 partition and try to assign the disk to the opposite node, the system does not allow me:

 

Node1> disk show -n
DISK                OWNER
------------          ------------- 
0b.00.23P2     Not Owned 

 

Node1> disk assign 0b.00.23P2
disk assign: Cannot assign "0b.00.23P2" from this node. Try the command again on node Node2

davidrnexon

I had a support ticket open alongside these posts which pointed me in the right direction. I finally got to the bottom of it, and hopefully this helps out some people:

 

So from the factory, or after a complete system re-initialization, a node will own all partitions of a disk: P1, P2 and P3.

 

If you want to set it up like in the documentation where you assign P2 to the opposite node, you have to do the following:

 

1. Make sure the disk is listed as a spare: ::> storage aggregate show-spare-disks

2. Enter advanced mode: ::> priv set adv

3. Assign the data P2 partition to the opposite node, in this case Node2: ::> storage disk assign -disk 2.0.0 -owner Node2 -data2 true -force true (use -data1 if you wish to reassign the data1 partition; see the verification sketch below)
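
To double-check the result, something along these lines should work (a hedged sketch using the same placeholder disk and node names; P2 of the drive should now show the partner node as its owner):

::> storage disk show -disk 2.0.0 -partition-ownership
::> set admin
::> storage aggregate show-spare-disks -original-owner Node2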

 

FYI - On a 3.8TB SSD you will see around 1.74TB per data partition.

midi

Hello everyone,

 

I want to ask a different question: how can we add 8 SSDs to an existing AFF8040 which has 40 partitioned SSDs?

 

Regards.

aborzenkov

@midi wrote:

I want to ask a different question: how can we add 8 SSDs to an existing AFF8040 which has 40 partitioned SSDs?


The question is not quite clear - do you want to partition new SSDs in the same way as existing ones?

midi

Yes, you are right. Is it possible without re-initializing the system?

 

Regards.

aborzenkov

It is possible, but to my knowledge it involves diag-level commands, so the usual caveat applies: contact your NetApp representative or support for guidance.

davidrnexon

Hi Midi, you can definitely add disks, however you have to be aware of some maximums.

 

A max of 48 drives can be partitioned, 24 per node.

 

How are your drives assigned to your nodes?

midi

Hi davidrnexon,

Thanks for the info. The customer has 40 partitioned disks; when we want to add 8 more disks to the system, how can we get these disks partitioned as well? So far I did not find anything except initializing the system, but the system is in production.

Best Regards.

andris

The disks should automatically be partitioned once they are assigned to the system.

Even if that doesn't happen, a whole spare disk will be automatically partitioned when it's added to an aggregate.

davidrnexon

Hi Midi

 

First, when the disk is inserted into the shelf it is unassigned. You then need to assign the disk to the node that you want to own it. It is then a spare disk.
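
For example, spotting and assigning an unowned drive could look something like this (a sketch; the disk name and owning node are placeholders):

::> storage disk show -container-type unassigned
::> storage disk assign -disk 1.0.12 -owner Node1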

 

When you add a disk to the root aggregate of the node that owns the disk, that disk will then be partitioned into P1, P2, and P3.

 

However, before you add the disk, please make sure you understand how your disks are laid out amongst the nodes, especially taking into consideration my earlier point about the 48-disk maximum, 24 per node. Once the disk is added to the aggregate and partitioned, you cannot simply remove it if you make a mistake.

 

If you are unsure, best to log a support case and let them guide you through it.

Eric_Johnson

As others have noted, the drives will be partitioned when they are added to the aggregate. I would suggest using the -simulate true flag and paying attention to the amount of space it is adding, to make sure it is consistent with adding half disks.
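
A dry run of that might look something like the following (a sketch; the aggregate name and disk count are placeholders):

::> storage aggregate add-disks -aggregate aggr1_node1 -diskcount 2 -simulate true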

LORENZO_CONTI

Hello,

 

could someone clarify whether there could be some problem having, for instance, 12 SSDs with the physical ownership assigned in this way?

0-5 node1

6-11 node2

This is the scheme that is used as the factory default.

Thank you


andris

I'd need more context to answer your particular question... are you referring to a controller's internal disks? Which model?

Are there 24 bays or 12?  Or are you referring to external storage being added (again, 12 or 24 bays)?

 

What I can say as a general comment is that on AFF systems you should lay out drives in partially filled shelves in an "outside-in" manner. E.g., with 12 SSDs in a 24-bay shelf, place them in bays 0-5 and 18-23. Option 9b/ADP initialization will automatically assign the container disk ownership of 0-5 to node A and 18-23 to node B, and then perform root-data-data partitioning.

LORENZO_CONTI

Hello @andris ,

 

I was thinking of an A220 with onboard slots. The system is now in production with 0-5 and 6-11 😞
