
ONTAP Discussions

Disk partitioning on AFF-A220 with additional shelf

DSI_SR

Hello,

 

We bought an AFF-A220 with 24 x 960 GB disks (X371_S1643960ATE), connected to a shelf with 24 disks of the same model.

This AFF has been added to an existing 6-node cluster (AFF-A200, FAS2554) running ONTAP 9.5P5.

 

On the controller shelf, the disks are partitioned with the root-data1-data2 scheme, which is expected.

We created 2 aggregates, one spanning the P1 partitions, the other the P2 partitions:

 

storage1::*> disk partition show 5.0.0.*
                          Usable  Container     Container
Partition                 Size    Type          Name              Owner
------------------------- ------- ------------- ----------------- -----------------
5.0.0.P1                  435.3GB aggregate     /ssd4/plex0/rg0   storage1-02
5.0.0.P2                  435.3GB aggregate     /ssd3/plex0/rg0   storage1-01
5.0.0.P3                  23.39GB aggregate     /aggr0_storage1_01_0/plex0/rg0
                                                                  storage1-01

 

(similar output for 22 other disks; the last one is a spare disk)

 

The problem is with the expansion shelf: its disks were not partitioned, and it seems the only way to have them partitioned is to add them to the root aggregate (as seen in the output of "storage aggregate add-disks" with the "-simulate" flag).
We would like to avoid having the root aggregates use disks from the expansion shelf.

If we try to add the disks to one of the data aggregates, the whole disks would be used, without partitioning.
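For reference, the check mentioned above can be run with the "-simulate" flag, which reports what "add-disks" would do without committing anything (the aggregate name and disk list below just follow this thread's naming and are illustrative):

storage1::*> storage aggregate add-disks -aggregate ssd3 -disklist 5.1.0,5.1.1 -simulate true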

 

As a workaround, we were able to manually partition the disks by running "disk partition -n 2 <disk_ref>" in the nodeshell, which gives us this configuration:

 

storage1::*> disk partition show 5.1.0.*
                          Usable  Container     Container
Partition                 Size    Type          Name              Owner
------------------------- ------- ------------- ----------------- -----------------
5.1.0.P1                  447.0GB aggregate     /ssd6/plex0/rg0   storage1-02
5.1.0.P2                  447.0GB aggregate     /ssd5/plex0/rg0   storage1-01
2 entries were displayed.

Then we are able to create two additional data aggregates spanning the P1 and P2 partitions.
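Summarizing the workaround for anyone following along (the names are the ones used in this thread; the nodeshell "disk partition" command is a diagnostic-level step, so treat this as a sketch rather than a documented procedure):

storage1::*> system node run -node storage1-01
storage1-01> disk partition -n 2 <disk_ref>
storage1-01> exit
storage1::*> storage aggregate create -aggregate ssd5 -node storage1-01 -disklist 5.1.0.P2,5.1.1.P2 -raidtype raid_dp

(repeat the partitioning for each disk of the shelf, then create the second aggregate on the P1 partitions owned by the partner node)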


For us, this is an ideal setup: there is no P3 partition for the root aggregate here (so we don't lose about 24 GB per disk), and the root aggregates remain on the controller shelf.

 

The question is: is this a supported setup? If not, what would be a clean way to get data partitioning on the expansion shelf?

 

Thanks in advance!

3 REPLIES

GidonMarcus

Hi.

 

I don't see why that would be unsupported; it looks completely legitimate. (I'm not a NetApp employee and can't confirm supportability, but I don't see a reason it wouldn't be.) Try to keep the partition sizes identical within a RAID group, though even mixing them is supported; it will just cause you to lose some space.

 

As for how it should be done: reading some KBs, it seems that if you add disks to an existing partition-based RAID group, they should be partitioned automatically. Did you change the RAID size of the aggregate before running the simulation?
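In CLI terms, that check would look something like this (the aggregate name follows the thread and the RAID size value of 24 is just an example):

storage1::*> storage aggregate modify -aggregate ssd3 -maxraidsize 24
storage1::*> storage aggregate add-disks -aggregate ssd3 -diskcount 23 -simulate true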

 

Gidi

Gidi Marcus (Linkedin) - Storage and Microsoft technologies consultant - Hydro IT LTD - UK

Martin_OSS

Hi,

 

I have an additional question related to this topic. If I partition the disks in the additional shelf, why is it not possible to add the P1 and P2 partitions to the same aggregate?

 

Error: command failed: Addition of disks would fail for aggregate "aggr1_ssd_01" on node "fe00nas05cp-01". Reason: 16 disks needed from Pool0, but no matching disks are available in that pool.

 

Both partitions are in the same pool (Pool0). With the internal SSDs (on an A700s), having both data partitions in the same aggregate is possible.

GidonMarcus

I believe that's to prevent a single physical disk failure from becoming a two-disk failure from the RAID perspective.

Gidi Marcus (Linkedin) - Storage and Microsoft technologies consultant - Hydro IT LTD - UK