AFF

Different number of disks in the two controllers' root aggregates. Why? The system assigned them automatically.

xywang1987

 CD-OA-A400-01::> run -node CD-OA-A400-01-A sysconfig -r
Aggregate CD_OA_A400_01_A_aggr0 (online, raid_dp) (block checksums)
Plex /CD_OA_A400_01_A_aggr0/plex0 (online, normal, active, pool0)
RAID group /CD_OA_A400_01_A_aggr0/plex0/rg0 (normal, block checksums)

RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
dparity 0a.00.12P3 0a 0 12 SA:A 0 SSD N/A 31935/65402880 31943/65419264
parity 0d.00.13P3 0d 0 13 SA:B 0 SSD N/A 31935/65402880 31943/65419264
data 0a.00.14P3 0a 0 14 SA:A 0 SSD N/A 31935/65402880 31943/65419264
data 0d.00.15P3 0d 0 15 SA:B 0 SSD N/A 31935/65402880 31943/65419264
data 0a.00.16P3 0a 0 16 SA:A 0 SSD N/A 31935/65402880 31943/65419264


Pool1 spare disks (empty)

 

CD-OA-A400-01::> run -node CD-OA-A400-01-B aggr status -r
Aggregate CD_OA_A400_01_B_aggr0 (online, raid_dp) (block checksums)
Plex /CD_OA_A400_01_B_aggr0/plex0 (online, normal, active, pool0)
RAID group /CD_OA_A400_01_B_aggr0/plex0/rg0 (normal, block checksums)

RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
dparity 0a.00.0P3 0a 0 0 SA:B 0 SSD N/A 31935/65402880 31943/65419264
parity 0d.00.1P3 0d 0 1 SA:A 0 SSD N/A 31935/65402880 31943/65419264
data 0a.00.2P3 0a 0 2 SA:B 0 SSD N/A 31935/65402880 31943/65419264
data 0d.00.3P3 0d 0 3 SA:A 0 SSD N/A 31935/65402880 31943/65419264
data 0a.00.4P3 0a 0 4 SA:B 0 SSD N/A 31935/65402880 31943/65419264
data 0d.00.5P3 0d 0 5 SA:A 0 SSD N/A 31935/65402880 31943/65419264
data 0a.00.6P3 0a 0 6 SA:B 0 SSD N/A 31935/65402880 31943/65419264
data 0d.00.7P3 0d 0 7 SA:A 0 SSD N/A 31935/65402880 31943/65419264


Pool1 spare disks (empty)

1 ACCEPTED SOLUTION

TMACMD

It's your slots.

On a fresh AFF system, node 1 grabs the left-hand disks (bays 0-11) and node 2 grabs the right-hand disks (bays 12-23).

It seems you loaded the shelf toward the left, so node 1 got bays 0-11 and node 2 got bays 12-16.
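If you want to confirm how the disks were assigned, the auto-assign policy and current ownership can be checked from the clustershell. A sketch (output will vary with your system):

CD-OA-A400-01::> storage disk option show -fields autoassign,autoassign-policy
CD-OA-A400-01::> storage disk show -fields owner,container-type

The second command shows which node owns each disk and whether it is in an aggregate, a spare, or shared (partitioned), which makes the left/right split visible at a glance.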


3 REPLIES

xywang1987

Also, node A has 3 unpartitioned disks. The ONTAP version is 9.8P12.

 

CD-OA-A400-01::storage disk*> run -node CD-OA-A400-01-A aggr status -s

Pool1 spare disks (empty)

Pool0 spare disks

RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
Spare disks for block checksum
spare 0d.00.17P3 0d 0 17 SA:B 0 SSD N/A 31935/65402880 31943/65419264 (fast zeroed)
spare 0a.00.0P1 0a 0 0 SA:A 0 SSD N/A 1815300/3717734912 1815308/3717751296 (fast zeroed)
spare 0a.00.2P1 0a 0 2 SA:A 0 SSD N/A 1815300/3717734912 1815308/3717751296 (fast zeroed)
spare 0a.00.4P1 0a 0 4 SA:A 0 SSD N/A 1815300/3717734912 1815308/3717751296 (fast zeroed)
spare 0a.00.6P1 0a 0 6 SA:A 0 SSD N/A 1815300/3717734912 1815308/3717751296 (fast zeroed)
spare 0a.00.8P1 0a 0 8 SA:A 0 SSD N/A 1815300/3717734912 1815308/3717751296 (fast zeroed)
spare 0a.00.12P1 0a 0 12 SA:A 0 SSD N/A 1815300/3717734912 1815308/3717751296 (fast zeroed)
spare 0a.00.14P1 0a 0 14 SA:A 0 SSD N/A 1815300/3717734912 1815308/3717751296 (fast zeroed)
spare 0a.00.16P1 0a 0 16 SA:A 0 SSD N/A 1815300/3717734912 1815308/3717751296 (fast zeroed)
spare 0d.00.1P1 0d 0 1 SA:B 0 SSD N/A 1815300/3717734912 1815308/3717751296 (fast zeroed)
spare 0d.00.3P1 0d 0 3 SA:B 0 SSD N/A 1815300/3717734912 1815308/3717751296 (fast zeroed)
spare 0d.00.5P1 0d 0 5 SA:B 0 SSD N/A 1815300/3717734912 1815308/3717751296 (fast zeroed)
spare 0d.00.7P1 0d 0 7 SA:B 0 SSD N/A 1815300/3717734912 1815308/3717751296 (fast zeroed)
spare 0d.00.13P1 0d 0 13 SA:B 0 SSD N/A 1815300/3717734912 1815308/3717751296 (fast zeroed)
spare 0d.00.15P1 0d 0 15 SA:B 0 SSD N/A 1815300/3717734912 1815308/3717751296 (fast zeroed)
spare 0d.00.17P1 0d 0 17 SA:B 0 SSD N/A 1815300/3717734912 1815308/3717751296 (fast zeroed)
spare 0a.00.10 0a 0 10 SA:A 0 SSD N/A 3662580/7500964352 3662830/7501476528 (not zeroed)
spare 0d.00.9 0d 0 9 SA:B 0 SSD N/A 3662580/7500964352 3662830/7501476528 (not zeroed)
spare 0d.00.11 0d 0 11 SA:B 0 SSD N/A 3662580/7500964352 3662830/7501476528 (not zeroed)

Ontapforrum

Is this a new system? What model and ONTAP version? Ideally, just 3 disks are enough for the root aggregate. It looks like more disks were added to grow the root aggregate, but that serves no purpose: the root aggregate is dedicated to the node's root volume only, and ONTAP prevents you from creating other volumes in it.


You will need to re-initialize the system to use the correct set of disks. That would leave 7 spare disks: you could use 5 to create a data aggregate and keep 2 as spares.
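In outline, re-initializing is done from the boot menu. A sketch only; this wipes all data on the node's disks, so double-check NetApp's documentation for your exact release before running it:

CD-OA-A400-01::> system node halt -node CD-OA-A400-01-A -inhibit-takeover true
LOADER> boot_ontap menu
(from the boot menu, select option 4, "Clean configuration and initialize all disks")

Make sure the disks are physically populated in the intended bays first, since auto-assignment during initialization follows the slot layout described in the accepted solution.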

 

If you wish, you can go for ADP: this lets you extract more space from each disk through partitioning.

 

ADP: With Advanced Disk Partitioning (ADP), a physical disk is divided into a small number of partitions at the storage level, and each partition is treated by RAID and HA as a logical disk. Aggregates are provisioned from partitions instead of from whole disks.
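To see whether ADP is in effect and how the partitions are laid out, something like the following can be run from the clustershell (a sketch; substitute your own aggregate name):

CD-OA-A400-01::> storage disk show -partition-ownership
CD-OA-A400-01::> storage aggregate show-status -aggregate CD_OA_A400_01_A_aggr0

Partitioned disks appear with suffixes like P1 (data partition) and P3 (root partition), which is exactly what the sysconfig -r output above is showing.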

 

https://kb.netapp.com/Advice_and_Troubleshooting/Data_Storage_Software/ONTAP_OS/What_are_the_steps_followed_to__configure_advanced_drive_partitioning_...

 

https://kb.netapp.com/Advice_and_Troubleshooting/Data_Storage_Software/ONTAP_OS/How_to_identify_if_ADP_is_configured_in_ONTAP

 
