I messed up assigning an owner to one of the newly added disks (ONTAP 9.5P6, AFF A200). Possible fix?
2024-03-05 11:07 PM
I added 6x 960 GB SSDs to increase the capacity of our AFF A200. When assigning an owner to one of the disks, I might have misclicked or made some other mistake, because one of them now shows the Container type as Aggregate with the name aggr0(1). When I try to click on it in the GUI, it takes me nowhere. The new disks are 1.0.12 to 1.0.17. Is there any way to fix this?
Console output:
L1-ST1::> cluster show
Node Health Eligibility
--------------------- ------- ------------
L1-ST1-01 true true
L1-ST1-02 true true
2 entries were displayed.
L1-ST1::> aggr status
Aggregate Size Available Used% State #Vols Nodes RAID Status
--------- -------- --------- ----- ------- ------ ---------------- ------------
aggr0 368.4GB 17.85GB 95% online 1 L1-ST1-01 raid_dp,
normal
aggr0_L1_ST1_02_0
368.4GB 17.85GB 95% online 1 L1-ST1-02 raid_dp,
normal
aggr_L1_ST1_01
2.97TB 1.51TB 49% online 4 L1-ST1-01 raid_dp,
normal
aggr_L1_ST1_02
2.97TB 1.42TB 52% online 2 L1-ST1-02 raid_dp,
normal
4 entries were displayed.
L1-ST1::> storage disk show
Usable Disk Container Container
Disk Size Shelf Bay Type Type Name Owner
---------------- ---------- ----- --- ------- ----------- --------- --------
Info: This cluster has partitioned disks. To get a complete list of spare disk capacity use "storage aggregate show-spare-disks".
1.0.0 894.0GB 0 0 SSD shared aggr0_L1_ST1_02_0, aggr_L1_ST1_01, aggr_L1_ST1_02
L1-ST1-02
1.0.1 894.0GB 0 1 SSD shared aggr0, aggr_L1_ST1_01, aggr_L1_ST1_02
L1-ST1-01
1.0.2 894.0GB 0 2 SSD shared aggr0_L1_ST1_02_0, aggr_L1_ST1_01, aggr_L1_ST1_02
L1-ST1-02
1.0.3 894.0GB 0 3 SSD shared aggr0, aggr_L1_ST1_01, aggr_L1_ST1_02
L1-ST1-01
1.0.4 894.0GB 0 4 SSD shared aggr0_L1_ST1_02_0, aggr_L1_ST1_01, aggr_L1_ST1_02
L1-ST1-02
1.0.5 894.0GB 0 5 SSD shared aggr0, aggr_L1_ST1_01, aggr_L1_ST1_02
L1-ST1-01
1.0.6 894.0GB 0 6 SSD shared aggr0_L1_ST1_02_0, aggr_L1_ST1_01, aggr_L1_ST1_02
L1-ST1-02
1.0.7 894.0GB 0 7 SSD shared aggr0, aggr_L1_ST1_01, aggr_L1_ST1_02
L1-ST1-01
1.0.8 894.0GB 0 8 SSD shared aggr0_L1_ST1_02_0, aggr_L1_ST1_01, aggr_L1_ST1_02
L1-ST1-02
1.0.9 894.0GB 0 9 SSD shared aggr0, aggr_L1_ST1_01, aggr_L1_ST1_02
L1-ST1-01
1.0.10 894.0GB 0 10 SSD shared aggr_L1_ST1_01, aggr_L1_ST1_02
L1-ST1-02
1.0.11 894.0GB 0 11 SSD shared - L1-ST1-01
1.0.12 894.0GB 0 12 SSD spare Pool0 L1-ST1-02
1.0.13 894.0GB 0 13 SSD aggregate aggr0(1) L1-ST1-01
1.0.14 894.0GB 0 14 SSD spare Pool0 L1-ST1-02
1.0.15 894.0GB 0 15 SSD spare Pool0 L1-ST1-01
1.0.16 894.0GB 0 16 SSD spare Pool0 L1-ST1-02
1.0.17 894.0GB 0 17 SSD spare Pool0 L1-ST1-01
18 entries were displayed.
2 Replies
Hi,
Did you try to remove the ownership of that disk and reassign it via the CLI (rough sketch below)? Do not use the GUI for something like this.
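Judging by your aggr status output, aggr0(1) is not one of the four aggregates that actually exist on this cluster, so it looks like a leftover label from wherever 1.0.13 was used before rather than anything holding data here. A rough sketch of what I mean from the cluster shell; double-check the disk name and target node against your own output and the ONTAP 9.5 documentation before running anything:

Check the suspect disk first:
L1-ST1::> storage disk show -disk 1.0.13

Release the ownership, then assign it to the node you actually want:
L1-ST1::> storage disk removeowner -disk 1.0.13
L1-ST1::> storage disk assign -disk 1.0.13 -owner L1-ST1-01

If the stale aggr0(1) label is still shown after that, zeroing the spares owned by that node may clear it, but confirm with NetApp support first if you are unsure:
L1-ST1::> storage disk zerospares -owner L1-ST1-01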
Best regards
Kai
Hi,
No, I didn't. I won't risk destroying any aggregates that contain data. Refurbished disks are cheap, I have six free slots, and no further storage expansion is planned for this unit. I just have to wait for one new disk to arrive so that I always have at least one cold spare.
Best regards
Klemen
