ONTAP Hardware
Hi,
I just finished installing ONTAP 9.7P6 on a FAS2740. What I've noticed is that there is an aggregate I can't seem to delete or modify. It does not show up when I type aggr show (only the root aggr does), but I can see it when I type storage disk show -container-name *aggr name*
It's bugging me so much because it's using 13 TB of disk space as RAID-TEC.
Please help.
See pictures.
Did you intentionally deploy this without ADP?
Also, were any of the disks moved from another controller?
What's the output of the following:
node run -node * aggr status
set d; debug vreport show
Hi there,
I'm sorry, I'm new to NetApp so I'm not sure about ADP. Here is the printout for
node run -node * aggr status
2 entries were acted on.
Node: SeClus_1
Aggr State Status Options
aggr0_SeClus_1 online raid_dp, aggr root, nosnap=on
64-bit
SeClus01_1_NL_SAS_1 failed raid_tec, aggr raidsize=14
partial
64-bit
Node: SeClus_2
Aggr State Status Options
aggr0_SeClus_2 online raid_dp, aggr root, nosnap=on
64-bit
SeClus01_1_NL_SAS_1 failed raid_tec, aggr raidsize=14
partial
64-bit
***************************
set d; debug vreport show
Warning: These diagnostic commands are for use by NetApp personnel only.
Do you want to continue? {y|n}: y
aggregate Differences:
Name Reason Attributes
-------- ------- ---------------------------------------------------
SeClus01_1_NL_SAS_1(649dc2bd-381a-48f6-a830-5688dc2bec50)
Duplicate aggregates present in WAFL Only
Node Name: SeClus_1
Aggregate UUID: 649dc2bd-381a-48f6-a830-5688dc2bec50
Aggregate State: failed
Aggregate Raid Status: raid_tec, partial
Aggregate HA Policy: sfo
Is Aggregate Root: false
Is Composite Aggregate: false
Duplicate Aggregate Info:
Node Name: SeClus_2
Aggregate UUID: 649dc2bd-381a-48f6-a830-5688dc2bec50
*Aggregate Name: SeClus01_1_NL_SAS_1
****************************************
The hard drives are 10 TB and use RAID-TEC by default.
ADP - Advanced Drive Partitioning. It partitions the drives so you have a smaller partition for root and a larger one for data. 55 TB is a lot to waste on root aggrs. https://docs.netapp.com/ontap-9/topic/com.netapp.doc.dot-cm-concepts/GUID-B745CFA8-2C4C-47F1-A984-B95D3EBCAAB4.html
Right now it looks like your root aggrs are made up of 3x 8TB whole drives.
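To see why that matters, here's a back-of-the-envelope sketch (numbers assumed from this thread: 8.89 TB usable FSAS drives, 3-disk RAID-DP root aggregates, 2 nodes; not an ONTAP tool):

```python
# Rough arithmetic only; all values are assumptions taken from this thread.
USABLE_TB = 8.89       # usable size reported per FSAS drive
DISKS_PER_ROOT = 3     # 1 data + 2 parity (RAID-DP) per node's root aggr
NODES = 2

tied_up = USABLE_TB * DISKS_PER_ROOT * NODES
print(f"capacity tied up in whole-disk root aggrs: {tied_up:.1f} TB")  # roughly 53.3 TB
```

With ADP, only small root partitions would be carved off shared drives instead of dedicating whole disks.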
Can you post the output of "storage disk show" and "storage disk show -partition-ownership"
storage disk show
Usable Disk Container Container
Disk Size Shelf Bay Type Type Name Owner
---------------- ---------- ----- --- ------- ----------- --------- --------
1.11.0 8.89TB 11 0 FSAS aggregate aggr0_SeClus_2
SeClus_2
1.11.1 8.89TB 11 1 FSAS aggregate aggr0_SeClus_1
SeClus_1
1.11.2 8.89TB 11 2 FSAS aggregate aggr0_SeClus_2
SeClus_2
1.11.3 8.89TB 11 3 FSAS aggregate aggr0_SeClus_1
SeClus_1
1.11.4 8.89TB 11 4 FSAS spare Pool0 SeClus_1
1.11.5 8.89TB 11 5 FSAS spare Pool0 SeClus_1
1.11.6 8.89TB 11 6 FSAS spare Pool0 SeClus_1
1.11.7 8.89TB 11 7 FSAS aggregate SeClus01_1_NL_SAS_1
SeClus_1
1.11.8 8.89TB 11 8 FSAS spare Pool0 SeClus_1
1.11.9 8.89TB 11 9 FSAS aggregate SeClus01_1_NL_SAS_1
SeClus_1
1.11.10 8.89TB 11 10 FSAS spare Pool0 SeClus_1
1.11.11 8.89TB 11 11 FSAS aggregate SeClus01_1_NL_SAS_1
SeClus_1
1.11.12 8.89TB 11 12 FSAS spare Pool0 SeClus_1
1.11.13 8.89TB 11 13 FSAS aggregate SeClus01_1_NL_SAS_1
SeClus_1
1.11.14 8.89TB 11 14 FSAS spare Pool0 SeClus_1
1.11.15 8.89TB 11 15 FSAS aggregate SeClus01_1_NL_SAS_1
SeClus_1
1.11.16 8.89TB 11 16 FSAS spare Pool0 SeClus_1
1.11.17 8.89TB 11 17 FSAS aggregate SeClus01_1_NL_SAS_1
SeClus_1
1.11.18 8.89TB 11 18 FSAS spare Pool0 SeClus_1
1.11.19 8.89TB 11 19 FSAS aggregate SeClus01_1_NL_SAS_1
SeClus_1
1.11.20 8.89TB 11 20 FSAS spare Pool0 SeClus_1
1.11.21 8.89TB 11 21 FSAS aggregate SeClus01_1_NL_SAS_1
SeClus_1
1.11.22 8.89TB 11 22 FSAS spare Pool0 SeClus_1
1.11.23 8.89TB 11 23 FSAS aggregate SeClus01_1_NL_SAS_1
SeClus_1
1.11.24 8.89TB 11 24 FSAS spare Pool0 SeClus_1
1.11.25 8.89TB 11 25 FSAS aggregate SeClus01_1_NL_SAS_1
SeClus_1
1.11.26 8.89TB 11 26 FSAS spare Pool0 SeClus_1
1.11.27 8.89TB 11 27 FSAS aggregate SeClus01_1_NL_SAS_1
SeClus_1
1.11.28 8.89TB 11 28 FSAS spare Pool0 SeClus_1
1.11.29 8.89TB 11 29 FSAS aggregate SeClus01_1_NL_SAS_1
SeClus_1
1.11.30 8.89TB 11 30 FSAS spare Pool0 SeClus_1
1.11.31 8.89TB 11 31 FSAS aggregate SeClus01_1_NL_SAS_1
SeClus_1
1.11.32 8.89TB 11 32 FSAS spare Pool0 SeClus_1
1.11.33 8.89TB 11 33 FSAS aggregate SeClus01_1_NL_SAS_1
SeClus_1
1.11.34 8.89TB 11 34 FSAS spare Pool0 SeClus_1
1.11.35 8.89TB 11 35 FSAS spare Pool0 SeClus_1
1.11.36 8.89TB 11 36 FSAS spare Pool0 SeClus_1
1.11.37 8.89TB 11 37 FSAS spare Pool0 SeClus_1
1.11.38 8.89TB 11 38 FSAS spare Pool0 SeClus_1
1.11.39 8.89TB 11 39 FSAS spare Pool0 SeClus_1
1.11.40 8.89TB 11 40 FSAS spare Pool0 SeClus_1
1.11.41 8.89TB 11 41 FSAS spare Pool0 SeClus_1
1.11.42 8.89TB 11 42 FSAS spare Pool0 SeClus_1
1.11.43 8.89TB 11 43 FSAS spare Pool0 SeClus_1
1.11.44 8.89TB 11 44 FSAS spare Pool0 SeClus_1
1.11.45 8.89TB 11 45 FSAS spare Pool0 SeClus_1
1.11.46 8.89TB 11 46 FSAS spare Pool0 SeClus_1
1.11.47 8.89TB 11 47 FSAS spare Pool0 SeClus_1
1.11.48 8.89TB 11 48 FSAS spare Pool0 SeClus_1
1.11.49 8.89TB 11 49 FSAS spare Pool0 SeClus_1
1.11.50 8.89TB 11 50 FSAS spare Pool0 SeClus_1
1.11.51 8.89TB 11 51 FSAS spare Pool0 SeClus_1
1.11.52 8.89TB 11 52 FSAS spare Pool0 SeClus_1
1.11.53 8.89TB 11 53 FSAS spare Pool0 SeClus_1
1.11.54 8.89TB 11 54 FSAS spare Pool0 SeClus_1
1.11.55 8.89TB 11 55 FSAS spare Pool0 SeClus_1
1.11.56 8.89TB 11 56 FSAS spare Pool0 SeClus_1
1.11.57 8.89TB 11 57 FSAS spare Pool0 SeClus_1
1.11.58 8.89TB 11 58 FSAS spare Pool0 SeClus_1
1.11.59 8.89TB 11 59 FSAS spare Pool0 SeClus_1
1.22.0 8.89TB 22 0 FSAS aggregate aggr0_SeClus_2
SeClus_2
1.22.1 8.89TB 22 1 FSAS spare Pool0 SeClus_2
1.22.2 8.89TB 22 2 FSAS spare Pool0 SeClus_2
1.22.3 8.89TB 22 3 FSAS spare Pool0 SeClus_2
1.22.4 8.89TB 22 4 FSAS spare Pool0 SeClus_2
1.22.5 8.89TB 22 5 FSAS spare Pool0 SeClus_2
1.22.6 8.89TB 22 6 FSAS spare Pool0 SeClus_2
1.22.7 8.89TB 22 7 FSAS spare Pool0 SeClus_2
1.22.8 8.89TB 22 8 FSAS spare Pool0 SeClus_2
1.22.9 8.89TB 22 9 FSAS spare Pool0 SeClus_2
1.22.10 8.89TB 22 10 FSAS spare Pool0 SeClus_2
1.22.11 8.89TB 22 11 FSAS aggregate SeClus01_1_NL_SAS_1
SeClus_2
1.22.12 8.89TB 22 12 FSAS spare Pool0 SeClus_2
1.22.13 8.89TB 22 13 FSAS spare Pool0 SeClus_2
1.22.14 8.89TB 22 14 FSAS spare Pool0 SeClus_2
1.22.15 8.89TB 22 15 FSAS spare Pool0 SeClus_2
1.22.16 8.89TB 22 16 FSAS spare Pool0 SeClus_2
1.22.17 8.89TB 22 17 FSAS spare Pool0 SeClus_2
1.22.18 8.89TB 22 18 FSAS spare Pool0 SeClus_2
1.22.19 8.89TB 22 19 FSAS spare Pool0 SeClus_2
1.22.20 8.89TB 22 20 FSAS spare Pool0 SeClus_2
1.22.21 8.89TB 22 21 FSAS spare Pool0 SeClus_2
1.22.22 8.89TB 22 22 FSAS spare Pool0 SeClus_2
1.22.23 8.89TB 22 23 FSAS spare Pool0 SeClus_2
1.22.24 8.89TB 22 24 FSAS spare Pool0 SeClus_2
1.22.25 8.89TB 22 25 FSAS spare Pool0 SeClus_2
1.22.26 8.89TB 22 26 FSAS spare Pool0 SeClus_2
1.22.27 8.89TB 22 27 FSAS spare Pool0 SeClus_2
1.22.28 8.89TB 22 28 FSAS spare Pool0 SeClus_2
1.22.29 8.89TB 22 29 FSAS spare Pool0 SeClus_2
1.22.30 8.89TB 22 30 FSAS spare Pool0 SeClus_2
1.22.31 8.89TB 22 31 FSAS spare Pool0 SeClus_2
1.22.32 8.89TB 22 32 FSAS spare Pool0 SeClus_2
1.22.33 8.89TB 22 33 FSAS spare Pool0 SeClus_2
1.22.34 8.89TB 22 34 FSAS spare Pool0 SeClus_2
1.22.35 8.89TB 22 35 FSAS spare Pool0 SeClus_2
1.22.36 8.89TB 22 36 FSAS spare Pool0 SeClus_2
1.22.37 8.89TB 22 37 FSAS spare Pool0 SeClus_2
1.22.38 8.89TB 22 38 FSAS spare Pool0 SeClus_2
1.22.39 8.89TB 22 39 FSAS spare Pool0 SeClus_2
1.22.40 8.89TB 22 40 FSAS spare Pool0 SeClus_2
1.22.41 8.89TB 22 41 FSAS spare Pool0 SeClus_2
1.22.42 8.89TB 22 42 FSAS spare Pool0 SeClus_2
1.22.43 8.89TB 22 43 FSAS spare Pool0 SeClus_2
1.22.44 8.89TB 22 44 FSAS spare Pool0 SeClus_2
1.22.45 8.89TB 22 45 FSAS spare Pool0 SeClus_2
1.22.46 8.89TB 22 46 FSAS spare Pool0 SeClus_2
1.22.47 8.89TB 22 47 FSAS spare Pool0 SeClus_2
1.22.48 8.89TB 22 48 FSAS spare Pool0 SeClus_2
1.22.49 8.89TB 22 49 FSAS spare Pool0 SeClus_2
1.22.50 8.89TB 22 50 FSAS spare Pool0 SeClus_2
1.22.51 8.89TB 22 51 FSAS spare Pool0 SeClus_2
1.22.52 8.89TB 22 52 FSAS spare Pool0 SeClus_2
1.22.53 8.89TB 22 53 FSAS spare Pool0 SeClus_2
1.22.54 8.89TB 22 54 FSAS spare Pool0 SeClus_2
1.22.55 8.89TB 22 55 FSAS spare Pool0 SeClus_2
1.22.56 8.89TB 22 56 FSAS spare Pool0 SeClus_2
1.22.57 8.89TB 22 57 FSAS aggregate aggr0_SeClus_1
SeClus_1
1.22.58 8.89TB 22 58 FSAS spare Pool0 SeClus_2
1.22.59 8.89TB 22 59 FSAS spare Pool0 SeClus_1
120 entries were displayed.
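If it helps to sanity-check a long listing like this, here's a minimal sketch (not a NetApp tool) that tallies the "Container Type" column from pasted `storage disk show` rows; the sample rows are abbreviated copies from the output above:

```python
from collections import Counter

# Tally the 6th whitespace-separated field ("Container Type") of each row.
sample = """\
1.11.7  8.89TB 11 7  FSAS aggregate SeClus01_1_NL_SAS_1
1.11.8  8.89TB 11 8  FSAS spare     Pool0
1.22.11 8.89TB 22 11 FSAS aggregate SeClus01_1_NL_SAS_1
1.22.12 8.89TB 22 12 FSAS spare     Pool0
"""

counts = Counter(row.split()[5] for row in sample.splitlines() if row.strip())
print(counts["aggregate"], counts["spare"])  # -> 2 2
```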
***************************************************************
storage disk show -partition-ownership
Disk Partition Home Owner Home ID Owner ID
-------- --------- ----------------- ----------------- ----------- -----------
1.11.0 Container SeClus_2 SeClus_2 538134051 538134051
1.11.1 Container SeClus_1 SeClus_1 538133895 538133895
1.11.2 Container SeClus_2 SeClus_2 538134051 538134051
1.11.3 Container SeClus_1 SeClus_1 538133895 538133895
1.11.4 Container SeClus_1 SeClus_1 538133895 538133895
1.11.5 Container SeClus_1 SeClus_1 538133895 538133895
1.11.6 Container SeClus_1 SeClus_1 538133895 538133895
1.11.7 Container SeClus_1 SeClus_1 538133895 538133895
1.11.8 Container SeClus_1 SeClus_1 538133895 538133895
1.11.9 Container SeClus_1 SeClus_1 538133895 538133895
1.11.10 Container SeClus_1 SeClus_1 538133895 538133895
1.11.11 Container SeClus_1 SeClus_1 538133895 538133895
1.11.12 Container SeClus_1 SeClus_1 538133895 538133895
1.11.13 Container SeClus_1 SeClus_1 538133895 538133895
1.11.14 Container SeClus_1 SeClus_1 538133895 538133895
1.11.15 Container SeClus_1 SeClus_1 538133895 538133895
1.11.16 Container SeClus_1 SeClus_1 538133895 538133895
1.11.17 Container SeClus_1 SeClus_1 538133895 538133895
1.11.18 Container SeClus_1 SeClus_1 538133895 538133895
1.11.19 Container SeClus_1 SeClus_1 538133895 538133895
1.11.20 Container SeClus_1 SeClus_1 538133895 538133895
1.11.21 Container SeClus_1 SeClus_1 538133895 538133895
1.11.22 Container SeClus_1 SeClus_1 538133895 538133895
1.11.23 Container SeClus_1 SeClus_1 538133895 538133895
1.11.24 Container SeClus_1 SeClus_1 538133895 538133895
1.11.25 Container SeClus_1 SeClus_1 538133895 538133895
1.11.26 Container SeClus_1 SeClus_1 538133895 538133895
1.11.27 Container SeClus_1 SeClus_1 538133895 538133895
1.11.28 Container SeClus_1 SeClus_1 538133895 538133895
1.11.29 Container SeClus_1 SeClus_1 538133895 538133895
1.11.30 Container SeClus_1 SeClus_1 538133895 538133895
1.11.31 Container SeClus_1 SeClus_1 538133895 538133895
1.11.32 Container SeClus_1 SeClus_1 538133895 538133895
1.11.33 Container SeClus_1 SeClus_1 538133895 538133895
1.11.34 Container SeClus_1 SeClus_1 538133895 538133895
1.11.35 Container SeClus_1 SeClus_1 538133895 538133895
1.11.36 Container SeClus_1 SeClus_1 538133895 538133895
1.11.37 Container SeClus_1 SeClus_1 538133895 538133895
1.11.38 Container SeClus_1 SeClus_1 538133895 538133895
1.11.39 Container SeClus_1 SeClus_1 538133895 538133895
1.11.40 Container SeClus_1 SeClus_1 538133895 538133895
1.11.41 Container SeClus_1 SeClus_1 538133895 538133895
1.11.42 Container SeClus_1 SeClus_1 538133895 538133895
1.11.43 Container SeClus_1 SeClus_1 538133895 538133895
1.11.44 Container SeClus_1 SeClus_1 538133895 538133895
1.11.45 Container SeClus_1 SeClus_1 538133895 538133895
1.11.46 Container SeClus_1 SeClus_1 538133895 538133895
1.11.47 Container SeClus_1 SeClus_1 538133895 538133895
1.11.48 Container SeClus_1 SeClus_1 538133895 538133895
1.11.49 Container SeClus_1 SeClus_1 538133895 538133895
1.11.50 Container SeClus_1 SeClus_1 538133895 538133895
1.11.51 Container SeClus_1 SeClus_1 538133895 538133895
1.11.52 Container SeClus_1 SeClus_1 538133895 538133895
1.11.53 Container SeClus_1 SeClus_1 538133895 538133895
1.11.54 Container SeClus_1 SeClus_1 538133895 538133895
1.11.55 Container SeClus_1 SeClus_1 538133895 538133895
1.11.56 Container SeClus_1 SeClus_1 538133895 538133895
1.11.57 Container SeClus_1 SeClus_1 538133895 538133895
1.11.58 Container SeClus_1 SeClus_1 538133895 538133895
1.11.59 Container SeClus_1 SeClus_1 538133895 538133895
1.22.0 Container SeClus_2 SeClus_2 538134051 538134051
1.22.1 Container SeClus_2 SeClus_2 538134051 538134051
1.22.2 Container SeClus_2 SeClus_2 538134051 538134051
1.22.3 Container SeClus_2 SeClus_2 538134051 538134051
1.22.4 Container SeClus_2 SeClus_2 538134051 538134051
1.22.5 Container SeClus_2 SeClus_2 538134051 538134051
1.22.6 Container SeClus_2 SeClus_2 538134051 538134051
1.22.7 Container SeClus_2 SeClus_2 538134051 538134051
1.22.8 Container SeClus_2 SeClus_2 538134051 538134051
1.22.9 Container SeClus_2 SeClus_2 538134051 538134051
1.22.10 Container SeClus_2 SeClus_2 538134051 538134051
1.22.11 Container SeClus_2 SeClus_2 538134051 538134051
1.22.12 Container SeClus_2 SeClus_2 538134051 538134051
1.22.13 Container SeClus_2 SeClus_2 538134051 538134051
1.22.14 Container SeClus_2 SeClus_2 538134051 538134051
1.22.15 Container SeClus_2 SeClus_2 538134051 538134051
1.22.16 Container SeClus_2 SeClus_2 538134051 538134051
1.22.17 Container SeClus_2 SeClus_2 538134051 538134051
1.22.18 Container SeClus_2 SeClus_2 538134051 538134051
1.22.19 Container SeClus_2 SeClus_2 538134051 538134051
1.22.20 Container SeClus_2 SeClus_2 538134051 538134051
1.22.21 Container SeClus_2 SeClus_2 538134051 538134051
1.22.22 Container SeClus_2 SeClus_2 538134051 538134051
1.22.23 Container SeClus_2 SeClus_2 538134051 538134051
1.22.24 Container SeClus_2 SeClus_2 538134051 538134051
1.22.25 Container SeClus_2 SeClus_2 538134051 538134051
1.22.26 Container SeClus_2 SeClus_2 538134051 538134051
1.22.27 Container SeClus_2 SeClus_2 538134051 538134051
1.22.28 Container SeClus_2 SeClus_2 538134051 538134051
1.22.29 Container SeClus_2 SeClus_2 538134051 538134051
1.22.30 Container SeClus_2 SeClus_2 538134051 538134051
1.22.31 Container SeClus_2 SeClus_2 538134051 538134051
1.22.32 Container SeClus_2 SeClus_2 538134051 538134051
1.22.33 Container SeClus_2 SeClus_2 538134051 538134051
1.22.34 Container SeClus_2 SeClus_2 538134051 538134051
1.22.35 Container SeClus_2 SeClus_2 538134051 538134051
1.22.36 Container SeClus_2 SeClus_2 538134051 538134051
1.22.37 Container SeClus_2 SeClus_2 538134051 538134051
1.22.38 Container SeClus_2 SeClus_2 538134051 538134051
1.22.39 Container SeClus_2 SeClus_2 538134051 538134051
1.22.40 Container SeClus_2 SeClus_2 538134051 538134051
1.22.41 Container SeClus_2 SeClus_2 538134051 538134051
1.22.42 Container SeClus_2 SeClus_2 538134051 538134051
1.22.43 Container SeClus_2 SeClus_2 538134051 538134051
1.22.44 Container SeClus_2 SeClus_2 538134051 538134051
1.22.45 Container SeClus_2 SeClus_2 538134051 538134051
1.22.46 Container SeClus_2 SeClus_2 538134051 538134051
1.22.47 Container SeClus_2 SeClus_2 538134051 538134051
1.22.48 Container SeClus_2 SeClus_2 538134051 538134051
1.22.49 Container SeClus_2 SeClus_2 538134051 538134051
1.22.50 Container SeClus_2 SeClus_2 538134051 538134051
1.22.51 Container SeClus_2 SeClus_2 538134051 538134051
1.22.52 Container SeClus_2 SeClus_2 538134051 538134051
1.22.53 Container SeClus_2 SeClus_2 538134051 538134051
1.22.54 Container SeClus_2 SeClus_2 538134051 538134051
1.22.55 Container SeClus_2 SeClus_2 538134051 538134051
1.22.56 Container SeClus_2 SeClus_2 538134051 538134051
1.22.57 Container SeClus_1 SeClus_1 538133895 538133895
1.22.58 Container SeClus_2 SeClus_2 538134051 538134051
1.22.59 Container SeClus_1 SeClus_1 538133895 538133895
120 entries were displayed.
I did use ADP.
SeClus01_1_NL_SAS_1
0B 0B 0% failed 0 SeClus_1 raid_tec,
partial
aggr0_SeClus_1
7.60TB 377.5GB 95% online 1 SeClus_1 raid_dp,
normal
aggr0_SeClus_2
7.60TB 377.5GB 95% online 1 SeClus_2 raid_dp,
normal
3 entries were displayed.
aggr0_SeClus_1 and aggr0_SeClus_2 are the root aggregates; each consists of 3 disks, 2 of them for parity.
It's not using ADP. The output from -partition-ownership tells me that.
here's from a system with ADP:
WOPR::> storage disk show -partition-ownership
Disk Partition Home Owner Home ID Owner ID
-------- --------- ----------------- ----------------- ----------- -----------
Info: This cluster has partitioned disks. To get a complete list of spare disk
capacity use "storage aggregate show-spare-disks".
1.0.0 Container WOPR-02 WOPR-02 1111111111 1111111111
Root WOPR-02 WOPR-02 1111111111 1111111111
Data WOPR-02 WOPR-02 1111111111 1111111111
1.0.1 Container WOPR-01 WOPR-01 2222222222 2222222222
Root WOPR-01 WOPR-01 2222222222 2222222222
Data WOPR-01 WOPR-01 2222222222 2222222222
Disks would also show up as "shared":
WOPR::> disk show
Usable Disk Container Container
Disk Size Shelf Bay Type Type Name Owner
---------------- ---------- ----- --- ------- ----------- --------- --------
Info: This cluster has partitioned disks. To get a complete list of spare disk
capacity use "storage aggregate show-spare-disks".
1.0.0 836.9GB 0 0 SAS shared N2_aggr1, root_aggr0_N2 WOPR-02
1.0.1 836.9GB 0 1 SAS shared N1_aggr1, root_aggr0_N1 WOPR-01
1.0.2 836.9GB 0 2 SAS shared N2_aggr1, root_aggr0_N2 WOPR-02
1.0.3 836.9GB 0 3 SAS shared N1_aggr1, root_aggr0_N1 WOPR-01
1.0.4 836.9GB 0 4 SAS shared N2_aggr1, root_aggr0_N2 WOPR-02
1.0.5 836.9GB 0 5 SAS shared N1_aggr1, root_aggr0_N1 WOPR-01
1.0.6 836.9GB 0 6 SAS shared N2_aggr1, root_aggr0_N2 WOPR-02
1.0.7 836.9GB 0 7 SAS shared N1_aggr1, root_aggr0_N1 WOPR-01
1.0.8 836.9GB 0 8 SAS shared N2_aggr1, root_aggr0_N2 WOPR-02
1.0.9 836.9GB 0 9 SAS shared N1_aggr1, root_aggr0_N1 WOPR-01
1.0.10 836.9GB 0 10 SAS shared N2_aggr1 WOPR-02
1.0.11 836.9GB 0 11 SAS shared N1_aggr1 WOPR-01
1.0.12 836.9GB 0 12 SAS shared N2_aggr1 WOPR-02
1.0.13 836.9GB 0 13 SAS shared - WOPR-01
1.0.14 836.9GB 0 14 SAS shared - WOPR-02
1.0.15 836.9GB 0 15 SAS shared N1_aggr1 WOPR-01
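A quick sketch of that distinction (not a NetApp tool, sample rows abbreviated): on an ADP system, `storage disk show -partition-ownership` lists Root and Data partition rows under each disk, while container-only rows like in your output mean the disks were never partitioned.

```python
# Check whether pasted -partition-ownership output shows Root/Data
# partition rows (ADP) or only Container rows (whole disks).
def looks_partitioned(output: str) -> bool:
    first_fields = {row.split()[0] for row in output.splitlines() if row.strip()}
    return {"Root", "Data"} <= first_fields

no_adp = "1.11.4 Container SeClus_1 SeClus_1 538133895 538133895"
with_adp = """\
1.0.0 Container WOPR-02 WOPR-02 1111111111 1111111111
      Root      WOPR-02 WOPR-02 1111111111 1111111111
      Data      WOPR-02 WOPR-02 1111111111 1111111111
"""

print(looks_partitioned(no_adp), looks_partitioned(with_adp))  # -> False True
```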
Are you able to do a re-init? You would get a lot of space back, and it would clear out that screwy aggr too.
Yes, I can, but I tried that before and it keeps coming back. Is there a specific way to do it? Should I go with options 9a-9c?
Use 9a/9b. 9c is whole-disk, which is how you have it currently.
do this ->
opt 9a on controller 1
...wait for it to finish
opt 9a on controller 2
...wait for it to finish
opt 9b on controller 1
...wait for it to finish
opt 9b on controller 2
configure the cluster like you normally would.
Thanks will give it a try and let you know.
@SpindleNinja So I tried it and all is well. One thing I noticed that caused the issue: I had tried to do ADP before using option 9b, which created RAID-TEC because the disks are 6 TB and above. When that failed, I did not notice. However, while following the steps you gave, I noticed it, so I performed 9a and 9c and then went through the normal setup. All works great thus far. Thanks for your help.
Failed on the second controller?
9c still gives you whole drives, which eats a lot of space for the roots.
Nothing failed.
I know that, but for the root it can't use option 9b, because a single disk is over 6 TB, and when the disks are that large 9b does not work. It gives an error that it can't create a root vol because only 5 of the 7 required disks are available. When I select 9c, it creates a three-disk RAID-DP.
aggr0_Seclus_1_0
7.60TB 377.5GB 95% online 1 Seclus_1 raid_dp,
normal
aggr0_Seclus_2_0
7.60TB 377.5GB 95% online 1 Seclus_2 raid_dp,
normal
Did the message look like this?
Unable to create root aggregate: 5 disks specified, but at least 7 disks are required for raid_tec
This has the same symptoms as Bug 948840 - System initialization fails to create root aggregate in certain configurations
https://mysupport.netapp.com/site/bugs-online/product/ONTAP/BURT/948840
But if you're really on ONTAP 9.7P6, it should have been fixed.
In any case, there is a workaround, but you will need to contact Technical Support by opening a case and they will take you through it.
I do recommend you do this... using 7TB x 2 for the root aggregates is a tremendous waste of space!
What @andris said on both accounts.
@andris @SpindleNinja I did have NetApp Release 9.7P6 installed on the nodes, so why I am getting that error I have no idea. Maybe if I add AFF to the node hard drive slot I have created the root from there. Also, when I create aggregates I have to use RAID-TEC; no chance of getting RAID-DP, since the disks are 10 TB in size.
If you'd like to address the root aggregate issues, please open a case and reference the bug.
We allow RAID-DP for partitioned root aggregates with 8TB+ disks, but RAID-TEC is mandatory for data aggregates with 8TB+ disks.
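A small sketch encoding that rule (thresholds assumed from this thread, not an ONTAP API):

```python
# With large (8 TB+) disks, RAID-TEC is required for data aggregates,
# while a partitioned root aggregate may still use RAID-DP.
LARGE_DISK_TB = 8  # assumed threshold

def minimum_raid(is_partitioned_root: bool, disk_tb: float) -> str:
    if disk_tb >= LARGE_DISK_TB and not is_partitioned_root:
        return "raid_tec"
    return "raid_dp"

print(minimum_raid(False, 10))  # data aggr on 10 TB disks -> raid_tec
print(minimum_raid(True, 10))   # partitioned root aggr -> raid_dp
```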
@andris That is what I was saying: my disk size is 10 TB and uses RAID-TEC, which is why I could not use 9b at the boot menu. So I used 9c instead, and it took three disks: 1 for the root vol and 2 for DP parity.
That's why I asked if it failed on the second controller, as that's usually the case.
I'd reach out to support for the fix, since you're on a code level where that bug should be fixed.
It'll be worth it to gain the extra space back.
Thanks