<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Re: Disk in ONTAP Hardware</title>
    <link>https://community.netapp.com/t5/ONTAP-Hardware/Disk/m-p/158885#M10312</link>
    <description>&lt;P&gt;&lt;a href="https://community.netapp.com/t5/user/viewprofilepage/user-id/12128"&gt;@andris&lt;/a&gt;&amp;nbsp;That is what I was saying: my disks are 10TB and use RAID-TEC. That is why I could not use 9b at the boot menu, so I used 9c instead, and it took three disks: one for the root vol and two for parity (DP).&lt;/P&gt;</description>
    <pubDate>Fri, 21 Aug 2020 17:52:43 GMT</pubDate>
    <dc:creator>Riaad</dc:creator>
    <dc:date>2020-08-21T17:52:43Z</dc:date>
    <item>
      <title>Disk</title>
      <link>https://community.netapp.com/t5/ONTAP-Hardware/Disk/m-p/158806#M10291</link>
      <description>&lt;P&gt;Hi,&lt;/P&gt;
&lt;P&gt;I just finished installing ONTAP 9.7P6 on a FAS2740. What I noticed is that there is an aggr that I can't seem to delete or modify. It does not show up when I type aggr show (only the root aggrs appear), but I can see it when I type storage disk show -container-name *aggr name*.&lt;/P&gt;
&lt;P&gt;It's bugging me so much because it's using 13TB of disk space as RAID-TEC.&lt;/P&gt;
&lt;P&gt;Please help.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;See pictures.&lt;/P&gt;</description>
      <pubDate>Wed, 04 Jun 2025 10:56:30 GMT</pubDate>
      <guid>https://community.netapp.com/t5/ONTAP-Hardware/Disk/m-p/158806#M10291</guid>
      <dc:creator>Riaad</dc:creator>
      <dc:date>2025-06-04T10:56:30Z</dc:date>
    </item>
    <item>
      <title>Re: Disk</title>
      <link>https://community.netapp.com/t5/ONTAP-Hardware/Disk/m-p/158817#M10292</link>
      <description>&lt;P&gt;Did you intentionally deploy this without ADP? &amp;nbsp;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Also, &amp;nbsp; were any of the disks moved from another controller? &amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;What's the output of the following:&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;node run -node * aggr status&amp;nbsp;&lt;/P&gt;
&lt;P&gt;set d; debug vreport show&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 20 Aug 2020 14:54:23 GMT</pubDate>
      <guid>https://community.netapp.com/t5/ONTAP-Hardware/Disk/m-p/158817#M10292</guid>
      <dc:creator>SpindleNinja</dc:creator>
      <dc:date>2020-08-20T14:54:23Z</dc:date>
    </item>
    <item>
      <title>Re: Disk</title>
      <link>https://community.netapp.com/t5/ONTAP-Hardware/Disk/m-p/158820#M10293</link>
      <description>&lt;P&gt;Hi there,&lt;/P&gt;
&lt;P&gt;I am sorry, I am new to NetApp and I am not sure about ADP. Here is the printout for node run -node * aggr status:&lt;/P&gt;
&lt;LI-CODE lang="markup"&gt;node run -node * aggr status
2 entries were acted on.

Node: SeClus_1
Aggr                 State    Status            Options
aggr0_SeClus_1       online   raid_dp, aggr     root, nosnap=on
                              64-bit
SeClus01_1_NL_SAS_1  failed   raid_tec, aggr    raidsize=14
                              partial
                              64-bit

Node: SeClus_2
Aggr                 State    Status            Options
aggr0_SeClus_2       online   raid_dp, aggr     root, nosnap=on
                              64-bit
SeClus01_1_NL_SAS_1  failed   raid_tec, aggr    raidsize=14
                              partial
                              64-bit&lt;/LI-CODE&gt;
&lt;P&gt;***************************&lt;/P&gt;
&lt;LI-CODE lang="markup"&gt;set d; debug vreport show

Warning: These diagnostic commands are for use by NetApp personnel only.
Do you want to continue? {y|n}: y

aggregate Differences:

Name     Reason  Attributes
-------- ------- ---------------------------------------------------
SeClus01_1_NL_SAS_1(649dc2bd-381a-48f6-a830-5688dc2bec50)
Duplicate aggregates present in WAFL Only
Node Name: SeClus_1
Aggregate UUID: 649dc2bd-381a-48f6-a830-5688dc2bec50
Aggregate State: failed
Aggregate Raid Status: raid_tec, partial
Aggregate HA Policy: sfo
Is Aggregate Root: false
Is Composite Aggregate: false

Duplicate Aggregate Info:
Node Name: SeClus_2
Aggregate UUID: 649dc2bd-381a-48f6-a830-5688dc2bec50
*Aggregate Name: SeClus01_1_NL_SAS_1&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;****************************************&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 20 Aug 2020 15:18:18 GMT</pubDate>
      <guid>https://community.netapp.com/t5/ONTAP-Hardware/Disk/m-p/158820#M10293</guid>
      <dc:creator>Riaad</dc:creator>
      <dc:date>2020-08-20T15:18:18Z</dc:date>
    </item>
    <item>
      <title>Re: Disk</title>
      <link>https://community.netapp.com/t5/ONTAP-Hardware/Disk/m-p/158821#M10294</link>
      <description>&lt;P&gt;The hard drives are 10TB, and RAID-TEC is used by default.&lt;/P&gt;</description>
      <pubDate>Thu, 20 Aug 2020 15:19:46 GMT</pubDate>
      <guid>https://community.netapp.com/t5/ONTAP-Hardware/Disk/m-p/158821#M10294</guid>
      <dc:creator>Riaad</dc:creator>
      <dc:date>2020-08-20T15:19:46Z</dc:date>
    </item>
    <item>
      <title>Re: Disk</title>
      <link>https://community.netapp.com/t5/ONTAP-Hardware/Disk/m-p/158822#M10295</link>
      <description>&lt;P&gt;ADP - Advanced Drive Partitioning. It partitions the drives so you have a smaller partition for root and a larger partition for data. 55TB is a lot to waste on root aggrs. &amp;nbsp;&lt;A href="https://docs.netapp.com/ontap-9/topic/com.netapp.doc.dot-cm-concepts/GUID-B745CFA8-2C4C-47F1-A984-B95D3EBCAAB4.html" target="_blank"&gt;https://docs.netapp.com/ontap-9/topic/com.netapp.doc.dot-cm-concepts/GUID-B745CFA8-2C4C-47F1-A984-B95D3EBCAAB4.html&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Right now it looks like your root aggrs are made up of 3x 8TB drives.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Can you post the output of "storage disk show" and "storage disk show -partition-ownership"?&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 20 Aug 2020 15:35:20 GMT</pubDate>
      <guid>https://community.netapp.com/t5/ONTAP-Hardware/Disk/m-p/158822#M10295</guid>
      <dc:creator>SpindleNinja</dc:creator>
      <dc:date>2020-08-20T15:35:20Z</dc:date>
    </item>
    <item>
      <title>Re: Disk</title>
      <link>https://community.netapp.com/t5/ONTAP-Hardware/Disk/m-p/158825#M10296</link>
      <description>&lt;LI-CODE lang="markup"&gt;storage disk show
                     Usable            Disk    Container   Container
Disk                   Size Shelf Bay  Type    Type        Name                 Owner
----------------  --------- ----- ---  ------- ----------- -------------------- --------
1.11.0               8.89TB    11   0  FSAS    aggregate   aggr0_SeClus_2       SeClus_2
1.11.1               8.89TB    11   1  FSAS    aggregate   aggr0_SeClus_1       SeClus_1
1.11.2               8.89TB    11   2  FSAS    aggregate   aggr0_SeClus_2       SeClus_2
1.11.3               8.89TB    11   3  FSAS    aggregate   aggr0_SeClus_1       SeClus_1
1.11.4               8.89TB    11   4  FSAS    spare       Pool0                SeClus_1
1.11.5               8.89TB    11   5  FSAS    spare       Pool0                SeClus_1
1.11.6               8.89TB    11   6  FSAS    spare       Pool0                SeClus_1
1.11.7               8.89TB    11   7  FSAS    aggregate   SeClus01_1_NL_SAS_1  SeClus_1
1.11.8               8.89TB    11   8  FSAS    spare       Pool0                SeClus_1
1.11.9               8.89TB    11   9  FSAS    aggregate   SeClus01_1_NL_SAS_1  SeClus_1
1.11.10              8.89TB    11  10  FSAS    spare       Pool0                SeClus_1
1.11.11              8.89TB    11  11  FSAS    aggregate   SeClus01_1_NL_SAS_1  SeClus_1
1.11.12              8.89TB    11  12  FSAS    spare       Pool0                SeClus_1
1.11.13              8.89TB    11  13  FSAS    aggregate   SeClus01_1_NL_SAS_1  SeClus_1
1.11.14              8.89TB    11  14  FSAS    spare       Pool0                SeClus_1
1.11.15              8.89TB    11  15  FSAS    aggregate   SeClus01_1_NL_SAS_1  SeClus_1
1.11.16              8.89TB    11  16  FSAS    spare       Pool0                SeClus_1
1.11.17              8.89TB    11  17  FSAS    aggregate   SeClus01_1_NL_SAS_1  SeClus_1
1.11.18              8.89TB    11  18  FSAS    spare       Pool0                SeClus_1
1.11.19              8.89TB    11  19  FSAS    aggregate   SeClus01_1_NL_SAS_1  SeClus_1
1.11.20              8.89TB    11  20  FSAS    spare       Pool0                SeClus_1
1.11.21              8.89TB    11  21  FSAS    aggregate   SeClus01_1_NL_SAS_1  SeClus_1
1.11.22              8.89TB    11  22  FSAS    spare       Pool0                SeClus_1
1.11.23              8.89TB    11  23  FSAS    aggregate   SeClus01_1_NL_SAS_1  SeClus_1
1.11.24              8.89TB    11  24  FSAS    spare       Pool0                SeClus_1
1.11.25              8.89TB    11  25  FSAS    aggregate   SeClus01_1_NL_SAS_1  SeClus_1
1.11.26              8.89TB    11  26  FSAS    spare       Pool0                SeClus_1
1.11.27              8.89TB    11  27  FSAS    aggregate   SeClus01_1_NL_SAS_1  SeClus_1
1.11.28              8.89TB    11  28  FSAS    spare       Pool0                SeClus_1
1.11.29              8.89TB    11  29  FSAS    aggregate   SeClus01_1_NL_SAS_1  SeClus_1
1.11.30              8.89TB    11  30  FSAS    spare       Pool0                SeClus_1
1.11.31              8.89TB    11  31  FSAS    aggregate   SeClus01_1_NL_SAS_1  SeClus_1
1.11.32              8.89TB    11  32  FSAS    spare       Pool0                SeClus_1
1.11.33              8.89TB    11  33  FSAS    aggregate   SeClus01_1_NL_SAS_1  SeClus_1
1.11.34              8.89TB    11  34  FSAS    spare       Pool0                SeClus_1
1.11.35              8.89TB    11  35  FSAS    spare       Pool0                SeClus_1
1.11.36              8.89TB    11  36  FSAS    spare       Pool0                SeClus_1
1.11.37              8.89TB    11  37  FSAS    spare       Pool0                SeClus_1
1.11.38              8.89TB    11  38  FSAS    spare       Pool0                SeClus_1
1.11.39              8.89TB    11  39  FSAS    spare       Pool0                SeClus_1
1.11.40              8.89TB    11  40  FSAS    spare       Pool0                SeClus_1
1.11.41              8.89TB    11  41  FSAS    spare       Pool0                SeClus_1
1.11.42              8.89TB    11  42  FSAS    spare       Pool0                SeClus_1
1.11.43              8.89TB    11  43  FSAS    spare       Pool0                SeClus_1
1.11.44              8.89TB    11  44  FSAS    spare       Pool0                SeClus_1
1.11.45              8.89TB    11  45  FSAS    spare       Pool0                SeClus_1
1.11.46              8.89TB    11  46  FSAS    spare       Pool0                SeClus_1
1.11.47              8.89TB    11  47  FSAS    spare       Pool0                SeClus_1
1.11.48              8.89TB    11  48  FSAS    spare       Pool0                SeClus_1
1.11.49              8.89TB    11  49  FSAS    spare       Pool0                SeClus_1
1.11.50              8.89TB    11  50  FSAS    spare       Pool0                SeClus_1
1.11.51              8.89TB    11  51  FSAS    spare       Pool0                SeClus_1
1.11.52              8.89TB    11  52  FSAS    spare       Pool0                SeClus_1
1.11.53              8.89TB    11  53  FSAS    spare       Pool0                SeClus_1
1.11.54              8.89TB    11  54  FSAS    spare       Pool0                SeClus_1
1.11.55              8.89TB    11  55  FSAS    spare       Pool0                SeClus_1
1.11.56              8.89TB    11  56  FSAS    spare       Pool0                SeClus_1
1.11.57              8.89TB    11  57  FSAS    spare       Pool0                SeClus_1
1.11.58              8.89TB    11  58  FSAS    spare       Pool0                SeClus_1
1.11.59              8.89TB    11  59  FSAS    spare       Pool0                SeClus_1
1.22.0               8.89TB    22   0  FSAS    aggregate   aggr0_SeClus_2       SeClus_2
1.22.1               8.89TB    22   1  FSAS    spare       Pool0                SeClus_2
1.22.2               8.89TB    22   2  FSAS    spare       Pool0                SeClus_2
1.22.3               8.89TB    22   3  FSAS    spare       Pool0                SeClus_2
1.22.4               8.89TB    22   4  FSAS    spare       Pool0                SeClus_2
1.22.5               8.89TB    22   5  FSAS    spare       Pool0                SeClus_2
1.22.6               8.89TB    22   6  FSAS    spare       Pool0                SeClus_2
1.22.7               8.89TB    22   7  FSAS    spare       Pool0                SeClus_2
1.22.8               8.89TB    22   8  FSAS    spare       Pool0                SeClus_2
1.22.9               8.89TB    22   9  FSAS    spare       Pool0                SeClus_2
1.22.10              8.89TB    22  10  FSAS    spare       Pool0                SeClus_2
1.22.11              8.89TB    22  11  FSAS    aggregate   SeClus01_1_NL_SAS_1  SeClus_2
1.22.12              8.89TB    22  12  FSAS    spare       Pool0                SeClus_2
1.22.13              8.89TB    22  13  FSAS    spare       Pool0                SeClus_2
1.22.14              8.89TB    22  14  FSAS    spare       Pool0                SeClus_2
1.22.15              8.89TB    22  15  FSAS    spare       Pool0                SeClus_2
1.22.16              8.89TB    22  16  FSAS    spare       Pool0                SeClus_2
1.22.17              8.89TB    22  17  FSAS    spare       Pool0                SeClus_2
1.22.18              8.89TB    22  18  FSAS    spare       Pool0                SeClus_2
1.22.19              8.89TB    22  19  FSAS    spare       Pool0                SeClus_2
1.22.20              8.89TB    22  20  FSAS    spare       Pool0                SeClus_2
1.22.21              8.89TB    22  21  FSAS    spare       Pool0                SeClus_2
1.22.22              8.89TB    22  22  FSAS    spare       Pool0                SeClus_2
1.22.23              8.89TB    22  23  FSAS    spare       Pool0                SeClus_2
1.22.24              8.89TB    22  24  FSAS    spare       Pool0                SeClus_2
1.22.25              8.89TB    22  25  FSAS    spare       Pool0                SeClus_2
1.22.26              8.89TB    22  26  FSAS    spare       Pool0                SeClus_2
1.22.27              8.89TB    22  27  FSAS    spare       Pool0                SeClus_2
1.22.28              8.89TB    22  28  FSAS    spare       Pool0                SeClus_2
1.22.29              8.89TB    22  29  FSAS    spare       Pool0                SeClus_2
1.22.30              8.89TB    22  30  FSAS    spare       Pool0                SeClus_2
1.22.31              8.89TB    22  31  FSAS    spare       Pool0                SeClus_2
1.22.32              8.89TB    22  32  FSAS    spare       Pool0                SeClus_2
1.22.33              8.89TB    22  33  FSAS    spare       Pool0                SeClus_2
1.22.34              8.89TB    22  34  FSAS    spare       Pool0                SeClus_2
1.22.35              8.89TB    22  35  FSAS    spare       Pool0                SeClus_2
1.22.36              8.89TB    22  36  FSAS    spare       Pool0                SeClus_2
1.22.37              8.89TB    22  37  FSAS    spare       Pool0                SeClus_2
1.22.38              8.89TB    22  38  FSAS    spare       Pool0                SeClus_2
1.22.39              8.89TB    22  39  FSAS    spare       Pool0                SeClus_2
1.22.40              8.89TB    22  40  FSAS    spare       Pool0                SeClus_2
1.22.41              8.89TB    22  41  FSAS    spare       Pool0                SeClus_2
1.22.42              8.89TB    22  42  FSAS    spare       Pool0                SeClus_2
1.22.43              8.89TB    22  43  FSAS    spare       Pool0                SeClus_2
1.22.44              8.89TB    22  44  FSAS    spare       Pool0                SeClus_2
1.22.45              8.89TB    22  45  FSAS    spare       Pool0                SeClus_2
1.22.46              8.89TB    22  46  FSAS    spare       Pool0                SeClus_2
1.22.47              8.89TB    22  47  FSAS    spare       Pool0                SeClus_2
1.22.48              8.89TB    22  48  FSAS    spare       Pool0                SeClus_2
1.22.49              8.89TB    22  49  FSAS    spare       Pool0                SeClus_2
1.22.50              8.89TB    22  50  FSAS    spare       Pool0                SeClus_2
1.22.51              8.89TB    22  51  FSAS    spare       Pool0                SeClus_2
1.22.52              8.89TB    22  52  FSAS    spare       Pool0                SeClus_2
1.22.53              8.89TB    22  53  FSAS    spare       Pool0                SeClus_2
1.22.54              8.89TB    22  54  FSAS    spare       Pool0                SeClus_2
1.22.55              8.89TB    22  55  FSAS    spare       Pool0                SeClus_2
1.22.56              8.89TB    22  56  FSAS    spare       Pool0                SeClus_2
1.22.57              8.89TB    22  57  FSAS    aggregate   aggr0_SeClus_1       SeClus_1
1.22.58              8.89TB    22  58  FSAS    spare       Pool0                SeClus_2
1.22.59              8.89TB    22  59  FSAS    spare       Pool0                SeClus_1
120 entries were displayed.&lt;/LI-CODE&gt;
&lt;P&gt;***************************************************************&lt;/P&gt;
&lt;P&gt;&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="markup"&gt;storage disk show -partition-ownership
Disk     Partition Home              Owner             Home ID     Owner ID
-------- --------- ----------------- ----------------- ----------- -----------
1.11.0   Container SeClus_2          SeClus_2          538134051   538134051
1.11.1   Container SeClus_1          SeClus_1          538133895   538133895
1.11.2   Container SeClus_2          SeClus_2          538134051   538134051
1.11.3   Container SeClus_1          SeClus_1          538133895   538133895
1.11.4   Container SeClus_1          SeClus_1          538133895   538133895
1.11.5   Container SeClus_1          SeClus_1          538133895   538133895
1.11.6   Container SeClus_1          SeClus_1          538133895   538133895
1.11.7   Container SeClus_1          SeClus_1          538133895   538133895
1.11.8   Container SeClus_1          SeClus_1          538133895   538133895
1.11.9   Container SeClus_1          SeClus_1          538133895   538133895
1.11.10  Container SeClus_1          SeClus_1          538133895   538133895
1.11.11  Container SeClus_1          SeClus_1          538133895   538133895
1.11.12  Container SeClus_1          SeClus_1          538133895   538133895
1.11.13  Container SeClus_1          SeClus_1          538133895   538133895
1.11.14  Container SeClus_1          SeClus_1          538133895   538133895
1.11.15  Container SeClus_1          SeClus_1          538133895   538133895
1.11.16  Container SeClus_1          SeClus_1          538133895   538133895
1.11.17  Container SeClus_1          SeClus_1          538133895   538133895
1.11.18  Container SeClus_1          SeClus_1          538133895   538133895
1.11.19  Container SeClus_1          SeClus_1          538133895   538133895
1.11.20  Container SeClus_1          SeClus_1          538133895   538133895
1.11.21  Container SeClus_1          SeClus_1          538133895   538133895
1.11.22  Container SeClus_1          SeClus_1          538133895   538133895
1.11.23  Container SeClus_1          SeClus_1          538133895   538133895
1.11.24  Container SeClus_1          SeClus_1          538133895   538133895
1.11.25  Container SeClus_1          SeClus_1          538133895   538133895
1.11.26  Container SeClus_1          SeClus_1          538133895   538133895
1.11.27  Container SeClus_1          SeClus_1          538133895   538133895
1.11.28  Container SeClus_1          SeClus_1          538133895   538133895
1.11.29  Container SeClus_1          SeClus_1          538133895   538133895
1.11.30  Container SeClus_1          SeClus_1          538133895   538133895
1.11.31  Container SeClus_1          SeClus_1          538133895   538133895
1.11.32  Container SeClus_1          SeClus_1          538133895   538133895
1.11.33  Container SeClus_1          SeClus_1          538133895   538133895
1.11.34  Container SeClus_1          SeClus_1          538133895   538133895
1.11.35  Container SeClus_1          SeClus_1          538133895   538133895
1.11.36  Container SeClus_1          SeClus_1          538133895   538133895
1.11.37  Container SeClus_1          SeClus_1          538133895   538133895
1.11.38  Container SeClus_1          SeClus_1          538133895   538133895
1.11.39  Container SeClus_1          SeClus_1          538133895   538133895
1.11.40  Container SeClus_1          SeClus_1          538133895   538133895
1.11.41  Container SeClus_1          SeClus_1          538133895   538133895
1.11.42  Container SeClus_1          SeClus_1          538133895   538133895
1.11.43  Container SeClus_1          SeClus_1          538133895   538133895
1.11.44  Container SeClus_1          SeClus_1          538133895   538133895
1.11.45  Container SeClus_1          SeClus_1          538133895   538133895
1.11.46  Container SeClus_1          SeClus_1          538133895   538133895
1.11.47  Container SeClus_1          SeClus_1          538133895   538133895
1.11.48  Container SeClus_1          SeClus_1          538133895   538133895
1.11.49  Container SeClus_1          SeClus_1          538133895   538133895
1.11.50  Container SeClus_1          SeClus_1          538133895   538133895
1.11.51  Container SeClus_1          SeClus_1          538133895   538133895
1.11.52  Container SeClus_1          SeClus_1          538133895   538133895
1.11.53  Container SeClus_1          SeClus_1          538133895   538133895
1.11.54  Container SeClus_1          SeClus_1          538133895   538133895
1.11.55  Container SeClus_1          SeClus_1          538133895   538133895
1.11.56  Container SeClus_1          SeClus_1          538133895   538133895
1.11.57  Container SeClus_1          SeClus_1          538133895   538133895
1.11.58  Container SeClus_1          SeClus_1          538133895   538133895
1.11.59  Container SeClus_1          SeClus_1          538133895   538133895
1.22.0   Container SeClus_2          SeClus_2          538134051   538134051
1.22.1   Container SeClus_2          SeClus_2          538134051   538134051
1.22.2   Container SeClus_2          SeClus_2          538134051   538134051
1.22.3   Container SeClus_2          SeClus_2          538134051   538134051
1.22.4   Container SeClus_2          SeClus_2          538134051   538134051
1.22.5   Container SeClus_2          SeClus_2          538134051   538134051
1.22.6   Container SeClus_2          SeClus_2          538134051   538134051
1.22.7   Container SeClus_2          SeClus_2          538134051   538134051
1.22.8   Container SeClus_2          SeClus_2          538134051   538134051
1.22.9   Container SeClus_2          SeClus_2          538134051   538134051
1.22.10  Container SeClus_2          SeClus_2          538134051   538134051
1.22.11  Container SeClus_2          SeClus_2          538134051   538134051
1.22.12  Container SeClus_2          SeClus_2          538134051   538134051
1.22.13  Container SeClus_2          SeClus_2          538134051   538134051
1.22.14  Container SeClus_2          SeClus_2          538134051   538134051
1.22.15  Container SeClus_2          SeClus_2          538134051   538134051
1.22.16  Container SeClus_2          SeClus_2          538134051   538134051
1.22.17  Container SeClus_2          SeClus_2          538134051   538134051
1.22.18  Container SeClus_2          SeClus_2          538134051   538134051
1.22.19  Container SeClus_2          SeClus_2          538134051   538134051
1.22.20  Container SeClus_2          SeClus_2          538134051   538134051
1.22.21  Container SeClus_2          SeClus_2          538134051   538134051
1.22.22  Container SeClus_2          SeClus_2          538134051   538134051
1.22.23  Container SeClus_2          SeClus_2          538134051   538134051
1.22.24  Container SeClus_2          SeClus_2          538134051   538134051
1.22.25  Container SeClus_2          SeClus_2          538134051   538134051
1.22.26  Container SeClus_2          SeClus_2          538134051   538134051
1.22.27  Container SeClus_2          SeClus_2          538134051   538134051
1.22.28  Container SeClus_2          SeClus_2          538134051   538134051
1.22.29  Container SeClus_2          SeClus_2          538134051   538134051
1.22.30  Container SeClus_2          SeClus_2          538134051   538134051
1.22.31  Container SeClus_2          SeClus_2          538134051   538134051
1.22.32  Container SeClus_2          SeClus_2          538134051   538134051
1.22.33  Container SeClus_2          SeClus_2          538134051   538134051
1.22.34  Container SeClus_2          SeClus_2          538134051   538134051
1.22.35  Container SeClus_2          SeClus_2          538134051   538134051
1.22.36  Container SeClus_2          SeClus_2          538134051   538134051
1.22.37  Container SeClus_2          SeClus_2          538134051   538134051
1.22.38  Container SeClus_2          SeClus_2          538134051   538134051
1.22.39  Container SeClus_2          SeClus_2          538134051   538134051
1.22.40  Container SeClus_2          SeClus_2          538134051   538134051
1.22.41  Container SeClus_2          SeClus_2          538134051   538134051
1.22.42  Container SeClus_2          SeClus_2          538134051   538134051
1.22.43  Container SeClus_2          SeClus_2          538134051   538134051
1.22.44  Container SeClus_2          SeClus_2          538134051   538134051
1.22.45  Container SeClus_2          SeClus_2          538134051   538134051
1.22.46  Container SeClus_2          SeClus_2          538134051   538134051
1.22.47  Container SeClus_2          SeClus_2          538134051   538134051
1.22.48  Container SeClus_2          SeClus_2          538134051   538134051
1.22.49  Container SeClus_2          SeClus_2          538134051   538134051
1.22.50  Container SeClus_2          SeClus_2          538134051   538134051
1.22.51  Container SeClus_2          SeClus_2          538134051   538134051
1.22.52  Container SeClus_2          SeClus_2          538134051   538134051
1.22.53  Container SeClus_2          SeClus_2          538134051   538134051
1.22.54  Container SeClus_2          SeClus_2          538134051   538134051
1.22.55  Container SeClus_2          SeClus_2          538134051   538134051
1.22.56  Container SeClus_2          SeClus_2          538134051   538134051
1.22.57  Container SeClus_1          SeClus_1          538133895   538133895
1.22.58  Container SeClus_2          SeClus_2          538134051   538134051
1.22.59  Container SeClus_1          SeClus_1          538133895   538133895
120 entries were displayed.&lt;/LI-CODE&gt;</description>
      <pubDate>Thu, 20 Aug 2020 15:58:22 GMT</pubDate>
      <guid>https://community.netapp.com/t5/ONTAP-Hardware/Disk/m-p/158825#M10296</guid>
      <dc:creator>Riaad</dc:creator>
      <dc:date>2020-08-20T15:58:22Z</dc:date>
    </item>
    <item>
      <title>Re: Disk</title>
      <link>https://community.netapp.com/t5/ONTAP-Hardware/Disk/m-p/158828#M10297</link>
      <description>&lt;P&gt;I did use ADP.&lt;/P&gt;
&lt;LI-CODE lang="markup"&gt;SeClus01_1_NL_SAS_1
     0B      0B   0% failed     0 SeClus_1  raid_tec,
                                            partial
aggr0_SeClus_1
 7.60TB 377.5GB  95% online     1 SeClus_1  raid_dp,
                                            normal
aggr0_SeClus_2
 7.60TB 377.5GB  95% online     1 SeClus_2  raid_dp,
                                            normal
3 entries were displayed.&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;aggr0_SeClus_1 and aggr0_SeClus_2 are the root aggregates; each consists of 3 disks, 2 of which are for parity.&lt;/P&gt;</description>
      <pubDate>Thu, 20 Aug 2020 16:02:30 GMT</pubDate>
      <guid>https://community.netapp.com/t5/ONTAP-Hardware/Disk/m-p/158828#M10297</guid>
      <dc:creator>Riaad</dc:creator>
      <dc:date>2020-08-20T16:02:30Z</dc:date>
    </item>
    <item>
      <title>Re: Disk</title>
      <link>https://community.netapp.com/t5/ONTAP-Hardware/Disk/m-p/158829#M10298</link>
      <description>&lt;P&gt;It's not; the output from -partition-ownership tells me that.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Here's the same output from a system with ADP:&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="markup"&gt;WOPR::&amp;gt; storage disk show -partition-ownership
Disk Partition Home Owner Home ID Owner ID
-------- --------- ----------------- ----------------- ----------- -----------
Info: This cluster has partitioned disks. To get a complete list of spare disk
capacity use "storage aggregate show-spare-disks".
1.0.0 Container WOPR-02 WOPR-02 1111111111 1111111111
Root WOPR-02 WOPR-02 1111111111 1111111111
Data WOPR-02 WOPR-02 1111111111 1111111111
1.0.1 Container WOPR-01 WOPR-01 2222222222 2222222222
Root WOPR-01 WOPR-01 2222222222 2222222222
Data WOPR-01 WOPR-01 2222222222 2222222222&lt;/LI-CODE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;Disks would also show up as "shared":&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;LI-CODE lang="markup"&gt;OPR::&amp;gt; disk show
                     Usable           Disk    Container   Container
Disk                   Size Shelf Bay Type    Type        Name      Owner
---------------- ---------- ----- --- ------- ----------- --------- --------

Info: This cluster has partitioned disks. To get a complete list of spare disk
      capacity use "storage aggregate show-spare-disks".
1.0.0               836.9GB     0   0 SAS     shared      N2_aggr1, root_aggr0_N2 WOPR-02
1.0.1               836.9GB     0   1 SAS     shared      N1_aggr1, root_aggr0_N1 WOPR-01
1.0.2               836.9GB     0   2 SAS     shared      N2_aggr1, root_aggr0_N2 WOPR-02
1.0.3               836.9GB     0   3 SAS     shared      N1_aggr1, root_aggr0_N1 WOPR-01
1.0.4               836.9GB     0   4 SAS     shared      N2_aggr1, root_aggr0_N2 WOPR-02
1.0.5               836.9GB     0   5 SAS     shared      N1_aggr1, root_aggr0_N1 WOPR-01
1.0.6               836.9GB     0   6 SAS     shared      N2_aggr1, root_aggr0_N2 WOPR-02
1.0.7               836.9GB     0   7 SAS     shared      N1_aggr1, root_aggr0_N1 WOPR-01
1.0.8               836.9GB     0   8 SAS     shared      N2_aggr1, root_aggr0_N2 WOPR-02
1.0.9               836.9GB     0   9 SAS     shared      N1_aggr1, root_aggr0_N1 WOPR-01
1.0.10              836.9GB     0  10 SAS     shared      N2_aggr1  WOPR-02
1.0.11              836.9GB     0  11 SAS     shared      N1_aggr1  WOPR-01
1.0.12              836.9GB     0  12 SAS     shared      N2_aggr1  WOPR-02
1.0.13              836.9GB     0  13 SAS     shared      -         WOPR-01
1.0.14              836.9GB     0  14 SAS     shared      -         WOPR-02
1.0.15              836.9GB     0  15 SAS     shared      N1_aggr1  WOPR-01&lt;/LI-CODE&gt;
&lt;P&gt;Are you able to do a re-init? You would get a lot of space back, and it would clear out that screwy aggr too.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 20 Aug 2020 16:09:02 GMT</pubDate>
      <guid>https://community.netapp.com/t5/ONTAP-Hardware/Disk/m-p/158829#M10298</guid>
      <dc:creator>SpindleNinja</dc:creator>
      <dc:date>2020-08-20T16:09:02Z</dc:date>
    </item>
    <item>
      <title>Re: Disk</title>
      <link>https://community.netapp.com/t5/ONTAP-Hardware/Disk/m-p/158830#M10299</link>
      <description>&lt;P&gt;Yes, I can, but I tried that before and it keeps coming back. Is there any specific way to do it? Should I go with option 9a-c?&lt;/P&gt;</description>
      <pubDate>Thu, 20 Aug 2020 16:12:48 GMT</pubDate>
      <guid>https://community.netapp.com/t5/ONTAP-Hardware/Disk/m-p/158830#M10299</guid>
      <dc:creator>Riaad</dc:creator>
      <dc:date>2020-08-20T16:12:48Z</dc:date>
    </item>
    <item>
      <title>Re: Disk</title>
      <link>https://community.netapp.com/t5/ONTAP-Hardware/Disk/m-p/158831#M10300</link>
      <description>&lt;P&gt;9a/9b. Option 9c is whole disks, which is how you have it currently.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;&lt;A href="https://docs.netapp.com/ontap-9/topic/com.netapp.doc.dot-cm-sag/GUID-B86616AE-D345-4B44-AA56-BBC7ABD44068.html" target="_blank"&gt;https://docs.netapp.com/ontap-9/topic/com.netapp.doc.dot-cm-sag/GUID-B86616AE-D345-4B44-AA56-BBC7ABD44068.html&lt;/A&gt;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;do this -&amp;gt; &amp;nbsp;&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;opt 9a on controller 1 &amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;...wait for it to finish&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;opt 9a on&amp;nbsp;controller 2&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;...wait for it to finish&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;opt 9b on controller 1&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;...wait for it to finish&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;opt 9b on&amp;nbsp;controller 2&lt;/SPAN&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;&lt;SPAN&gt;configure the cluster like you normally would.&amp;nbsp;&lt;/SPAN&gt;&lt;/P&gt;
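&lt;P&gt;For reference, this is roughly what the option-9 submenu looks like (wording recalled from memory and may vary slightly by ONTAP release, so confirm it on your console):&lt;/P&gt;
&lt;LI-CODE lang="markup"&gt;(9a) Unpartition all disks and remove their ownership information.
(9b) Clean configuration and initialize node with partitioned disks.
(9c) Clean configuration and initialize node with whole disks.
(9d) Reboot the node.
(9e) Return to main boot menu.&lt;/LI-CODE&gt;
&lt;P&gt;Per the steps above, 9a should finish on both nodes before you run 9b, since 9b builds the new root aggregate on the freshly partitioned disks.&lt;/P&gt;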
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Thu, 20 Aug 2020 16:20:24 GMT</pubDate>
      <guid>https://community.netapp.com/t5/ONTAP-Hardware/Disk/m-p/158831#M10300</guid>
      <dc:creator>SpindleNinja</dc:creator>
      <dc:date>2020-08-20T16:20:24Z</dc:date>
    </item>
    <item>
      <title>Re: Disk</title>
      <link>https://community.netapp.com/t5/ONTAP-Hardware/Disk/m-p/158832#M10301</link>
      <description>&lt;P&gt;Thanks, I will give it a try and let you know.&lt;/P&gt;</description>
      <pubDate>Thu, 20 Aug 2020 16:42:07 GMT</pubDate>
      <guid>https://community.netapp.com/t5/ONTAP-Hardware/Disk/m-p/158832#M10301</guid>
      <dc:creator>Riaad</dc:creator>
      <dc:date>2020-08-20T16:42:07Z</dc:date>
    </item>
    <item>
      <title>Re: Disk</title>
      <link>https://community.netapp.com/t5/ONTAP-Hardware/Disk/m-p/158870#M10305</link>
      <description>&lt;P&gt;&lt;a href="https://community.netapp.com/t5/user/viewprofilepage/user-id/67570"&gt;@SpindleNinja&lt;/a&gt;&amp;nbsp;So I tried it and all is well. One thing I noticed that caused the issue: I had tried to do ADP before and used option 9b, which created RAID-TEC because the disks are 6TB and above. When that failed, I did not notice. However, while following the steps you gave I noticed it, so I performed 9a and 9c and then went through the normal setup. All works great thus far. Thanks for your help.&lt;/P&gt;</description>
      <pubDate>Fri, 21 Aug 2020 13:54:58 GMT</pubDate>
      <guid>https://community.netapp.com/t5/ONTAP-Hardware/Disk/m-p/158870#M10305</guid>
      <dc:creator>Riaad</dc:creator>
      <dc:date>2020-08-21T13:54:58Z</dc:date>
    </item>
    <item>
      <title>Re: Disk</title>
      <link>https://community.netapp.com/t5/ONTAP-Hardware/Disk/m-p/158871#M10306</link>
      <description>&lt;P&gt;Did it fail on the second controller? &amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;9c still gives you whole drives, which eats a lot of space for the root aggregates. &amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 21 Aug 2020 14:04:47 GMT</pubDate>
      <guid>https://community.netapp.com/t5/ONTAP-Hardware/Disk/m-p/158871#M10306</guid>
      <dc:creator>SpindleNinja</dc:creator>
      <dc:date>2020-08-21T14:04:47Z</dc:date>
    </item>
    <item>
      <title>Re: Disk</title>
      <link>https://community.netapp.com/t5/ONTAP-Hardware/Disk/m-p/158873#M10307</link>
      <description>&lt;P&gt;Nothing failed.&lt;/P&gt;
&lt;P&gt;I know that, but for the root it can't use option 9b, because a single disk is over 6TB, and when the disks are that large 9b does not work.&amp;nbsp; It gives an error that it can't create a root volume because only 5 of the 7 required disks are available. When I select 9c it creates a three-disk RAID-DP.&lt;BR /&gt;&lt;BR /&gt;&lt;BR /&gt;aggr0_Seclus_1_0&lt;BR /&gt;7.60TB 377.5GB 95% online 1 Seclus_1 raid_dp,&lt;BR /&gt;normal&lt;BR /&gt;aggr0_Seclus_2_0&lt;BR /&gt;7.60TB 377.5GB 95% online 1 Seclus_2 raid_dp,&lt;BR /&gt;normal&lt;BR /&gt;&lt;BR /&gt;&lt;/P&gt;</description>
      <pubDate>Fri, 21 Aug 2020 14:58:45 GMT</pubDate>
      <guid>https://community.netapp.com/t5/ONTAP-Hardware/Disk/m-p/158873#M10307</guid>
      <dc:creator>Riaad</dc:creator>
      <dc:date>2020-08-21T14:58:45Z</dc:date>
    </item>
    <item>
      <title>Re: Disk</title>
      <link>https://community.netapp.com/t5/ONTAP-Hardware/Disk/m-p/158875#M10308</link>
      <description>&lt;P&gt;Did the message look like this?&lt;/P&gt;
&lt;PRE&gt;Unable to create root aggregate: 5 disks specified, but at least 7 disks are required for raid_tec&lt;/PRE&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;This has the same symptoms as Bug 948840 - System initialization fails to create root aggregate in certain configurations&lt;/P&gt;
&lt;P&gt;&lt;A href="https://mysupport.netapp.com/site/bugs-online/product/ONTAP/BURT/948840" target="_blank"&gt;https://mysupport.netapp.com/site/bugs-online/product/ONTAP/BURT/948840&lt;/A&gt;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;But if you're really on ONTAP 9.7P6, it should have been fixed.&lt;/P&gt;
&lt;P&gt;In any case, there is a workaround, but you will need to contact Technical Support by opening a case and they will take you through it.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I do recommend you do this... using 7TB x 2 for the root aggregates is a tremendous waste of space!&lt;/P&gt;</description>
      <pubDate>Fri, 21 Aug 2020 16:09:49 GMT</pubDate>
      <guid>https://community.netapp.com/t5/ONTAP-Hardware/Disk/m-p/158875#M10308</guid>
      <dc:creator>andris</dc:creator>
      <dc:date>2020-08-21T16:09:49Z</dc:date>
    </item>
    <item>
      <title>Re: Disk</title>
      <link>https://community.netapp.com/t5/ONTAP-Hardware/Disk/m-p/158876#M10309</link>
      <description>&lt;P&gt;What&amp;nbsp;&lt;a href="https://community.netapp.com/t5/user/viewprofilepage/user-id/12128"&gt;@andris&lt;/a&gt;&amp;nbsp;said, on both counts.&amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 21 Aug 2020 16:13:26 GMT</pubDate>
      <guid>https://community.netapp.com/t5/ONTAP-Hardware/Disk/m-p/158876#M10309</guid>
      <dc:creator>SpindleNinja</dc:creator>
      <dc:date>2020-08-21T16:13:26Z</dc:date>
    </item>
    <item>
      <title>Re: Disk</title>
      <link>https://community.netapp.com/t5/ONTAP-Hardware/Disk/m-p/158880#M10310</link>
      <description>&lt;P&gt;&lt;a href="https://community.netapp.com/t5/user/viewprofilepage/user-id/12128"&gt;@andris&lt;/a&gt;&amp;nbsp;&lt;a href="https://community.netapp.com/t5/user/viewprofilepage/user-id/67570"&gt;@SpindleNinja&lt;/a&gt;&amp;nbsp;I&amp;nbsp;did have&amp;nbsp;NetApp Release 9.7P6 installed on the nodes, so why I am getting that error I have no idea. Maybe if I add AFF drives to the node's drive slots I could have created the root from there. Also, when I create aggregates I have to use RAID-TEC; there is no chance of getting RAID-DP since the disks are 10TB in size.&lt;/P&gt;</description>
      <pubDate>Fri, 21 Aug 2020 16:52:27 GMT</pubDate>
      <guid>https://community.netapp.com/t5/ONTAP-Hardware/Disk/m-p/158880#M10310</guid>
      <dc:creator>Riaad</dc:creator>
      <dc:date>2020-08-21T16:52:27Z</dc:date>
    </item>
    <item>
      <title>Re: Disk</title>
      <link>https://community.netapp.com/t5/ONTAP-Hardware/Disk/m-p/158884#M10311</link>
      <description>&lt;P&gt;If you'd like to address the root aggregate issues, please open a case and reference the bug.&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;We allow RAID-DP for partitioned root aggregates with 8TB+ disks, but RAID-TEC is &lt;STRONG&gt;mandatory&lt;/STRONG&gt; for data aggregates with 8TB+ disks.&lt;/P&gt;</description>
      <pubDate>Fri, 21 Aug 2020 17:48:03 GMT</pubDate>
      <guid>https://community.netapp.com/t5/ONTAP-Hardware/Disk/m-p/158884#M10311</guid>
      <dc:creator>andris</dc:creator>
      <dc:date>2020-08-21T17:48:03Z</dc:date>
    </item>
    <item>
      <title>Re: Disk</title>
      <link>https://community.netapp.com/t5/ONTAP-Hardware/Disk/m-p/158885#M10312</link>
      <description>&lt;P&gt;&lt;a href="https://community.netapp.com/t5/user/viewprofilepage/user-id/12128"&gt;@andris&lt;/a&gt;&amp;nbsp;That is what I was saying: my disks are 10TB and use RAID-TEC. That is why I could not use 9b at the boot menu, so I used 9c instead and it took three disks: 1 for the root vol and 2 for DP.&lt;/P&gt;</description>
      <pubDate>Fri, 21 Aug 2020 17:52:43 GMT</pubDate>
      <guid>https://community.netapp.com/t5/ONTAP-Hardware/Disk/m-p/158885#M10312</guid>
      <dc:creator>Riaad</dc:creator>
      <dc:date>2020-08-21T17:52:43Z</dc:date>
    </item>
    <item>
      <title>Re: Disk</title>
      <link>https://community.netapp.com/t5/ONTAP-Hardware/Disk/m-p/158886#M10313</link>
      <description>&lt;P&gt;That's why I asked if it failed on the second controller, as that's usually the case. &amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;I'd reach out to support for the fix, since you're on a code level where that bug should be fixed. &amp;nbsp;&lt;/P&gt;
&lt;P&gt;&amp;nbsp;&lt;/P&gt;
&lt;P&gt;It'll be worth it to gain the extra space back. &amp;nbsp;&lt;/P&gt;</description>
      <pubDate>Fri, 21 Aug 2020 17:55:33 GMT</pubDate>
      <guid>https://community.netapp.com/t5/ONTAP-Hardware/Disk/m-p/158886#M10313</guid>
      <dc:creator>SpindleNinja</dc:creator>
      <dc:date>2020-08-21T17:55:33Z</dc:date>
    </item>
  </channel>
</rss>

