ONTAP Discussions

Reconfigure aggr0 root aggregates on controllers 1 and 2

JuanCarlosrl

Hi guys.


We have an array running ONTAP 9.7P6. The controller has 12 internal disks plus a shelf with 30 disks. The configuration is:

 

aggr0_controller1: disks 1,3,5,7,9 (+ 11 as hot spare), type SHARED, with ADP (root and data partitions)

aggr0_controller2: disks 0,2,4,6,10 (+ 12 as hot spare), type SHARED, with ADP (root and data partitions)


The remaining 30 disks are type SPARE and not partitioned.

 

I want the aggr0 of each controller to use fewer disks: 4 disks in RAID-DP per controller instead of 6. The 4 disks I recover would join the other 30 spare disks, giving me 34 disks with which to build two data aggregates of 16 disks each.


I have migrated the aggr0 root volumes, which now look like this:

aggr0_controller1 --> disks 1,3,5,7 (RAID-DP)

aggr0_controller2 --> disks 0,2,4,8 (RAID-DP)


but now disks 8, 9, 10 and 11 are still type SHARED and still have root and data partitions, and when I try to create two data aggregates with the other 30 disks it does not let me select them. I have tried to change those 4 disks from SHARED to SPARE, but without success.
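For reference, this is the kind of check I have been running from the cluster shell to look at those disks; I assume the container type and the partition-ownership view are the relevant things to look at (the <disk_id> values are placeholders, not my real device names):

  • storage disk show -disk <disk_id> -fields container-type,owner
  • storage disk show -partition-ownership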


What I have done is the following: I rebooted controller 2, entered the boot menu (Ctrl-C), and selected option 9 (ADP).
I used option 9a to remove ownership and partitions, then selected option 9b, since I want the aggr0 root volume on partitioned disks but the rest of the disks left unpartitioned, so I can add them to data aggregates.
Once this was done, I rebooted the controller and ran boot_ontap, but it no longer starts; it keeps rebooting, automatically enters the boot menu again, and never gets past it.
It does not automatically install ONTAP on the 4 disks that I want.

Then I went into boot menu option 5 (Maintenance mode) and assigned ownership of those 4 disks (0,2,4,6) to controller 2, then rebooted, but it still does not install.
Finally I returned to option 5 (maintenance), then went to the LOADER and ran boot_recovery. It seems to want to start the ONTAP installation, but it re-selects the disks it wants (0,2,4,6,8 + 10 HS), not the ones I want, and at the end it shows an error saying it could not create the root volume.


Right now, if I restart controller 2, it has no root aggregate and does not boot; it stops at the boot menu.


What would be the procedure to install ONTAP so that controller 1 uses ADP disks 1,3,5,7 in RAID-DP, controller 2 uses ADP disks 0,2,4,6 in RAID-DP, and the rest of the disks (8 to 42) remain type SPARE without partitions, so that I can create two data aggregates, one per controller, to store the data?

 

Many thanks,

 



SpindleNinja

Can you specify the disk type (SATA/SAS) and the controller model?

But note that RAID-DP requires a minimum of 5 disks.

JuanCarlosrl

FAS2720, and the disks are 10TB SATA.

 

On our other FAS2720 storage, there are 4 SSDs for a storage pool and the remaining disks are 8TB SATA.
I believe that one has a 4-disk RAID-DP aggr0 for each controller, am I right?

 

Other storage:

 

[Screenshot attached: JuanCarlosrl_0-1608297341309.png]

 

TMACMD

Realize that with ADP you are minimizing the amount of space the ROOT aggr uses. If you were to use 4 disks instead of 6 (which you can't), the root partitions would just get bigger.

 

On the FAS2720 using the 10TB drives, you are in a special case. Normally for >6TB drives you need 4+3 (7 drives) as a minimum. ONTAP will let the ROOT aggr build with 3D+2P+1S (as long as there are no external drives attached!). This is what you have, and it gives the best capacity with 10TB drives. ADP on the FAS2720 with LARGE (>6TB) drives will NOT partition properly with *any* external drives attached! You must detach them if you wish to use option 9a/9b again, but realize it will *STILL* end up with 3D+2P+1S for the root partitions.

 

You can then assign all DATA partitions to one node and make a single large AGGR (which will use RAID-TEC). You would have:

ROOT_01 = 168.3 GiB

ROOT_02 = 168.3 GiB

AGGR_01 = 63.59TiB
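As a rough sketch, once all the data partitions are owned by one node, that single large aggregate could be created from the CLI along these lines (the aggregate name, node name and partition count are placeholders, not exact values for this system):

  • storage aggregate create -aggregate <aggr_01> -node <node_01> -diskcount <count> -raidtype raid_tec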

 

It would be useful to better describe your disks. What I can gather is this:

Internal 12 x 10TB

DS460C shelf with 4x960GB and 26x8TB

Is that right?

 

If it is, then I would expect another, larger aggregate of 141.36TiB on the 8TB drives, leaving the SSDs for a Storage Pool (to be used as a Flash Pool).
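For the SSD part, a minimal sketch of how a Storage Pool is typically created and then attached to an aggregate as a Flash Pool (the pool name, aggregate name, SSD names and allocation-unit count below are placeholders, adjust to your layout):

  • storage pool create -storage-pool <sp1> -disk-list <ssd1>,<ssd2>,<ssd3>,<ssd4>
  • storage aggregate modify -aggregate <aggr_8tb> -hybrid-enabled true
  • storage aggregate add-disks -aggregate <aggr_8tb> -storage-pool <sp1> -allocation-units 2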

 

I end up with 204.95TiB usable capacity

 

JuanCarlosrl

We have a FAS2720 with 12 x 10TB disks and a DS460C shelf with 30 disks connected.


With this disk configuration, is it not possible to build a 4-disk RAID-DP root aggregate for each controller?

 

If so, then the storage was correctly configured from the factory. I thought it could be done like on the other FAS2720 I have with 8TB disks, which has 4 disks for controller 1 and another 4 disks for controller 2, as shown in the image I attached earlier.

 

If I can't do what I want, how can I now get controller 2 back to having its aggr0 root aggregate created?

I have run options 9a and 9b from the boot menu, and they cleaned the disks owned by controller 2; the disks of controller 1 are fine and it still has its aggr0 volume.

After doing 9a and 9b, I halted controller 2 to restart it, but I don't know how to proceed so that the software is installed automatically and aggr0 is created; it does nothing, and on restart it goes straight into the boot menu.
I have tried entering option 5 (maintenance mode), then the LOADER, and running the boot_recovery command. It seems to attempt the installation automatically on the disks it chooses (without letting me pick which ones, or the type) and then ends with an error saying it failed to create the root volume.

 

Thanks for your help

TMACMD

This is 100% destructive: it will remove all data and configuration information!!!

 

***see warning above***

The best way to fix this:

  1. Log into support.netapp.com and collect your license keys. You will need to re-add them.
  2. Get both systems to the LOADER-X> prompt.
  3. Detach any external drives!!!
  4. At the LOADER-X> prompt, on both controllers, type: boot_ontap menu
  5. They should both boot to the ONTAP boot menu.
  6. Select option 9 on both controllers.
  7. On the FIRST node ONLY: select 9a (and let it finish, returning to the prompt!)
  8. On the SECOND node ONLY: select 9a (and let it finish, returning to the prompt!)
  9. For verification, on the FIRST node ONLY: select 9a again (and let it finish, returning to the prompt!)
    1. It should indicate it found and cleared all 12 disks.
  10. On the FIRST node ONLY: select 9b (and wait for the node to actually start the reboot).
  11. On the SECOND node ONLY: select 9b (it will also reboot).
  12. They should now build out correct RAID-DP root aggrs (3D+2P+1S for each controller).
  13. You will need to set up ONTAP again.
  14. Be sure to insert your license codes.
  15. You will likely need to re-assign some of the data partitions so they are all on the same controller (see the sketch after this list).
  16. You are free to split the drives on the external shelf however you see fit.
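Once both nodes are back up, a quick sketch for confirming the rebuilt root layout from step 12 and seeing where the data partitions landed (the aggregate name is a placeholder; use whatever names ONTAP generated on your system):

  • storage aggregate show-status -aggregate <root_aggr_name>
  • storage disk show -fields data-owner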

JuanCarlosrl

I have to reinstall ONTAP for controller 2 to work properly, is that correct?

The good thing is that this array is not in production.

 

Regarding steps 11 and 12, what do I have to do? Once option 9b is done, how do I install the ONTAP image? Do I just restart the two controllers after running 9b and ONTAP is installed directly, without doing anything else? Do I have to select an option to install, or is it automatic?


Do I need to have an ONTAP image and install it from some NetApp option? I am lost at this point.

 

Another question:

If I want to use 5 disks in RAID-DP, how do I get that configuration? You indicated previously that it would end up as 3D+2P+1S for each controller; is that automatic? Is there any possibility of it being 2D+2P+1S?

 

Thanks, 

JuanCarlosrl

I see this NetApp document:

https://library.netapp.com/ecmdocs/ECMP1636022/html/GUID-DAEB7BDC-4E6E-4CEE-9B3D-F40962C3D428.html#:~:text=The%20minimum%20number%20of%20disks,%2Dpari...

 

The minimum number of disks in a RAID-DP group is three: at least one data disk, one regular parity disk, and one double-parity (dParity) disk. However, for non-root aggregates with only one RAID group, you must have at least 5 disks (three data disks and two parity disks).

 

How would I go about selecting 3 disks for installation for each node?

 

JuanCarlosrl

Today I tried the steps you indicated, but I was only able to do them in one way. Next I will do it as you indicate, first on controller 1 and then on controller 2.


What I did, on controller 2, was: boot menu option 9, then 9a, then 9b, and it rebooted. When it restarted I could see that it automatically takes disks 0,2,4,6,8,10 without me telling it anything, assigns them, partitions them, but then it gives an error: the aggr0 volume could not be created. Logs attached.

I need help, how can I continue?


The storage came from the factory with 6 disks assigned to controller 1 and the other 6 to controller 2; in the end I want to leave it as it was initially.

 

I also want to leave controller 1 as it came from the factory. I migrated its root aggregate from the 6 disks it had assigned (1,3,5,7,9,11) to a 4-disk RAID-DP (1,3,5,7). How can I put it back to the original layout?

 

 

LOGS CONTROLLER 2:

 

AdpInit: Root will be created with 6 disks with configuration as (2d+3p+1s) using disks of type (FSAS).
bootarg.bootmenu.selection is |4a|
AdpInit: System will now perform initialization using option 4a
BOOTMGR: The system has 0 disks assigned whereas it needs 6 to boot, will try to assign the required number.
sanown_assign_X_disks: assign disks from my unowned local site pool0 loop
sanown_assign_disk_helper: Assigned disk 0b.00.10
Cannot do remote rescan. Use 'run local disk show' on the console of ?? for it to scan the newly assigned disks
Dec 21 08:01:13 [localhost:diskown.RescanMessageFailed:error]: Could not send rescan message to ??.
sanown_assign_disk_helper: Assigned disk 0b.00.4
sanown_assign_disk_helper: Assigned disk 0b.00.6
sanown_assign_disk_helper: Assigned disk 0b.00.0
sanown_assign_disk_helper: Assigned disk 0b.00.8
sanown_assign_disk_helper: Assigned disk 0b.00.2
BOOTMGR: already_assigned=0, min_to_boot=6, num_assigned=6


.Dec 21 08:01:22 [localhost:raid.disk.fast.zero.done:notice]: Disk 0b.00.4 Shelf 0 Bay 4 [NETAPP X380_HLBRE10TA07 NA01] S/N [1EJEJV9N] UID [5000CCA2:7E895C50:00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000] : disk zeroing complete (0x5fe056520edce8a2).
Dec 21 08:01:22 [localhost:raid.disk.fast.zero.done:notice]: Disk 0b.00.0 Shelf 0 Bay 0 [NETAPP X380_HLBRE10TA07 NA01] S/N [4DGT3RNZ] UID [5000CCA2:A22BDBA4:00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000] : disk zeroing complete (0x5fe056524c5e5be6).
Dec 21 08:01:22 [localhost:raid.disk.fast.zero.done:notice]: Disk 0b.00.2 Shelf 0 Bay 2 [NETAPP X380_HLBRE10TA07 NA01] S/N [4DGS9M9Z] UID [5000CCA2:A22A62A0:00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000] : disk zeroing complete (0x5fe0565246e38035).
Dec 21 08:01:22 [localhost:raid.disk.fast.zero.done:notice]: Disk 0b.00.8 Shelf 0 Bay 8 [NETAPP X380_HLBRE10TA07 NA01] S/N [4DGT0AKZ] UID [5000CCA2:A22BA8BC:00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000] : disk zeroing complete (0x5fe0565201f63bef).
Dec 21 08:01:22 [localhost:raid.disk.fast.zero.done:notice]: Disk 0b.00.10 Shelf 0 Bay 10 [NETAPP X380_HLBRE10TA07 NA01] S/N [4DGSLEWZ] UID [5000CCA2:A22AE724:00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000] : disk zeroing complete (0x5fe056522053eb86).
Dec 21 08:01:22 [localhost:raid.disk.fast.zero.done:notice]: Disk 0b.00.6 Shelf 0 Bay 6 [NETAPP X380_HLBRE10TA07 NA01] S/N [4DGSJH6Z] UID [5000CCA2:A22AC9C0:00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000] : disk zeroing complete (0x5fe05652658bbafe).
Dec 21 08:01:25 [localhost:raid.autoPart.start:notice]: System has started auto-partitioning 6 disks.
Dec 21 08:01:26 [localhost:raid.partition.disk:notice]: Disk partition successful on Disk 0b.00.0 Shelf 0 Bay 0 [NETAPP X380_HLBRE10TA07 NA01] S/N [4DGT3RNZ] UID [5000CCA2:A22BDBA4:00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000], partitions created 2, partition sizes specified 1, partition spec summary [2]=16347584.
.Dec 21 08:01:28 [localhost:raid.partition.disk:notice]: Disk partition successful on Disk 0b.00.2 Shelf 0 Bay 2 [NETAPP X380_HLBRE10TA07 NA01] S/N [4DGS9M9Z] UID [5000CCA2:A22A62A0:00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000], partitions created 2, partition sizes specified 1, partition spec summary [2]=16347584.
Dec 21 08:01:29 [localhost:raid.partition.disk:notice]: Disk partition successful on Disk 0b.00.4 Shelf 0 Bay 4 [NETAPP X380_HLBRE10TA07 NA01] S/N [1EJEJV9N] UID [5000CCA2:7E895C50:00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000], partitions created 2, partition sizes specified 1, partition spec summary [2]=16347584.
Dec 21 08:01:31 [localhost:raid.partition.disk:notice]: Disk partition successful on Disk 0b.00.6 Shelf 0 Bay 6 [NETAPP X380_HLBRE10TA07 NA01] S/N [4DGSJH6Z] UID [5000CCA2:A22AC9C0:00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000], partitions created 2, partition sizes specified 1, partition spec summary [2]=16347584.
.Dec 21 08:01:33 [localhost:raid.partition.disk:notice]: Disk partition successful on Disk 0b.00.8 Shelf 0 Bay 8 [NETAPP X380_HLBRE10TA07 NA01] S/N [4DGT0AKZ] UID [5000CCA2:A22BA8BC:00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000], partitions created 2, partition sizes specified 1, partition spec summary [2]=16347584.
Dec 21 08:01:34 [localhost:raid.partition.disk:notice]: Disk partition successful on Disk 0b.00.10 Shelf 0 Bay 10 [NETAPP X380_HLBRE10TA07 NA01] S/N [4DGSLEWZ] UID [5000CCA2:A22AE724:00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000], partitions created 2, partition sizes specified 1, partition spec summary [2]=16347584.
Dec 21 08:01:35 [localhost:raid.autoPart.done:notice]: Successfully auto-partitioned 6 of 6 disks.
Unable to create root aggregate: 5 disks specified, but at least 7 disks are required for raid_tec

Root aggregate creation failed.
Aborting Initialization.
Rebooting... (press ctrl-c during boot to reattempt initialization)
Uptime: 1m58s
System rebooting...
BIOS Version: 11.7
Portions Copyright (C) 2014-2018 NetApp, Inc. All Rights Reserved.

Initializing System Memory ...

 

 

TMACMD

That is not what I said to do in the post. You DO NOT do one node (9, then 9a, then 9b) and then the other node (9, 9a, then 9b). They are done somewhat together: do 9 on both and wait for the menu on both; do 9a on the first node and wait for it to finish; do 9a on the second node and wait for it to finish; do 9b on the first node and wait for it to actually start rebooting; then do 9b on the second node.

 

Re-read the post and follow the directions as indicated.

JuanCarlosrl

Thank you very much. I did what you told me and ONTAP installed correctly on both controllers, and the disk configuration came out as 3D+2P+1S RAID-DP.


My question is whether it is possible to build a 1D+2P RAID-DP on each controller, because I have not seen any way to choose the disks; the installation does it automatically and does not give me a choice.


On other storage systems I have seen that configuration set from the factory.

TMACMD

You are lacking detail. For any help to be useful, you need to supply more details about your config.

Do you have a local NetApp Partner or NetApp SE to work with?

You probably need to end up removing ownership of all the "data" partitions and assigning them to one node. Then decide if you want to split the remaining drives between nodes.

For example, from the CLI:

storage disk option modify -node * -autoassign off

disk show -fields data-owner

(for each drive listed for node-02)

  • disk removeowner -data true -disk <disk_id>
  • disk assign -data true -disk <disk_id> -owner <node_01>

Then decide if you want to split the remaining 30 drives:

  • disk assign -count 15 -owner <node_01>
  • disk assign -count 15 -owner <node_02>

Then create your aggregates:

  • aggr auto-provision -nodes node-01,node-02 -verbose true
  • This will tell you exactly what it is trying to do. If you do not like it, you can always use the CLI to create the aggregates yourself (a sketch follows below).
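As a minimal sketch, creating the two aggregates manually for the 15/15 split above could look like this (the aggregate names, node names, disk count and RAID type are examples only; adjust them to what auto-provision proposes for your drives):

  • storage aggregate create -aggregate <aggr1_node01> -node <node_01> -diskcount 15 -raidtype raid_tec
  • storage aggregate create -aggregate <aggr1_node02> -node <node_02> -diskcount 15 -raidtype raid_tec
  • storage aggregate show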

JuanCarlosrl

 

The storage is now up and working.

 

My question is about how the storage came configured from the factory:

  • Controller 1: 3D+2P+1S, 6 disks total for aggr0
  • Controller 2: 3D+2P+1S, 6 disks total for aggr0

I wanted to know whether that could be undone and a new installation done that dedicates to controller 1 only a 3-disk aggregate for aggr0, and another 3-disk aggregate to controller 2 for its aggr0, all of type RAID-DP.

This is just for knowledge for future installations.

 

 
