ONTAP Discussions

How to create Root aggregate across external shelf during initialization

YUGO-S

Hi everyone,
First time posting here.

On a new FAS2720 (9.5P5) with a DS212C shelf, I would like to configure ADP using a total of 24 NL-SAS disks.
I executed steps 9a and 9b from the boot menu, but a kernel panic occurred after step 9b.
When the kernel panic occurred, the following message was displayed:

------------------------------------------------------------------------------------------------------------------------------------
Jun 27 17:48:57 [localhost: raid.autoPart.start: notice]: System has started auto-partitioning 6 disks.
Jun 27 17:48:58 [localhost: raid.autoPart.done: notice]: Successfully auto-partitioned 6 of 6 disks.
Unable to create root aggregate: 5 disks specified, but at least 7 disks are required for raid_tec
------------------------------------------------------------------------------------------------------------------------------------

According to the HWU, the FAS2720 can be configured with ADP across 24 disks.
Is this due to a bug in the product, or is there a problem with my procedure?

As a side note, I believe this problem only occurs when an external shelf is connected,
because the same procedure completes without issue when the DS212C is not connected.

 

Could you please help?
Many thanks


TMACMD

Do this

First, boot both nodes to the boot menu.

Second, choose option 9 on both and wait for both to reach the option 9 submenu.

Third, perform 9a on one node and wait for the prompt to come back.

Fourth, perform 9a on the second node and wait for the prompt to come back.

On the second node you should see all 24 disks listed. If you don't, you will need to go into maintenance mode (option 5) and remove all ownerships that do not belong to either node (a rough sketch follows below). Then redo the first four steps.
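
A rough sketch of that cleanup in maintenance mode (the disk name is a placeholder; take the real names from "disk show -a"):

-----------------------------------------------------------------------------
*> disk show -a                       (lists every disk with its owner system ID)
*> disk remove_ownership 0b.01.3      (repeat for each disk with a foreign owner)
*> halt                               (then boot_ontap menu to get back to the boot menu)
-----------------------------------------------------------------------------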

 

After you see 24 disks you could repeat 9a on both nodes again to be extra sure

 

Then, on one node only, choose option 9b. Wait for it to reboot, and ideally wait for the partitioning to start. Then do option 9b on the second node.

 

Hope this helps

YUGO-S
Thank you for your reply.
I already tried the same procedure but it didn't work.

The kernel panic can be avoided by executing the following command at the LOADER prompt, but then the root aggregate is created only on the internal disks.
 
-----------------------------------------------------------------------------
setenv bootarg.raid.allow-raid-tec? false
-----------------------------------------------------------------------------
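
For reference, this is how I set and confirmed the variable before booting (printenv lists the current environment variables):

-----------------------------------------------------------------------------
LOADER> setenv bootarg.raid.allow-raid-tec? false
LOADER> printenv                  (check that bootarg.raid.allow-raid-tec? is false)
LOADER> saveenv
-----------------------------------------------------------------------------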

Is there any way to create a root aggregate that spans the disks in the external shelf at initialization time?

SpindleNinja

What size disks do you have? 

 

YUGO-S

I have 10TB NL-SAS disks.

SpindleNinja

Also, did this shelf ship with the 2720? 

 

TMACMD

OK, so I'm at a PC now. Some questions and comments:

 

 

1. As Mike indicated, were the FAS and the shelf shipped together? -> It should have shipped with ADP by default, unless the shelf was an "add-on" and not "configured" (indicated by a "-C" in the part number on the quote). At least then the drives would/should all be spares (although on rare occasions, I have had new drives show up with an odd owner!).

 

2. Try these steps to see if they help. I presume you are running ONTAP 9.5P5 based on the original message.

a. Get both systems to the LOADER> prompt.

b. Run "set-defaults" to clear anything that is/was set beyond normal, then type "saveenv".

c. On both controllers, from the LOADER> prompt, run "boot_ontap menu" -> takes you to the boot menu.

d. On one node, choose option 5 -> maintenance mode.

e. Run "sysconfig -av". Look near the top and make sure that Multipath-HA is configured!! If not, correct the cabling and retry!

f. Once back at the prompt, run "disk show -a" -> shows all disks, including partitions and ownerships.

g. If you see more than two system IDs, then there is an issue: you will need to run commands to properly remove ownership of the "foreign" drives (see the sketch after this list). If you see only two system IDs, then "halt".

h. Boot the node back to the menu -> boot_ontap menu.

i. From the menu, choose option 9 on both controllers. Wait for both nodes to be at the option 9 submenu.

j. On controller A/node 1, choose option 9a. -> This will find all disks it owns and any unowned disks, remove all partitions, and then remove ownerships (it will not modify disks owned by any other node).

k. Wait for this to finish before doing the other controller!

l. On the other controller, choose option 9a. -> This should now find *all* disks, assign itself as owner, remove all partitions, and then remove ownership. It should list all 24 disks. If not, it is likely that disks are owned by another node outside the HA pair.

m. On node 1, do option 9b. Wait for the node to reboot and until you see it partitioning drives

n. On node 2, do option 9b. When it finishes, you should be OK.
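
For reference, steps e through g look roughly like this in maintenance mode (the disk name is a placeholder):

-----------------------------------------------------------------------------
*> sysconfig -av                      (confirm Multipath-HA near the top)
*> disk show -a                       (expect only the two HA-pair system IDs)
*> disk remove_ownership 0b.01.5      (only for disks owned by a foreign system ID)
*> halt
LOADER> boot_ontap menu               (back to the boot menu for options 9/9a/9b)
-----------------------------------------------------------------------------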

 

It would be very helpful if you captured all of this in a text file and posted it back as an attachment in case you hit issues.

YUGO-S

TMAC CTG,

Sorry for late reply.

 

I tried the procedure you described, but it still did not work.
I will attach a log from when I performed the initialization; could you give me some advice?

TMACMD

Well, that was enlightening. Thanks for the logs.

 

I think your best bet right now is to actually open a case. I suspect you are hitting a bug, as those directions should have worked as expected.

 

I'm still looking around, but since I'm on vacation I'm not looking too hard until I get back to a PC many hours from now.

 

Please post what becomes of this.

 

There is likely another special environment variable that needs to be set (which, in my opinion, should not have to be set beyond the system default). The HWU clearly indicates that what you are doing should work AND that it should be using RAID-DP for the root aggregates only.

 

That is why I believe what you're seeing is a bug.

 

Please let us know!

TMACMD

Specifically, this line from the node 1 log file

 

AdpInit: Root will be created with 6 disks with configuration as (2d+3p+1s) using disks of type (FSAS).

That should be RAID-DP for root. The (2d+3p+1s) layout means three parity partitions, i.e. RAID-TEC; with the spare excluded, only 5 partitions are left for the raid group, below the 7-disk minimum for RAID-TEC, which is exactly the panic you saw.

YUGO-S

SpindleNinja,

 

I carried out the procedure you described, but the problem was not resolved.
Incidentally, the shelf was shipped with the FAS2720.

SpindleNinja

Worth a shot.   I think it's more related to what TMAC found in the logs.   I would open a case.   

 

The HWU is showing RAID-DP for the 2720 with 10TB drives for the root aggrs, not RAID-TEC.

TMACMD

 

First, PLEASE be sure to open a support ticket. I believe this to be a BUG. It needs to be tracked and fixed.

 

OK, you *might* be able to fix it this way:

 

1. Get BOTH nodes to the Boot Menu

-> LOADER> boot_ontap menu

2. Wait for the menu

3. Choose Option 9 on both controllers (remove all partitions and ownerships)

4. Perform option 9a on one controller. Once it finishes, perform it on the second node.

5. Now, instead of doing option 9b, choose option 9e (return to main menu) on both nodes.

6. Boot into MAINTENANCE mode (Option 5) on both nodes

7. Manually assign 12 disks to each node

-> First, run "disk show" on each node and record the system IDs (node 1 should be 538048277 and node 2 should be 538046447).

-> From ONE controller, assign all disks

-> disk assign -n 12 -s 538048277

-> disk assign -n 12 -s 538046447

-> If these commands do not work, you will probably need to manually assign 12 disks to each node (see the sketch after this list).

-> You can either assign disks 0-5 in both shelves to node 1 and disks 6-11 in both shelves to node 2,

-> or assign all 12 disks in shelf 0 to node 1 and all 12 disks in shelf 1 to node 2.

-> Look at ownership on BOTH controllers now

-> disk show (perform on both)

-> all 24 disks should be owned but not partitioned (yet).

8. Halt both controllers.

-> halt (on both)

9. Boot both controllers back to the boot menu

-> boot_ontap menu

10. From the Boot Menu on NODE 1, choose option 4. Verify prompts and let it reboot

-> WATCH the messages!!!

-> Hopefully, the disks will partition automatically.
11. If you see that they do, go ahead and start node 2 with option 4
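
If you do end up assigning disk by disk in step 7, it looks roughly like this (the disk names are placeholders; use the names from your own "disk show -n" output):

-----------------------------------------------------------------------------
*> disk show -n                       (lists the unowned disks)
*> disk assign 0a.00.0 -s 538048277   (repeat for the 12 disks meant for node 1)
*> disk assign 0a.00.6 -s 538046447   (repeat for the 12 disks meant for node 2)
*> disk show                          (verify 12 disks per system ID)
-----------------------------------------------------------------------------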

 

Please record the entire session (dump it to logs again, thanks!).

If this does not partition automatically, we might be able to guide you through manual partitioning.

I'd rather not and hope the auto-partition works.

 

Please let us know

 

PS -> I still think you are hitting a BUG. This platform, according to the HWU, should use RAID-DP (and not RAID-TEC) for creation of the root aggregate. If this process works, it will likely still use RAID-TEC, which is still OK. You are not really losing anything!

 

YUGO-S

Sorry for late reply.

After opening the case, I received an answer from NetApp, so I'll share it.

 

The conclusion is that the information in the HWU is incorrect, and the FAS2720 cannot be configured with ADP using 24 disks.

In addition, NetApp said that they have asked the department in charge to correct the HWU.

 

Eventually, I created an aggregate using all 24 disks with the following procedure:

 

- Perform the FAS2720 initialization without the external shelf connected
(ADP was configured with 6 disks per node (3 data, 2 parity, 1 spare))
- Shut down the FAS2720, connect the external shelf, and start it again
- Create a data aggregate using only the P1 partitions of the FAS2720 internal disks
- Add the disks in the external shelf, other than the spares, to the created aggregate
(the disks in the external shelf were partitioned automatically)
- As a result, an aggregate was created using the FAS2720 P1 partitions plus all non-spare disks in the external shelf
(the root aggregate was not expanded onto the P2 partitions of the external-shelf disks; those partitions remained as spares)
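
For reference, the aggregate steps correspond roughly to these cluster-shell commands (the aggregate name and disk counts are illustrative, not the exact values I used):

-----------------------------------------------------------------------------
::> storage aggregate create -aggregate data1 -node <node1> -diskcount 10
::> storage aggregate add-disks -aggregate data1 -diskcount 12
::> storage aggregate show-spare-disks
-----------------------------------------------------------------------------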


Thank you very much for your support.

TMACMD

Thanks for the update

aborzenkov

@YUGO-S wrote:

The conclusion is that the information in the HWU is incorrect, and the FAS2720 cannot be configured with ADP using 24 disks.

Well, then it has changed recently:

 

ADP still limited to internal disks on FAS26XX with Ontap 9.2?

andris

To recap:

  1. Entry-level FAS systems (with HDDs) will only perform ADP on the internal/embedded drives.
    • AFF systems will use ADP on up to 48 SSDs across multiple shelves. However, for the AFF A220, you will typically see ADP on the 24 internal SSDs only, because NetApp Mfg. initializes the system without any add-on storage.
  2. 12-drive entry-level FAS systems (FAS2520, 2620, 2720) with HDD sizes that require RAID-TEC need the RAID-TEC bootarg explicitly set to "false" (i.e. LOADER> setenv bootarg.raid.allow-raid-tec? false) for internal-drive ADP, to ensure that ADP uses 3d+2p+1s P1 partitions on each node.
    Important: Once you have initialized the system, set this bootarg back to true so that any additional large-HDD RGs use RAID-TEC for optimal protection. (The LOADER sequence is sketched below.)
  3. As you noted, if you expand an RG with partitions, spare whole disks will be automatically partitioned. That said, you would avoid the wasted P1 partition "tax" if you simply created a new RG with your external-shelf drives as whole disks.
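
Putting item 2 together, the LOADER sequence around initialization would be:

-----------------------------------------------------------------------------
LOADER> setenv bootarg.raid.allow-raid-tec? false
LOADER> saveenv
LOADER> boot_ontap menu               (then initialize from the boot menu)

(after initialization completes)

LOADER> setenv bootarg.raid.allow-raid-tec? true
LOADER> saveenv
-----------------------------------------------------------------------------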

SpindleNinja

The HWU needs to be corrected then. It looks like it's showing ADP for 24 drives on the 2720.

aborzenkov

@SpindleNinja wrote:

The HWU needs to be corrected then. It looks like it's showing ADP for 24 drives on the 2720.


Which is correct as long as there are no internal drives. Moreover, the HWU seems to be smart enough not to show the 12-drive version at all for drives that do not fit into the controller enclosure.

 

What would be useful is a comment on whether the shown configuration applies to internal or external drives.

SpindleNinja

Maybe “appended” then? 

Claas

Hello everyone,

 

I ran into the same error today: a FAS2720 with 8 TB SATA disks. I accordingly followed the 24-disk approach, which solved the problem. The new root aggregates were nevertheless created as RAID-DP with 5 disks. I'm also thinking this is a bug. I captured console logs and will open a case for this as soon as I have some time. This was on 9.5P8, i.e. the current release.

 

I will update when I have news.

 

Best regards, Claas
