ONTAP Discussions

Rebuild root aggr

xGeox

Hello Team,

 

I have a FAS220 with an external DS2246 SAS shelf containing 24 disks. This is a single-controller system; we need to add a second controller and rebuild the root aggr so that we end up with 2 root aggrs.

What we are considering is running option 4 from the boot menu. However, the SAS shelf must not be wiped, so we will disconnect the shelf first; once the clean-up is finished we intend to reconnect the existing shelf to the new HA pair and, hopefully, everything will keep working as expected.

What do you think, guys?

1 ACCEPTED SOLUTION

xGeox

All,

These are the steps that I followed:

I created a new volume that will be flagged as the new root volume.
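For reference, a minimal sketch of that step (the aggregate name and size here are only illustrative; size the new root volume to at least the platform's minimum root volume size):

>vol create new_rootvol data_aggr 250g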

Validate the current root volume (usually vol0) and the new root volume:

vol status

Ensure that the ndmpd daemon is turned on using the ndmpd on command on the filer.
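On the CLI that is simply:

>ndmpd on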

Copy the entire /etc directory from the current root volume to the new root volume:

>ndmpcopy /etc /vol/new_rootvol/etc

Check the status of the NDMP copy with >ndmpd status [note it can take several minutes depending on how much data is in /etc].

Once it is done, flag the new root volume as the root volume:

vol options new_rootvol root

Rename the old and new root volumes so the new volume takes over the old name. The syntax is:

>vol rename <old_name> <new_name>
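For example (the old-volume name here is illustrative; use whatever your current root volume is actually called):

>vol rename vol0 vol0_old
>vol rename new_rootvol vol0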

Reboot the filer
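That is just:

>reboot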

Verify the new root volume settings:

vol status

Once the new vol is the root volume, it should already be sitting in a data aggr [you must set the root flag on that data aggr in Maintenance mode ONLY].

Halt the node again, boot into Maintenance mode, and with the following commands flag the data aggr that holds the new root vol as root, so it becomes the root aggr on the next boot.
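In case it helps, this is roughly how to reach Maintenance mode (the firmware prompt may look slightly different on your controller):

>halt
LOADER> boot_ontap
[press Ctrl-C when prompted for the boot menu, then choose option 5, Maintenance mode]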

In Maintenance mode, check the status of the aggregates with:

*>aggr status

Once you have confirmed the aggr is online and available to change, proceed with the next command:

*>aggr options [aggrname] root
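For example, if the data aggregate is called data_aggr (the name is illustrative):

*>aggr options data_aggr root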

Check again that the data aggr is tagged as root:

*>aggr status

Then you can destroy the original root aggr so its disks can be reassigned to the new controller.
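A rough sketch of that last part from Maintenance mode (the aggregate name, disk names and sysid are placeholders; double-check ownership with disk show before destroying anything):

*>aggr offline aggr0
*>aggr destroy aggr0
*>disk show -v                                          [confirm which disks are now spare]
*>disk assign <disk_name> -s <new_controller_sysid> -f  [repeat for each disk you want to move]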

 

NOTE: this procedure is only for when you don't have enough spares to initialize the second root aggr for the new controller.

 

 


5 REPLIES

GidonMarcus

Hi. I'm not familiar with a model with the exact name "FAS220", so I'm not sure whether it's 7-Mode or cDOT.

For 7-Mode that should work. For cDOT it will not work - see: https://community.netapp.com/t5/ONTAP-Discussions/migrating-data-from-old-to-new-cluster/td-p/158623

Gidi Marcus (Linkedin) - Storage and Microsoft technologies consultant - Hydro IT LTD - UK

xGeox

The platform is FAS2220, and what I'm trying to do is add a second controller. All the data will be moved to the SAS shelf, then we're going to destroy the root aggr (basically option 4), and then recreate 2 root aggrs with 2 root volumes on the internal chassis disks in order to have an HA pair.

We can't add it right now because we don't have enough disks available to create an aggr; we want to split the internal disks, half to one controller and half to the other. Then, once everything is cabled and working as expected, we intend to add the external shelf back [that shelf is not clean, it holds all the data], plug it in, and hopefully NetApp recognizes it and keeps serving the data on it.

My concern is whether this procedure is valid; I want to verify that the sysid recorded on the external disks will still match and that there won't be any issue or confusion between the NetApp controllers and the external shelf.
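To see what the external disks currently record for ownership, I plan to check it from the CLI (assuming 7-Mode syntax):

>disk show -v      [lists each disk with its owner and the owning system ID]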

xGeox

The current version is 7-mode 8.2.4P4

 

NW-NetApp> sysconfig
NetApp Release 8.2.4P4 7-Mode: Thu Jun 23 14:51:17 PDT 2016
System ID: 1889442466 (NW-NetApp); partner ID: 0000000000 ()
System Serial Number: 600000214746 (NW-NetApp)
System Rev: D0
System Storage Configuration: Mixed-Path HA
System ACP Connectivity: Partial Connectivity
Backplane Part Number: DS212
Backplane Rev:
Backplane Serial Number: 4591222279
slot 0: System Board
Model Name: FAS2220
Processors: 4
Processor type: Intel(R) Xeon(R) CPU C3528 @ 1.73GHz
Memory Size: 6144 MB
Memory Attributes: Hoisting
Normal ECC
Controller: A
Service Processor Status: Online
slot 0: Internal 10/100 Ethernet Controller
e0M MAC Address: 00:a0:98:36:5c:d3 (auto-100tx-fd-up)
e0P MAC Address: 00:a0:98:36:5c:d2 (auto-100tx-fd-up)
slot 0: Quad Gigabit Ethernet Controller 82580
e0a MAC Address: 00:a0:98:36:5c:ce (auto-1000t-fd-up)
e0b MAC Address: 00:a0:98:36:5c:cf (auto-1000t-fd-up)
e0c MAC Address: 00:a0:98:36:5c:d0 (auto-1000t-fd-up)
e0d MAC Address: 00:a0:98:36:5c:d1 (auto-unknown-down)
slot 0: Interconnect HBA: Mellanox IB MT25204
slot 0: SAS Host Adapter 0a
36 Disks: 23907.6GB
1 shelf with IOM6, 1 shelf with IOM6E
slot 0: SAS Host Adapter 0b
24 Disks: 13737.0GB
1 shelf with IOM6
slot 0: Intel USB EHCI Adapter u0a (0xdf101000)
boot0 Micron Technology 0x655, class 0/0, rev 2.00/11.10, addr 2 1936MB 512B/sect (4BF0022700202885)

GidonMarcus

Hi.

 

The disconnected AGGR should be able to get recognized again when connected. In 7-Mode the metadata only exists on the disks themselves and not on the root/boot partitions.

 

The only concern I would have is about licensing. Do you have 8.2+ and HA licences for both controllers? An ONTAP 8.2 licence only works for the controller s/n it has been issued to, and only with licences specifically generated for ONTAP 8.2+.

If these systems were sold with an earlier version and you don't have the 28-character licence keys for both controllers - DO NOT PROCEED with the option 4 plan.
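If I remember the 8.2 7-Mode syntax correctly (please verify on your system), you can list the installed licences with:

>license show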

Gidi Marcus (Linkedin) - Storage and Microsoft technologies consultant - Hydro IT LTD - UK
