ONTAP Hardware
Hi,
First, I am new to NetApp; this is my second implementation, so forgive me if I mess something up.
Setup environment:
FAS2020 with 2 controllers, 12 x 1 TB disks.
I configured an active-active cluster and everything was fine, but since we had 2 controllers and 2 default aggregates, the customer was a bit disappointed with the total storage space (4 disks lost to RAID-DP parity, 4 more to hot spares, so only 4 or at most 6 disks remain for data).
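Roughly the math as I see it with the default layout (assuming one or two hot spares per controller):

filer1: 6 disks = 2 RAID-DP parity + 1-2 hot spare + 2-3 data
filer2: 6 disks = 2 RAID-DP parity + 1-2 hot spare + 2-3 data
usable for data: 4-6 disks out of 12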
I tried to unassign disks from filer2 and assign them to filer1, and I succeeded, and of course I also succeeded in erasing the root volume of filer2. Now filer2 cannot boot (no root volume).
The only hope is that one disk is in an "unowned" state, and that disk (0c.00.02) was the last disk on filer2.
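I assume I can confirm the ownership state from the working filer with the standard 7-mode ownership commands, something like:

fas1> disk show -n    (should list 0c.00.02 as not owned)
fas1> disk show -v    (full ownership listing for all disks)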
Is there any way to restore the root volume on filer2? Can I use this disk (0c.00.02) somehow? Or at least, can I restore the factory defaults? The customer is satisfied with the storage space, but I am not, since they now don't have active-active, and I am sure problems will occur.
Thanks in advance.
OK, problem solved.
I just unassigned 2 disks from the first filer (so 3 disks unowned in total), consoled to filer2, interrupted the boot with Ctrl-C to get the special boot menu, and reinstalled Data ONTAP from there:
=================================================
Please choose one of the following:
(1) Normal boot.
(2) Boot without /etc/rc.
(3) Change password.
(4) Initialize owned disk (1 disk is owned by this filer).
(4a) Same as option 4, but create a flexible root volume.
(5) Maintenance mode boot.
Selection (1-5)? 4a
The system has 1 disks assigned whereas it needs 3 to boot, will try to assign the required number.
Fri Mar 4 13:33:13 GMT [diskown.changingOwner:info]: changing ownership for disk 0c.00.11 (S/N 9QJ8QL5S) from unowned (ID -1) to (ID 135115575)
Fri Mar 4 13:33:14 GMT [diskown.changingOwner:info]: changing ownership for disk 0c.00.10 (S/N 9QJ8QL3T) from unowned (ID -1) to (ID 135115575)
DBG: SANOWN: total number of disks assigned = 2
Zero disks and install a new filesystem? y
This will erase all the data on the disks, are you sure?
Please answer yes or no.
This will erase all the data on the disks, are you sure? y
Zeroing disks takes about 280 minutes.
=================================================
By default, the 4a option will create a RAID-DP aggregate, so it will use 2 drives for parity, 1 for data, and 1 as a spare. In a small environment like this, where the second controller is just an active standby, you may want to downgrade to RAID4. This removes one of the parity drives, which you can then reassign to the first controller:
fas2> aggr options new_aggr raidtype raid4
fas2> disk assign <disk ID> -s unowned
fas1> disk assign <disk ID>
fas1> disk zero spares (or add to an aggregate)
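To sanity-check afterwards, something along these lines should show the new layout and the spare counts (standard 7-mode status commands; exact output varies by release):

fas2> aggr status -r new_aggr    (RAID type and disk layout of the aggregate)
fas1> aggr status -s             (spare disks now visible on the first controller)
fas1> sysconfig -r               (overall RAID and spares view per controller)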
Thank you, Michael.
In fact, this is what I have done over the last few days: I assigned 6 disks to each controller and configured RAID4 with 1 hot spare, so now I have 2 aggregates with 4 data disks each, one aggregate per filer.
Active-active, and everyone is happy.
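For anyone finding this later, the rough sequence on each head was something like the following (I'm using aggr0 as the default root aggregate name and the prompts from the earlier reply; adjust disk counts to your layout):

fas1> aggr options aggr0 raidtype raid4    (drop RAID-DP to RAID4, which frees one parity disk as a spare)
fas1> aggr add aggr0 3                     (grow the aggregate to 4 data + 1 parity, leaving 1 hot spare)
fas1> aggr status -r aggr0                 (verify the layout)

Same again on fas2 with its own 6 disks.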
But I was doing a little math (9 disks on the first filer and 3 on the other, plus some other combinations), and as far as I can tell, 12 x 1 TB SATA disks are best suited to a single controller (in fact, I think 12 SATA disks in an active-active pair is more for marketing purposes than real production).
Is there any way to set up a configuration with active/standby filers, let's say all 12 disks assigned to the first filer and the second one there only to take over if the first becomes unavailable?