ONTAP Discussions

FAS2040 Broken Disk

neil2048

I recently acquired a FAS2040 dual-controller system, and I have just about managed to get it all configured, with one exception: I want all the storage to be on one controller, with the other purely as failover. Currently I only have the main unit with 12 disks.

 

When setting up the secondary controller, it took 3 disks for the root volume, where I only wanted it to have 2 (RAID4). I reconfigured aggr0 (the root vol) for RAID4 and it released a disk from the aggregate, but left it owned by the secondary controller. This is where I think I did the wrong thing: I used the disk remove command, trying to release the ownership of the spare disk so I could allocate it to the primary controller, except it marked the disk as broken. I know full well the disk isn't broken, as it was fine prior to the disk remove command. How can I revive this disk so I can reuse it? (A sketch of what I was trying to do is below.)

 

I have tried reseating it and it came back in the same state.
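
For reference, this is roughly what I was trying to do (a sketch assuming 7-Mode with software disk ownership; the "secondary>"/"primary>" prompts are placeholders and 0b.00.11 is just an example disk name, yours will differ):

    secondary> aggr options aggr0 raidtype raid4      # drop the root aggr from RAID-DP to RAID4, freeing one disk
    secondary> priv set advanced
    secondary*> disk assign 0b.00.11 -s unowned -f    # release ownership of the freed spare
    primary> disk assign 0b.00.11                     # claim it on the primary controller
    primary> aggr status -s                           # confirm it now shows as a spare there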

 

thanks

 


4 REPLIES

TMACMD

This has been asked before, either here or on Reddit.

 

For any root aggregate, you must have 2 parity disks and one data disk. No way around it.

neil2048

Hi

 

Thanks, but it did let me remove the disk from the aggregate. Also, my query is how to re-enable the broken disk (a sketch of how I'm checking its state is below).
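
In case it helps, this is how I'm seeing the state (a sketch assuming 7-Mode; the prompt is a placeholder hostname):

    secondary> sysconfig -r        # the disk shows up under "Broken disks"
    secondary> disk show -v        # shows ownership of every disk, including the broken one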

 

regards

 

pedro_rocha

You could use something like disk unfail. But this could lead to no benefit, as the disk might just fail again right after. A sketch is below.
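
(A minimal sketch of that, assuming 7-Mode; disk unfail is an advanced-privilege command, and the prompt and disk name 0b.00.11 are just examples:)

    fas2040> priv set advanced
    fas2040*> disk unfail -s 0b.00.11    # clear the failed state; -s returns it to the spare pool
    fas2040*> aggr status -s             # confirm it appears as a spare
    fas2040*> priv set admin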

neil2048

Thanks, I managed to sort it by dropping the owning controller into a maintenance boot and using the disk remove_ownership command (rough steps below).
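
Roughly the steps, for anyone who finds this later (a sketch from memory, assuming 7-Mode; the prompts and the disk name are examples):

    secondary> halt                      # drop to the LOADER prompt
    LOADER> boot_ontap                   # press Ctrl-C when prompted for the boot menu
    (choose option 5, Maintenance mode)
    *> disk remove_ownership 0b.00.11    # release the broken disk's ownership
    *> halt                              # then boot normally
    primary> disk assign 0b.00.11        # claim the disk on the primary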

 

It's all running as I wanted now: 9 drives on the main controller, 1 spare, and the secondary controller (failover only) running on 2 drives in a RAID4 config. I've completed some test failovers and reboots and all seems good.

 

Thanks for all the helpful comments and suggestions. 🙂

 

 
