We have a FAS8200 that came with ONTAP 9.3 (yes, it sat for a while before being configured). We ran the setup wizard but did not configure the system as a two-node cluster for failover, and wanted to re-run setup as if the system were new. We booted the top controller, pressed Ctrl-C for the boot menu, and selected option 4:
(4) Clean configuration and initialize all disks.
Once that finished, the system tried to boot, but multiple disk errors scrolled across the screen stating that the system cannot access the drives. We tried again, selecting option 4a in an attempt to rebuild the system LUN, but it states "must have at least 3 drives" and shows 0 drives available.
*Important: before we selected option 4, the initial setup wizard ran fine and completed. We then ran the same wizard on the bottom controller, which also completed successfully. But now only the bottom of the two controllers seems to work correctly.
What can we do to get this top controller back? The end goal is to use these as a two-node HA pair. We also plan to update the software to ONTAP 9.7, but that does not seem possible with the first controller in this state.
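In case it helps anyone retracing this: before reinitializing yet again, it is worth confirming from maintenance mode (boot menu option 5) whether the controller can even see the drives and their paths. A rough sketch from memory — exact command names and output formatting vary by ONTAP release, so treat this as a pointer rather than a recipe:

```
*> sysconfig -a           # list hardware/adapters; confirm the SAS HBAs are present
*> disk show -v           # list the disks this node can see, with ownership details
*> storage show disk -p   # show primary/secondary paths per disk (good cabling check)
```

If `disk show` returns nothing at all, the problem is visibility (cabling, shelf, or adapter); if the disks appear but are owned by another system ID, the problem is ownership.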
Not sure I totally understand what you mean by "Ran the setup wizard but did not set it up as a 2 node cluster for failover and wanted to re-run as if it were new", but it sounds like your drive shelves may not be cabled correctly. The setup poster is available at https://library.netapp.com/ecm/ecm_download_file/ECMLP2316769 and is a good starting point for reviewing the cabling.
If the cabling checks out, I would recommend going back to the boot menu, choosing option 9b, and letting it start again.
You may get pushback if you call support for help with initial setup — this is something we encourage customers to use our professional services (or a partner's) for if they are unfamiliar with it.
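For reference, on ONTAP 9 the main boot menu goes up to option 8, and option 9 (Configure Advanced Drive Partitioning) opens a submenu. From memory, the relevant entries look roughly like this — the exact wording varies between releases, so check your own console output:

```
(9a) Unpartition all disks and remove their ownership information.
(9b) Clean configuration and initialize node with partitioned disks.
```

Running 9a first clears out old partition and ownership state, which is why it is typically paired with 9b on systems that use Advanced Drive Partitioning.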
We double-checked the cabling, but indeed, using option 9a and then 9b eventually solved the problem. I had to do so on both controllers A and B. What had happened is that I tried to create the two-node cluster on node B while node A did not have connectivity. I believe the pair got into a "headless" state during initial configuration, and two controller IDs with the same cluster name confused things.
9a and 9b are now running; once controller B is good, I'll repeat the process on A and we should be back to square one.
If not, then yes, it will be time to open a support case. Thank you for the help!
It turns out something is still going very wrong. Even after successfully running 9a and 9b from the good node, the first node still cannot see its drives: all 120 show as "failed" on node 1, but on node 2 they all show up fine.
I opened a case and am working with NetApp support at this point.
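For anyone who hits the same symptom later: all drives showing "failed" on one node while the partner sees them fine is often a disk-ownership problem rather than a hardware fault. From maintenance mode, ownership can be inspected and, with care, corrected. A rough sketch from memory — command names and behavior vary by release, and reassigning ownership on the wrong node can destroy data, so only do this under support's guidance:

```
*> disk show -a                 # every disk with its current owner and system ID
*> disk show -n                 # disks with no owner
*> disk assign all              # claim all unowned disks for this node
*> disk remove_ownership <disk> # strip ownership from a disk before reassigning
```

In a case like this one, where the same cluster name ended up on two mismatched controllers, the disks can remain tagged with a stale system ID that neither node will claim, which matches the "0 drives available" message from option 4a.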