ONTAP Discussions
Hi, I have a client with old FAS8080s running ONTAP 9.1P16 (an out-of-support system).
After a DC move, the system failed to come up. I rebooted Node1, and during the reboot Node2 took over Node1's LIFs and aggregates and started serving data, as it should. (Node2 is still serving data.)
After Node1 started, I tried to do a giveback (the system stated "ready for giveback"). The root aggregate failed back, BUT it shows an unknown status on Node2. The data aggregate and the LIFs did not fail back (this is a NAS-only cluster). The commands available on Node1 are VERY limited; I cannot even run "disk zerospares" (there are 3 spare disks on the system that need zeroing). I tried various commands from Node1, but they all seem to fail with "unknown" errors. The only LIF it sees is its own node management LIF.
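For anyone triaging a similar half-completed giveback, a hedged sketch of the clustershell commands one might run to see the current state (ONTAP 9.x syntax; the `#` lines are annotations only, not part of the CLI, and the exact fields may differ by release):

```
::> storage failover show                      # per-node takeover/giveback state
::> storage aggregate show                     # look for aggregates not "online"
::> storage disk show -container-type spare    # list the spares awaiting zeroing
::> storage disk zerospares                    # clustershell equivalent of the nodeshell "disk zerospares"
```

If Node1's clustershell itself returns "unknown" errors, that usually suggests the node is not fully healthy in the cluster quorum, which is worth confirming before attempting another giveback.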
I was thinking of removing the node from the cluster and then rejoining it; however, I am unsure what the impact will be on the data aggregate owned by this node. Should I reassign all the resources to Node2? I cannot find any document explaining the steps needed to remove a node from a cluster without destroying all of its resources.
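Not from the original poster, but for context: before a node can be unjoined, its data aggregates and data LIFs generally have to be moved to the surviving node first. A hedged sketch of that reassignment, assuming ONTAP 9.x and hypothetical object names (`aggr1_node1`, `lif1`, node names `Node1`/`Node2` are placeholders; the `#` lines are annotations, not CLI):

```
::> storage aggregate relocation start -node Node1 -destination Node2 -aggregate-list aggr1_node1
                                               # nondisruptively relocate the data aggregate
::> network interface modify -vserver svm1 -lif lif1 -home-node Node2
::> network interface revert -vserver svm1 -lif lif1
                                               # rehome and revert data LIFs to Node2
::*> cluster unjoin -node Node1                # advanced privilege; refuses to run while
                                               # the node still owns data aggregates or epsilon
```

Whether relocation even works when the partner node is in this degraded state is uncertain, which is one reason the advice below to avoid unjoining is sound.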
If needed, I can attach the logs from both controllers.
Regards
Did you sort this issue?
Nope, not yet.
Is the cluster network (Clusternet) connected OK, with no errors? Either way, avoid removing the node from the cluster.
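A hedged sketch of how one might check the cluster network from the clustershell (ONTAP 9.x; `Node1` is a placeholder and the `#` lines are annotations, not CLI):

```
::> cluster show                               # health and eligibility of each node
::> network interface show -role cluster       # cluster LIFs up and home?
::*> set -privilege advanced
::*> cluster ping-cluster -node Node1          # end-to-end test of all cluster-interconnect paths
```

If `cluster show` reports Node1 as unhealthy or ineligible, that would be consistent with the "unknown" errors and the missing LIFs described above.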
Have you opened a support case yet? That is probably your best bet.
Okay, BUT the system is out of warranty, out of support, and end of life. (They are old 8040s.)
Do you mind sharing screenshots of the current state of the aggregates? Is it the root aggregate that is in the unknown state, is that correct? Which shelf does it belong to? Are the shelves all up, with no errors? Basically, can the controller see the shelf (the one whose aggregate is in the unknown state)?
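The checks being asked about can be sketched as the following clustershell commands (ONTAP 9.x; `aggr0_node1` is a hypothetical root-aggregate name and the `#` lines are annotations, not CLI):

```
::> storage shelf show                                   # shelf state and connectivity errors
::> storage aggregate show -aggregate aggr0_node1 -fields state,nodes
                                                         # which node sees the aggregate, and its state
::> storage disk show -aggregate aggr0_node1             # can the controller see all member disks?
```

If the member disks are missing or only partially visible, that points at shelf cabling or path problems after the DC move rather than a cluster-membership issue.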
Hi, I am closing this, as no workable solution was found. The customer is busy migrating the data to a new ONTAP 9 system.