Simulator Discussions

Root volume is damaged

bashirgas
55,790 Views

Hi,

 

I have a two-node cluster (vsim 8.3) running on VMware Fusion. While doing some exercises I changed the MTU on my cluster ports using network port broadcas.... After a few minutes one of the nodes went down, and when it came back up the following message appeared.
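For reference, the change was something along these lines (the broadcast domain name and MTU value below are placeholders, not necessarily exactly what I typed):

::*> network port broadcast-domain modify -ipspace Cluster -broadcast-domain Cluster -mtu <new-mtu>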

 

***********************
** SYSTEM MESSAGES **
***********************

The contents of the root volume may have changed and the local management
configuration may be inconsistent and/or the local management databases may be
out of sync with the replicated databases. This node is not fully operational.
Contact support personnel for the root volume recovery procedures

 

After a while, the other node showed the same message.

 

I cannot see my root volume in the clustershell, and I cannot use any of the subcommands of vol/volume....

 

Has anyone had a similar problem with vsim 8.3?

How can I recover or repair my root volumes in the clustershell?

 

If I enter the nodeshell, I have no option to create additional volumes using the vol command...

 

I'm stuck.

 

Thank you

Bash

1 ACCEPTED SOLUTION

SeanHatfield
55,541 Views

OK. You need to try to revive the RDBs. Shut down the non-epsilon node, halt the epsilon node, and boot to the LOADER. Unset the boot_recovery bootarg and see if it will come back up.
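To get both nodes down to the LOADER, something like this from the clustershell (node names are placeholders, and you may need to confirm warnings or add flags such as -inhibit-takeover true if storage failover is enabled):

::*> system node halt -node <non-epsilon-node>
::*> system node halt -node <epsilon-node>

Then, at the epsilon node's LOADER prompt: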

 

unsetenv bootarg.init.boot_recovery

If the epsilon node comes back up, make sure the cluster ports are set to MTU 1500, then try to bring up the other node.
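To double-check the MTU once the node is up, something along these lines (assuming the default Cluster ipspace and broadcast domain names):

::*> network port show -ipspace Cluster -fields mtu
::*> network port broadcast-domain modify -ipspace Cluster -broadcast-domain Cluster -mtu 1500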

 

 



25 REPLIES

SeanHatfield
9,495 Views

That's the response you get when you try to unset a variable that has not been set. Maybe something changed, or maybe you are hitting a different scenario. What's the message you get after a normal boot?
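You can verify whether the bootarg is actually set before trying to unset it, e.g.:

LOADER> printenv bootarg.init.boot_recovery

If nothing is set there, you are likely in a different scenario.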

 


Greg_Wilson
8,870 Views

This option is no longer there in 9.5.

 

Any ideas how to recover the cluster from a power outage?

christian_ruppert
8,408 Views

unsetenv bootarg.rdb_corrupt

haopengl
6,853 Views

1.  Bring the node to the LOADER prompt:

::*>halt -node <node>

2.  Check to see if the following bootargs have been set:

LOADER>printenv bootarg.init.boot_recovery
LOADER>printenv bootarg.rdb_corrupt

3.  If either bootarg has been set to a value, unset it and boot ONTAP:

LOADER>unsetenv bootarg.init.boot_recovery
LOADER>unsetenv bootarg.rdb_corrupt
LOADER>bye
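
Once ONTAP is booted, a quick way to confirm the replicated databases have recovered (advanced privilege is needed for the ring view):

::*> set -privilege advanced
::*> cluster ring show
::*> cluster show

Each ring should report a master with matching epochs, and both nodes should show as healthy and eligible in cluster show.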

SeanHatfield
6,836 Views

The simloader has no saveenv, so when you bye, none of the changes will be saved. Instead you have to boot and let it proceed to at least the boot menu for changes made at the loader prompt to be persistent.
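So on the sim, the persistent version of the procedure looks roughly like this (boot menu wording varies a little between releases):

LOADER> unsetenv bootarg.init.boot_recovery
LOADER> unsetenv bootarg.rdb_corrupt
LOADER> boot_ontap

Let it boot at least as far as the Ctrl-C boot menu prompt before you consider the change saved, then continue with a normal boot.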
