ONTAP Discussions
I'm attempting an NDU from ONTAP 8.3.1 to 9.1, and in the validation phase the error in the subject line is reported.
'volume show -state !online -state !restricted' reports one volume, named vol0, on my second node.
I found this KB: Volumes were found that must be changed to online state before attempting NDU - NetApp Knowledge Base
But it just says to contact support. Does anyone know how to correct this error? Our support contract has lapsed, and I don't know how long it will take to be reinstated once the renewal purchase is approved.
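For reference, this is the exact check from the validation step, as typed at the cluster shell (the prompt here is generic, not our actual cluster name):

    cluster::> volume show -state !online -state !restricted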
Volumes do have to be online for upgrades (or deleted).
The question, though, is why it's vol0. How many vol0s do you have?
So if I just do a 'volume show', node 1 shows a vol0 with aggregate/size/available/used% populated; node 2 also shows a vol0, but with no aggregate/size/available/used data populated.
I have tried to bring node 2's vol0 online with 'volume modify -vserver <node 2> -volume vol0 -state online', and it reports that the aggregate is not found.
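In case it helps, here's roughly what I ran to compare the two entries before the modify attempt (node names are placeholders for our actual ones):

    cluster::> volume show -volume vol0 -fields node,aggregate,state
    cluster::> volume modify -vserver node-02 -volume vol0 -state online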
Is the cluster healthy and not in takeover or anything?
Yes healthy and not in takeover. No issues reported in the upgrade validation besides this one offline volume.
If you drop into the node shell on 02, does vol0 look OK/healthy?
I'm not sure if I did it right, but after 'system node run -node <node 2>' and then 'vol status', vol0 does not appear. My other volumes all look good in there.
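For anyone following along, the sequence I used (node name is a placeholder; 'system node run' with no command drops you into the node shell, and 'exit' returns to the cluster shell):

    cluster::> system node run -node node-02
    node-02> vol status
    node-02> exit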
Are there any volumes that show as root under Options?
Example:

    WOPR-02> vol status
             Volume State    Status          Options
               vol0 online   raid_dp, flex   root, nvfail=on, space_slo=none
                             64-bit
AUTOROOT is showing as root. On node 1 I have vol0 as root, with nvfail=on.
Do you know if someone ever attempted to move to a new root aggr on this system?
No I don't know. Nobody currently working for the company was here when this system was last touched.
Got ya.
AUTOROOT is usually a remnant of node recovery or a root aggr move.
Let me see what else I can dig up that might help. But a support case would be best here.
Gotcha. I appreciate your help on this!
Following up on this: unfortunately, I wasn't able to find anything that could be posted.
A few other questions, though -
Is this part of a migration or something, i.e. having to upgrade this system so you can migrate to a new ONTAP system?
What's the output from the following?
set d; debug vreport show
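('set d' is shorthand for 'set -privilege diagnostic'. The vreport compares the volume and aggregate records in the cluster's VLDB against what WAFL on each node actually has, so it should show exactly where the two disagree about that vol0. Don't run anything else at diag privilege without guidance.)

    cluster::> set -privilege diagnostic
    cluster::*> debug vreport show
    cluster::*> set -privilege admin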
Not migrating, just upgrading ONTAP on our cluster to keep it current.
Output from 'set d; debug vreport show':

    aggregate Differences:

    Name          Reason                                          Attributes
    ------------- ----------------------------------------------- ----------
    ROOT_node_02  Present both in VLDB and WAFL with differences
                  Node Name: node-02
                  Aggregate UUID: e6e37fff-1793-49a7-984d-13674db15be6
                  Aggregate State: online
                  Aggregate Raid Status: raid_dp
                  Differing Attribute: Volume Count (Use commands
                    'volume add-other-volume' and 'volume remove-other-volume'
                    to fix 7-Mode volumes on this aggregate)
                  WAFL Value: 1
                  VLDB Value: 0
Just wanted to let you know this was resolved by deleting the stale vol0 entry.
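For anyone who finds this later: the stale record was on the VLDB side (the node shell had no vol0 for that node), and the cleanup was done at diag privilege. A rough sketch of the shape of it is below, but treat it as illustrative only and involve support before running anything like this: 'debug vreport fix' is the diag-level companion to 'debug vreport show' for reconciling VLDB/WAFL differences, the exact -type/-object arguments come from your own vreport output, and the placeholder below is not a literal value.

    cluster::> set -privilege diagnostic
    cluster::*> debug vreport show
    cluster::*> debug vreport fix -type volume -object <stale vol0 entry from vreport>
    cluster::*> set -privilege admin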
Good to hear; the vreport is a handy command. Sorry for the delay in replying, I was out of the country last week.