ONTAP Discussions

Volumes were found that must be changed to online state before attempting NDU

tmferg

Attempting an NDU from 8.3.1 to 9.1, and in the validation phase the subject error is reported.

 

volume show -state !online -state !restricted reports one volume, named vol0, on my second node.

 

I found this KB: Volumes were found that must be changed to online state before attempting NDU - NetApp Knowledge Base

 

But it just says to contact support. Does anyone know how to correct this error? Our support contract ran out, and I don't know how long it will take to be reinstated after the purchase is approved.


SpindleNinja

Volumes do have to be online for upgrades (or deleted).

The question though is why it's vol0.    How many vol0s do you have? 
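Something like this will list every vol0 the cluster knows about and the aggregate each one claims to live on (the cluster prompt below is just a placeholder):

cluster1::> volume show -volume vol0 -fields aggregate,state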

tmferg

So if I just do a vol show, node 1 shows a vol0 with aggregate/size/available/used% populated; node 2 also shows a vol0, but with no aggregate/size/available/used data populated.

 

I have tried to bring node 2's vol0 online with volume modify -vserver "node 2" -volume vol0 -state online, and it reports that the aggregate is not found.
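For anyone following along, comparing that against the aggregates the cluster actually has is roughly this (the names in the output are made up, not ours):

cluster1::> storage aggregate show -fields state
aggregate  state
---------- ------
aggr0_n1   online
aggr0_n2   online
data_aggr1 online
3 entries were displayed.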

SpindleNinja

Is the cluster healthy and not in takeover or anything? 
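i.e. cluster show should list both nodes as healthy and eligible, and storage failover show shouldn't show either node in takeover:

cluster1::> cluster show
cluster1::> storage failover show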

tmferg

Yes, healthy and not in takeover. No issues reported in the upgrade validation besides this one offline volume.

SpindleNinja

If you drop into the node shell on 02, does vol0 look OK/healthy?

tmferg

I'm not sure if I did it right or not, but doing system node run -node "node 2" and then vol status, vol0 does not appear. My other volumes all look good in there.
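For reference, the sequence I ran was roughly this (node name is a placeholder for ours):

cluster1::> system node run -node node-02
Type 'exit' or 'Ctrl-D' to return to the CLI
node-02> vol status
node-02> exit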

SpindleNinja

Are there any volumes that show as root under Options?

example - 

WOPR-02> vol status
         Volume State           Status                Options
           vol0 online          raid_dp, flex         root, nvfail=on, space_slo=none
                                64-bit

 

 

tmferg

AUTOROOT is showing as root. On node 1 I have vol0 as root, nvfail=on.

SpindleNinja

Do you know if someone ever attempted to move to a new root aggr on this system?  

tmferg

No I don't know. Nobody currently working for the company was here when this system was last touched. 

SpindleNinja

Got ya. 

 

AUTOROOT is usually a remnant of node recovery or a root aggr move.  

 

Let me see what else I can dig up that might help.   But a support case would be best here.  

 

tmferg

Gotcha. I appreciate your help on this!

SpindleNinja

Following up on this: unfortunately, I wasn't able to find anything that could be posted.

 

A few other questions though -

 

Is this part of a migration or something? i.e.  having to upgrade this system to migrate to a new ONTAP system?

 

What's the output from the following -

 

set d; debug vreport show
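(debug vreport is a diag-privilege command, so the full sequence looks roughly like this; drop back to admin when you're done:)

cluster1::> set -privilege diagnostic
cluster1::*> debug vreport show
cluster1::*> set -privilege admin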

 

 

tmferg

Not migrating, just upgrading the firmware on our cluster to keep it current.

Output from the set d; debug vreport show:

 


aggregate Differences:

Name          Reason                                          Attributes
------------- ----------------------------------------------- ---------------------------------------------------
ROOT_node_02  Present both in VLDB and WAFL with differences
                  Node Name: node-02
                  Aggregate UUID: e6e37fff-1793-49a7-984d-13674db15be6
                  Aggregate State: online
                  Aggregate Raid Status: raid_dp
                  Differing Attribute: Volume Count (Use commands 'volume add-other-volume' and 'volume remove-other-volume' to fix 7-Mode volumes on this aggregate)
                  WAFL Value: 1
                  VLDB Value: 0

tmferg

Just wanted to let you know this was resolved by deleting the stale vol0 entry. 
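For anyone who hits this later, re-running the original check is a quick way to confirm the cleanup; the "no entries" message below is what you want to see before retrying the NDU validation:

cluster1::> volume show -state !online -state !restricted
There are no entries matching your query.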

SpindleNinja

Good to hear; the vreport is a handy command. Sorry for the delay in replying; I was out of the country last week.
