ONTAP Discussions

Volumes were found that must be changed to online state before attempting NDU

tmferg

Attempting an NDU from 8.3.1 to 9.1, and in the validation phase the error in the subject line is reported.

 

volume show -state !online -state !restricted reports one volume named vol0 on my second node.
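For reference, that same filter reproduces the validation check. The output below is only illustrative, with placeholder cluster and node names, but a root volume listed with no Aggregate is the pattern in question here:

cluster1::> volume show -state !online -state !restricted
Vserver   Volume    Aggregate    State      Type       Size  Available Used%
--------- --------- ------------ ---------- ---- ---------- ---------- -----
node-02   vol0      -            offline    RW            -          -     -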

 

I found this KB: Volumes were found that must be changed to online state before attempting NDU - NetApp Knowledge Base

 

But it just says to contact support. Does anyone know how to correct this error? Our support contract ran out, and I don't know how long it will take to be reinstated after the purchase is approved.


SpindleNinja

Volumes do have to be online for upgrades (or deleted).

The question, though, is why it's vol0. How many vol0s do you have?

tmferg

So if I just do a vol show, node 1 shows a vol0 with aggregate/size/available/used% populated; node 2 also shows a vol0, but with no aggregate/size/available/used data populated.

 

I have tried to bring node 2's vol0 online with volume modify -vserver "node 2" -volume vol0 -state online, and it reports "aggregate not found".
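A hedged aside: "aggregate not found" suggests the cluster's volume database (VLDB) record for that vol0 no longer points at a real aggregate. One way to see what the VLDB holds for it, with placeholder names, is a fields query:

cluster1::> volume show -vserver node-02 -volume vol0 -fields aggregate,state,type
vserver  volume aggregate state   type
-------- ------ --------- ------- ----
node-02  vol0   -         offline RW

An empty aggregate field there would line up with the modify command failing.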

SpindleNinja

Is the cluster healthy and not in takeover or anything? 

tmferg

Yes, healthy and not in takeover. No issues are reported in the upgrade validation besides this one offline volume.

SpindleNinja

If you drop into the node shell on 02, does vol0 look OK/healthy?

tmferg

I'm not sure if I did it right or not, but doing system node run -node "node 2" and then vol status, vol0 does not appear. My other volumes all look good in there.
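For reference, the nodeshell check being described looks roughly like this, with a placeholder node name; exit drops you back to the clustershell:

cluster1::> system node run -node node-02
Type 'exit' or 'Ctrl-D' to return to the CLI
node-02> vol status
node-02> exit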

SpindleNinja

Are there any volumes that show as root under Options?

For example:

WOPR-02> vol status
         Volume State           Status                Options
           vol0 online          raid_dp, flex         root, nvfail=on, space_slo=none
                                64-bit

 

 

tmferg

AUTOROOT is showing as root. On node 1 I have vol0 as root, nvfail=on.
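A hedged way to confirm which aggregate each node actually boots its root volume from, independent of what the VLDB lists, is the has-mroot filter; the names below are placeholders:

cluster1::> storage aggregate show -has-mroot true -fields node
aggregate    node
------------ -------
aggr0        node-01
ROOT_node_02 node-02

Whatever root volume lives in that aggregate (AUTOROOT here) is the one the node is really using, which would make any other vol0 record the likely leftover.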

SpindleNinja

Do you know if someone ever attempted to move to a new root aggr on this system?  

tmferg

No, I don't know. Nobody currently working for the company was here when this system was last touched.

SpindleNinja

Got ya. 

 

AUTOROOT is usually a remnant of node recovery or a root aggr move.  

 

Let me see what else I can dig up that might help. But a support case would be best here.

 

tmferg

Gotcha. I appreciate your help on this!

SpindleNinja

Following up on this: unfortunately, I wasn't able to find anything that could be posted.

 

A few other questions, though:

 

Is this part of a migration or something, i.e., do you have to upgrade this system in order to migrate to a new ONTAP system?

 

What's the output from the following?

 

set d; debug vreport show
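Expanded slightly, since set d drops the session to diagnostic privilege and asks for confirmation; it's worth returning to admin privilege when done. A minimal sequence, with a placeholder cluster prompt:

cluster1::> set -privilege diagnostic
Warning: These diagnostic commands are for use by NetApp personnel only.
Do you want to continue? {y|n}: y
cluster1::*> debug vreport show
cluster1::*> set -privilege admin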

 

 

tmferg

Not migrating, just upgrading ONTAP on our cluster to keep it current.

Output from the set d; debug vreport show:

 


aggregate Differences:

Name          Reason                                          Attributes
------------- ----------------------------------------------- --------------------------------------------------
ROOT_node_02  Present both in VLDB and WAFL with differences
              Node Name: node-02
              Aggregate UUID: e6e37fff-1793-49a7-984d-13674db15be6
              Aggregate State: online
              Aggregate Raid Status: raid_dp
              Differing Attribute: Volume Count (Use commands 'volume add-other-volume'
                and 'volume remove-other-volume' to fix 7-Mode volumes on this aggregate)
              WAFL Value: 1
              VLDB Value: 0
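The thread doesn't record the exact command used for the eventual fix. The vreport output above points at the advanced-privilege volume add-other-volume / volume remove-other-volume commands, and debug vreport also has a fix subcommand at diag privilege that reconciles a single reported difference. The sketch below is an assumption along those lines, not a record of what was actually run, and it is the kind of step normally done with support on the line:

cluster1::> set -privilege diagnostic
cluster1::*> debug vreport show
cluster1::*> debug vreport fix -type aggregate -object ROOT_node_02
cluster1::*> debug vreport show
cluster1::*> set -privilege admin

The -type and -object values are placeholders; they come from the Name column of the debug vreport show output, and the second show confirms the difference is cleared.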

tmferg

Just wanted to let you know this was resolved by deleting the stale vol0 entry. 

SpindleNinja

Good to hear; the vreport is a handy command. Sorry for the delay in replying, I was out of the country last week.
