Volumes were found that must be changed to online state before attempting NDU
2022-06-27 07:48 AM
Attempting an NDU from 8.3.1 to 9.1, and in the validation phase the subject error is reported.
volume show -state !online -state !restricted reports 1 volume with a name of vol0 on my second node.
I found this KB: Volumes were found that must be changed to online state before attempting NDU - NetApp Knowledge Base
But it just says to contact support. Does anyone know how to correct this error? Our support contract ran out, and I don't know how long it will take to be reinstated now that the renewal purchase has been approved.
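In case it helps anyone searching, this is roughly the check I ran by hand (prompt and field list here are just from my environment):

cluster::> volume show -state !online -state !restricted -fields vserver,aggregate,state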
Solved! See The Solution
1 ACCEPTED SOLUTION
Mjizzini has accepted the solution
Just wanted to let you know this was resolved by deleting the stale vol0 entry.
16 REPLIES
Volumes do have to be online for upgrades (or deleted).
The question, though, is why it's vol0. How many vol0s do you have?
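Something along these lines should list every vol0 the cluster knows about, along with its vserver and aggregate (prompt is a placeholder):

cluster::> volume show -volume vol0 -fields vserver,aggregate,state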
So if I just do a vol show, node 1 shows a vol0 with aggregate/size/available/used% populated; node 2 also shows a vol0, but with no aggregate/size/available/used data populated.
I have tried to bring the vol0 for node 2 online through volume modify -vserver "node 2" -volume vol0 -state online, and it reports aggregate not found.
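For completeness, roughly what that attempt looks like (node name substituted, error wording approximate):

cluster::> volume modify -vserver node2 -volume vol0 -state online
Error: command failed: Aggregate not found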
Is the cluster healthy and not in takeover or anything?
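e.g. something like these two should confirm:

cluster::> cluster show
cluster::> storage failover show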
Yes, healthy and not in takeover. No issues reported in the upgrade validation besides this one offline volume.
If you drop into the node shell on 02, does vol0 look OK/healthy?
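Something like this (substitute your real node name for node-02):

cluster::> system node run -node node-02
node-02> vol status vol0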
I'm not sure if I did it right or not, but doing system node run -node "node 2" and then vol status, vol0 does not appear. My other volumes all look good in there.
Are there any volumes that show as root under Options?
Example -
WOPR-02> vol status
         Volume State      Status            Options
           vol0 online     raid_dp, flex     root, nvfail=on, space_slo=none
                           64-bit
AUTOROOT is showing as root. On node 1 I have vol0 as root, nvfail=on.
Do you know if someone ever attempted to move to a new root aggr on this system?
No, I don't know. Nobody currently working for the company was here when this system was last touched.
Got ya.
AUTOROOT is usually a remnant of node recovery or a root aggr move.
Let me see what else I can dig up that might help. But a support case would be best here.
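One thing worth checking in the meantime: which aggregate each node is actually booting its root volume from. I believe something like this shows it (treat the filter as a sketch):

cluster::> storage aggregate show -has-mroot true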
Gotcha. I appreciate your help on this!
Following up on this - unfortunately, I wasn't able to find anything that could be posted.
A few other questions though -
Is this part of a migration or something? i.e., having to upgrade this system in order to migrate to a new ONTAP system?
What's the output from the following (diag privilege; vreport shows where the volume location database and what's actually on disk disagree) -
set d; debug vreport show
Not migrating, just upgrading the firmware on our cluster to keep it current.
Output from the set d; debug vreport show:
aggregate Differences:
Name         Reason                                          Attributes
------------ ----------------------------------------------- --------------------
ROOT_node_02 Present both in VLDB and WAFL with differences
             Node Name: node-02
             Aggregate UUID: e6e37fff-1793-49a7-984d-13674db15be6
             Aggregate State: online
             Aggregate Raid Status: raid_dp
             Differing Attribute: Volume Count (Use commands 'volume add-other-volume' and 'volume remove-other-volume' to fix 7-Mode volumes on this aggregate)
             WAFL Value: 1
             VLDB Value: 0
Mjizzini has accepted the solution
Just wanted to let you know this was resolved by deleting the stale vol0 entry.
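For anyone who lands on this later - the post doesn't show the exact command, but the vreport output above itself points at volume remove-other-volume for an entry the VLDB and WAFL disagree on. A sketch of what that cleanup might look like (vserver/volume names assumed, privilege level from memory; a support case is still the safer route):

cluster::> set -privilege advanced
cluster::*> volume remove-other-volume -vserver node2 -volume vol0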
Good to hear - the vreport is a handy command. Sorry for the delay in replying; I was out of the country last week.
