ONTAP Hardware

Bad raid label version :(

Alfs29

I'm in the same old boat now. 😕

I have a FAS2020 running 7.3.7P3 ... "learning to fly" (yes, I know that it is old, slow, doesn't come with all the bells and whistles, etc. ...)

Got a used DS14MK4 shelf with drives from the ghetto admin shop, a.k.a. eBay.

But when I try to "add" those disks to my system, I get a bad raid label error.

Judging from their raid label (v10), the disks have been used in an ONTAP 8+ system.

My v7.3.7 supports raid labels only up to v9.

I have tried, I guess, all published solutions to wipe the v10 label, including maintenance mode, label makespare, unfail -s, etc.

All I get is "... not permitted as the disk has a bad RAID version."
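For reference, those attempts from maintenance mode looked roughly like this (the disk ID is just an example, and the output is paraphrased from memory):

*> label makespare 0b.16
... not permitted as the disk has a bad RAID version.

*> disk unfail -s 0b.16
... not permitted as the disk has a bad RAID version.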

No, a filer with ONTAP 8+ to make them spares and zero them is not available 😕

 

Is there anything else I can try?

 

Thanks


bobshouseofcards

The biggest stick you might try to hit this with is boot menu option "Clean configuration and initialize all disks" - which is not maintenance mode.  Not sure if it would deal with the label version issue, but I've had to resort to this option in the past to deal with particularly cranky disk identification.

 

Of course, initialize "all" disks does mean all, so either you wipe everything clean or get really creative. You could, for instance, depending on your configuration: shut down, remove all the current disks from your system and leave just the new shelf attached, boot off the compact flash and interrupt the boot cycle, then try the initialize routine on just the new shelf. Of course, when you reconnect/re-add the original disks (in exactly the same locations, for safety), you will likely need to get into maintenance mode on the next boot and clean up which aggregate is the "real" root aggregate, etc. This isn't for the faint of heart, but considering, as you say, you're in "learning to fly" mode, there's nothing like jumping off a really high cliff and learning on the job.

 

Short of something quite this drastic, the only other thing I can come up with is to take each affected disk, one by one, to a different system and initiate a low-level format on it. Of course that means having a system with an FC-AL disk interface available, which I appreciate is not something generally lying around.
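For example, from a Linux host with an FC HBA and sg3_utils installed, a low-level reformat back to 520-byte sectors would look something along these lines - I haven't tried it on these particular drives, and /dev/sg3 is just a placeholder for whatever the disk shows up as:

# find the drive among the attached SCSI/FC devices
sg_scan -i
# low-level format to 520-byte sectors - destroys everything on the disk and takes hours
sg_format --format --size=520 /dev/sg3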

 

Hope this helps you out.

 

Bob

 

Alfs29

I have read on mother Google that boot menu 4 or 4a doesn't help either 😕 ... which sounds strange to me too ... unless NetApp goes "do you want to delete this file? do you really ...? do you really really ...? no ... I cannot delete this file because you might be sad later" 😄

 

The system is not in production, so a reinstall with netboot is still an option.

BTW, the FAS2020 doesn't have CF card boot 😉

 

And formatting them in just any other system is not that easy either, because of the WAFL formatting. It is not 512 bytes per sector as normal, but 520, to store an extra hash (or something) of the current block nearby.

bobshouseofcards

That's right - forgot about that 2020 detail.  Sorry about that.

 

The low-level format suggestion doesn't worry about sector size, and actually the 520-byte sector is not a WAFL thing; it's at the disk level. A low-level format at the disk firmware level would know which size to deal with.

 

WAFL itself can deal with disks of either 520- or 512-byte sectors. For 520, the sector checksum is in the sector itself. For 512, the sector checksums are in other sectors; for instance, the zone checksum style uses 9 512-byte sectors to store 8 512-byte sectors of data and 1 sector of checksums.
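Rough math on the overhead, if you're curious: 8 × 512 bytes = 4096 bytes, i.e. one 4K WAFL block per 9-sector group, so the zone checksum layout spends about 1 sector in 9 (~11%) on checksums, while the 520-byte block checksum layout spends 8 checksum bytes per 512 bytes of data (~1.5%).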

 

 

Alfs29

HOLY CRAP!

 

Even initialize doesn't kill newer raid labels!

Any ideas?

 

(1)  Normal boot.

(2)  Boot without /etc/rc.

(3)  Change password.

(4)  Initialize owned disks (26 disks are owned by this filer).

(4a) Same as option 4, but create a flexible root volume.

(5)  Maintenance mode boot.

 

Selection (1-5)? Fri Mar 27 16:20:32 GMT [shelf.config.single:info]: System is using single path attached storage only.

4a

The system has 26 disks assigned whereas it needs 3 to boot, will try to assign the required number.

Zero disks and install a new file system? yes

This will erase all the data on the disks, are you sure? yes

Disk '0b.16' contains a higher RAID label version 10. Expecting version 9. This disk will not be initialized. 

.......

Disk '0b.17' contains a higher RAID label version 10. Expecting version 9. This disk will not be initialized. 

 

Disks with BAD version raid labels have been detected. These disks will not be usable on this version of Ontap. If you proceed with this and later upgrade the version of Ontap, the system may panic because of multiple root volumes. Are you sure you want to continue?

FCUK NOOOOOOOOO!

 

shatfield

The option you need is 4a_forced, but it's probably disabled on release kernels. 25/7 is worth a shot. Otherwise you need to borrow an ONTAP 8-capable controller and either spare, zero, and unown them on it, or assign them all and execute a revert to 7.x.
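Roughly, on the borrowed ONTAP 8 controller, those two routes would look something like this (the disk name is just an example, and check the revert prerequisites before going that way):

> disk assign all
> disk zero spares
> priv set advanced
*> disk remove_ownership 0b.16

Or, instead of unowning them, keep them assigned and run revert_to 7.3 on that controller to downgrade the on-disk labels.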


 

Alfs29

Excuse my lack of knowledge or stupidity, but what is 25/7?

If this was meant as 24/7 support, then that is a no-go for me, as all my equipment comes from the ghetto shop a.k.a. eBay and has no active support.

shatfield

No, there are a bunch of hidden options at the boot menu. 25/7 is "boot with labels forced to clean". Not sure if it will help, since I don't know if that includes label versioning. I'm not even sure it was there in 7.x; I haven't been in front of one that old for a while.

 

Alfs29

Tried 25/7 ... the good news is that, yes, this option is there.

The bad news is: it's not working 😄

 

(5)  Maintenance mode boot.

 

Selection (1-5)? Wed Apr  1 15:53:13 GMT [shelf.config.single:info]: System is using single path attached storage only.

25/7

 

*** WARNING ********************************

You are enabling an option to force the RAID labels

clean. It is usually safer and easier just to run

WAFL_check or wafliron against degraded volumes.

Following this boot, you should run WAFL_check or

wafliron on all degraded volumes and disk scrub on

all volumes to ensure the integrity of the filesystem.

Wed Apr  1 15:53:20 GMT [fmmb.current.lock.disk:info]: Disk 0c.00.6 is a local HA mailbox disk.

Wed Apr  1 15:53:20 GMT [fmmb.current.lock.disk:info]: Disk 0c.00.10 is a local HA mailbox disk.

Wed Apr  1 15:53:20 GMT [fmmb.instStat.change:info]: normal mailbox instance on local side.

Wed Apr  1 15:53:20 GMT [raid.assim.disk.badlabelversion:error]: Disk 0b.16 Shelf ? Bay ? [NETAPP   X291_HVIPC420F15 NA02] S/N [J1WZ1E8N] has raid label with version (10), which is not within the currently supported range (5 - 9). Please contact NetApp Global Services.

....

and the same with all drives 😕

 

Any more supersecret options to kill that MF? 😄

Alfs29

Follow-up on the problem.

 

FCUK all advanced commands, etc. ... nothing will help you revert label V10 to V9 if you don't have an ONTAP 8+ system available to attach your shelf or drives to.

So I got a FAS2040 controller from the ghetto shop, plugged it into my FAS2020 chassis, and did everything as it is supposed to be done from ONTAP 8 ...

So don't bother trying to revert or erase V10 labels from ONTAP 7.3.7 ... it does not work.

You cannot do it even with the special boot menu options 4, 4a or 25/7 ...

 
