The biggest stick you might try to hit this with is boot menu option "Clean configuration and initialize all disks" - which is not maintenance mode. Not sure if it would deal with the label version issue, but I've had to resort to this option in the past to deal with particularly cranky disk identification.
Of course, initialize "all" disks does mean all - so you either wipe everything clean or get really creative. Depending on your configuration, you could for instance shut down, remove all the current disks from the system and leave just the new shelf attached, boot off the compact flash and interrupt the boot cycle, then run the initialize routine on just the new shelf. When you reconnect/re-add the original disks (in exactly the same locations, of course, for safety) you will then likely need to get into maintenance mode on the next boot and clean up which aggregate is the "real" root aggregate, etc. This isn't for the faint of heart, but since, as you say, you're in "learning to fly" mode, there's nothing like jumping off a really high cliff and learning on the job.
Short of something quite that drastic, the only other thing I can come up with is to take each affected disk, one by one, into a different system and initiate a low-level format on it - which of course means having a system with an FC-AL disk interface available, and I appreciate that's not something generally lying around.
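For what it's worth, if you did have a Linux box with a suitable FC HBA, the low-level format could be driven with sg_format from the sg3_utils package - a rough sketch only, and the /dev/sg2 device name here is purely hypothetical:

```shell
# DESTRUCTIVE: sg_format issues a SCSI FORMAT UNIT and wipes the disk.
# First check the disk's current capacity and sector size (hypothetical device):
sg_readcap /dev/sg2

# Reformat to 520-byte sectors so the filer will accept the disk
# (use --size=512 instead if the target expects plain 512-byte sectors):
sg_format --format --size=520 /dev/sg2
```

Whether a given drive's firmware accepts a sector-size change varies by model, so treat this as a thing to try rather than a guaranteed fix.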
I have read on mothergoogle that boot menu option 4 or 4a doesn't help either 😕 ... but that sounds strange to me too ... unless NetApp is going "do you want to delete this file? do you really ...? do you really really ...? no ... I cannot delete this file because you might be sad later" 😄
System is not in production, so reinstall with netboot is still an option.
btw FAS2020 doesn't have CF card boot 😉
And formatting them in just any other system isn't that easy either, because of the WAFL formatting. The sectors aren't the normal 512 bytes but 520, to store an extra hash (or something) for the current block alongside it.
That's right - forgot about that 2020 detail. Sorry about that.
The low level format suggestion doesn't worry about sector size - and actually the 520 byte sector is not a WAFL thing - it's at the disk level. A low level format at the disk firmware level would know which size to deal with.
WAFL itself can deal with disks of either 520 or 512 byte sectors - for 520, the sector checksum is stored in the sector itself. For 512, the sector checksums live in other sectors - for instance, zoned checksum style uses 9 512-byte sectors to store 8 sectors of data and 1 sector of checksums.
(4) Initialize owned disks (26 disks are owned by this filer).
(4a) Same as option 4, but create a flexible root volume.
(5) Maintenance mode boot.
Selection (1-5)? Fri Mar 27 16:20:32 GMT [shelf.config.single:info]: System is using single path attached storage only.
The system has 26 disks assigned whereas it needs 3 to boot, will try to assign the required number.
Zero disks and install a new file system? yes
This will erase all the data on the disks, are you sure? yes
Disk '0b.16' contains a higher RAID label version 10. Expecting version 9. This disk will not be initialized.
Disk '0b.17' contains a higher RAID label version 10. Expecting version 9. This disk will not be initialized.
Disks with BAD version raid labels have been detected. These disks will not be usable on this version of Ontap. If you proceed with this and later upgrade the version of Ontap, the system may panic because of multiple root volumes. Are you sure you want to continue? FCUK NOOOOOOOOO!
The option you need is 4a_forced, but it's probably disabled on release kernels. 25/7 is worth a shot. Otherwise you need to borrow an ONTAP 8 capable controller and either spare, zero, and unown them on it, or assign them all and execute a revert to 7.x.
no, there are a bunch of hidden options at the boot menu. 25/7 is "boot with labels forced to clean". Not sure if it will help, since I don't know if that includes label versioning. I'm not even sure it was there in 7.x. I haven't been in front of one that old for a while.
Tried 25/7 ... the good news is: yes, this option is there.
The bad news is: it's not working 😄
(5) Maintenance mode boot.
Selection (1-5)? Wed Apr 1 15:53:13 GMT [shelf.config.single:info]: System is using single path attached storage only.
*** WARNING ********************************
You are enabling an option to force the RAID labels
clean. It is usually safer and easier just to run
WAFL_check or wafliron against degraded volumes.
Following this boot, you should run WAFL_check or
wafliron on all degraded volumes and disk scrub on
all volumes to ensure the integrity of the filesystem.
Wed Apr 1 15:53:20 GMT [fmmb.current.lock.disk:info]: Disk 0c.00.6 is a local HA mailbox disk.
Wed Apr 1 15:53:20 GMT [fmmb.current.lock.disk:info]: Disk 0c.00.10 is a local HA mailbox disk.
Wed Apr 1 15:53:20 GMT [fmmb.instStat.change:info]: normal mailbox instance on local side.
Wed Apr 1 15:53:20 GMT [raid.assim.disk.badlabelversion:error]: Disk 0b.16 Shelf ? Bay ? [NETAPP X291_HVIPC420F15 NA02] S/N [J1WZ1E8N] has raid label with version (10), which is not within the currently supported range (5 - 9). Please contact NetApp Global Services.