ONTAP Discussions

How to reconfigure after replacing drives

JWEIDENHAMMER

I'm currently having an issue with my company's FAS250. They wanted me to upgrade the storage from Seagate Cheetah 120 GB drives to new Hitachi 450 GB drives. I've been researching to death the error messages I get after choosing boot menu option 4a and trying to reconfigure and initialize the disks. Can anyone point me to a reference for the errors below?

::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

scsi.cmd.transportError:error - Disk device 0b.* : transport error during execution of command

disk.failmsg:error

raid.config.filesystem.disk.failed:error

raid.vol.failed:CRITICAL

raid.assim.disk.nolabels:error

coredump.spare.none:info

raid.assim.tree.noRootVol:error

Root volume is failed

::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

I also get the following while in maintenance mode, trying to run disk upgrade_ownership:

::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

The upgrade ownership option requires a compact flash booted system with DS-14 shelves and up to date disk firmware

::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

And, to be complete, while labeling the disks I get:

::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

Neither label appeared to be valid, the labels may be corrupt.

::::::::::::::::::::::::::::::::::::::::::::::::::::::::::::

Anybody? Help?!?! Please?!?
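
If it helps, I can pull more detail from maintenance mode with roughly the following (a sketch only; I have not pasted the output here, and exact behaviour varies by Data ONTAP release):

*> disk show -v
*> aggr status -r

disk show -v lists every disk with its ownership information, and aggr status -r shows which disks RAID pulled into aggr0 and which it has marked failed.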

1 ACCEPTED SOLUTION

ivisinfo

The answer to your problem is simple:

*** You cannot use non-NetApp disks in NetApp systems. ***
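
One rough way to see this from the console (a sketch; the field layout varies by Data ONTAP release):

fas250> sysconfig -a
fas250> storage show disk -a

Both list every drive with its vendor, model and firmware strings. The disks in the log above identify themselves as HITACHI HUS156045VLF400, not with the NetApp-customized model and firmware strings that qualified, NetApp-supplied drives carry.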


10 REPLIES

aborzenkov

Have you got your new disks from NetApp?

JWEIDENHAMMER

No, these were spares we had lying around the office that weren't in use.

aborzenkov

Could you please show the full console log starting from power on until you get these errors?

JWEIDENHAMMER

CFE version 1.2.0 based on Broadcom CFE: 1.0.35
Copyright (C) 2000,2001,2002,2003 Broadcom Corporation.
Portions Copyright (C) 2002,2003 Network Appliance Corporation.

CPU type 0x1040102: 600MHz
Total memory: 0x20000000 bytes (512MB)

CFE> autoboot
Loading: 0xffffffff80001000/21792 0xffffffff80006520/16410672 Entry at 0xffffffff80001000
Starting program at 0xffffffff80001000
Press CTRL-C for special boot menu
Special boot options menu will be available.
Fri Aug 17 13:06:50 GMT [fci.initialization.failed:error]: Initialization failed on Fibre Channel adapter 0c.
NetApp Release 7.3.2: Thu Oct 15 04:24:11 PDT 2009
Copyright (c) 1992-2009 NetApp.
Starting boot on Fri Aug 17 13:06:22 GMT 2012
Fri Aug 17 13:06:51 GMT [nvram.battery.state:info]: The NVRAM battery is currently ON.
Fri Aug 17 13:06:54 GMT [monitor.chassisPower.degraded:notice]: Chassis power is degraded: sensor PSU 1 Fan 2

This boot is of OS version: NetApp Release 7.3.2.
The last time this filer booted, it used OS version: .
The WAFL/RAID versions of the previously booted OS are unknown.
If you choose a boot option other than Maintenance mode or Initialize disks, the file system of your filer might be upgraded to a new version of the OS. If you do not want to risk having your file system upgraded, choose Maintenance mode or reboot using the correct OS version.

(1)  Normal boot.
(2)  Boot without /etc/rc.
(3)  Change password.
(4)  Initialize all disks.
(4a) Same as option 4, but create a flexible root volume.
(5)  Maintenance mode boot.

Selection (1-5)? 4a
Zero disks and install a new file system? y
This will erase all the data on the disks, are you sure? y
Zeroing disks takes about 1514 minutes.
Fri Aug 17 13:07:01 GMT [coredump.spare.none:info]: No sparecore disk was found.
Fri Aug 17 13:39:20 GMT [raid.disk.zero.done:notice]: Disk 0b.24 Shelf ? Bay ? [HITACHI HUS156045VLF400 F5D0] S/N [JVYAVVSM] : disk zeroing complete
Fri Aug 17 13:39:23 GMT [raid.disk.zero.done:notice]: Disk 0b.20 Shelf ? Bay ? [HITACHI HUS156045VLF400 F5D0] S/N [JVYGBASM] : disk zeroing complete
Fri Aug 17 13:39:32 GMT [raid.disk.zero.done:notice]: Disk 0b.21 Shelf ? Bay ? [HITACHI HUS156045VLF400 F5D0] S/N [JVY56WYM] : disk zeroing complete
Fri Aug 17 13:39:32 GMT [raid.disk.zero.done:notice]: Disk 0b.27 Shelf ? Bay ? [HITACHI HUS156045VLF400 F5D0] S/N [JVYJVY9M] : disk zeroing complete
Fri Aug 17 13:39:34 GMT [raid.disk.zero.done:notice]: Disk 0b.17 Shelf ? Bay ? [HITACHI HUS156045VLF400 F5D0] S/N [JVYANJTM] : disk zeroing complete
Fri Aug 17 13:39:35 GMT [raid.disk.zero.done:notice]: Disk 0b.22 Shelf ? Bay ? [HITACHI HUS156045VLF400 F5D0] S/N [JVY8V7NM] : disk zeroing complete
Fri Aug 17 13:39:42 GMT [raid.disk.zero.done:notice]: Disk 0b.29 Shelf ? Bay ? [HITACHI HUS156045VLF400 F5D0] S/N [JVYEYKPM] : disk zeroing complete
Fri Aug 17 13:39:45 GMT [raid.disk.zero.done:notice]: Disk 0b.18 Shelf ? Bay ? [HITACHI HUS156045VLF400 F5D0] S/N [JVYTEZ9M] : disk zeroing complete
Fri Aug 17 13:39:51 GMT [raid.disk.zero.done:notice]: Disk 0b.26 Shelf ? Bay ? [HITACHI HUS156045VLF400 F5D0] S/N [JVYJVV3M] : disk zeroing complete
Fri Aug 17 13:39:58 GMT [raid.disk.zero.done:notice]: Disk 0b.25 Shelf ? Bay ? [HITACHI HUS156045VLF400 F5D0] S/N [JVY6X1RM] : disk zeroing complete
Fri Aug 17 13:39:59 GMT [raid.disk.zero.done:notice]: Disk 0b.23 Shelf ? Bay ? [HITACHI HUS156045VLF400 F5D0] S/N [JVYAVXMM] : disk zeroing complete
Fri Aug 17 13:40:03 GMT [raid.disk.zero.done:notice]: Disk 0b.16 Shelf ? Bay ? [HITACHI HUS156045VLF400 F5D0] S/N [JVYJVTHM] : disk zeroing complete
Fri Aug 17 13:40:22 GMT [raid.disk.zero.done:notice]: Disk 0b.19 Shelf ? Bay ? [HITACHI HUS156045VLF400 F5D0] S/N [JVYK7U0M] : disk zeroing complete
Fri Aug 17 13:40:28 GMT [raid.disk.zero.done:notice]: Disk 0b.28 Shelf ? Bay ? [HITACHI HUS156045VLF400 F5D0] S/N [JVYANJNM] : disk zeroing complete
Fri Aug 17 13:40:29 GMT [raid.vol.disk.add.done:notice]: Addition of Disk /aggr0/plex0/rg0/0b.18 Shelf 1 Bay 2 [HITACHI HUS156045VLF400 F5D0] S/N [JVYTEZ9M] to aggregate aggr0 has completed successfully
Fri Aug 17 13:40:30 GMT [raid.vol.disk.add.done:notice]: Addition of Disk /aggr0/plex0/rg0/0b.17 Shelf 1 Bay 1 [HITACHI HUS156045VLF400 F5D0] S/N [JVYANJTM] to aggregate aggr0 has completed successfully
Fri Aug 17 13:40:30 GMT [raid.vol.disk.add.done:notice]: Addition of Disk /aggr0/plex0/rg0/0b.16 Shelf 1 Bay 0 [HITACHI HUS156045VLF400 F5D0] S/N [JVYJVTHM] to aggregate aggr0 has completed successfully
Fri Aug 17 13:40:30 GMT [wafl.vol.add:notice]: Aggregate aggr0 has been added to the system.
Fri Aug 17 13:40:31 GMT [scsi.cmd.transportError:error]: Disk device 0b.16: Transport error during execution of command: HA status 0x10: cdb 0x28:00014ab7:0009.
Fri Aug 17 13:40:31 GMT [disk.failmsg:error]: Disk 0b.16 (JVYJVTHM): message received.
Fri Aug 17 13:40:31 GMT [scsi.cmd.transportError:error]: Disk device 0b.18: Transport error during execution of command: HA status 0x10: cdb 0x28:00014ab7:0009.
Fri Aug 17 13:40:31 GMT [disk.failmsg:error]: Disk 0b.18 (JVYTEZ9M): message received.
Fri Aug 17 13:40:31 GMT [scsi.cmd.transportError:error]: Disk device 0b.17: Transport error during execution of command: HA status 0x10: cdb 0x28:00014ab7:0009.
Fri Aug 17 13:40:31 GMT [disk.failmsg:error]: Disk 0b.17 (JVYANJTM): message received.
Fri Aug 17 13:40:32 GMT [raid.config.filesystem.disk.failed:error]: File system Disk /aggr0/plex0/rg0/0b.16 Shelf 1 Bay 0 [HITACHI HUS156045VLF400 F5D0] S/N [JVYJVTHM] failed.
Fri Aug 17 13:40:32 GMT [raid.config.filesystem.disk.failed:error]: File system Disk /aggr0/plex0/rg0/0b.17 Shelf 1 Bay 1 [HITACHI HUS156045VLF400 F5D0] S/N [JVYANJTM] failed.
Fri Aug 17 13:40:32 GMT [raid.config.filesystem.disk.failed:error]: File system Disk /aggr0/plex0/rg0/0b.18 Shelf 1 Bay 2 [HITACHI HUS156045VLF400 F5D0] S/N [JVYTEZ9M] failed.
Fri Aug 17 13:40:32 GMT [raid.vol.failed:CRITICAL]: Aggregate aggr0: Failed due to multi-disk error
Fri Aug 17 13:40:33 GMT [raid.disk.unload.done:info]: Unload of Disk 0b.16 Shelf 1 Bay 0 [HITACHI HUS156045VLF400 F5D0] S/N [JVYJVTHM] has completed successfully
Fri Aug 17 13:40:33 GMT [raid.disk.unload.done:info]: Unload of Disk 0b.17 Shelf 1 Bay 1 [HITACHI HUS156045VLF400 F5D0] S/N [JVYANJTM] has completed successfully
Fri Aug 17 13:40:33 GMT [raid.disk.unload.done:info]: Unload of Disk 0b.18 Shelf 1 Bay 2 [HITACHI HUS156045VLF400 F5D0] S/N [JVYTEZ9M] has completed successfully
PANIC: aggr aggr0: raid volfsm, fatal multi-disk error. raid type raid_dp
Group name plex0/rg0 state NORMAL
3 disks failed in the group.
Disk 0b.16 Shelf 1 Bay 0 [HITACHI HUS156045VLF400 F5D0] S/N [JVYJVTHM] error fatal disk error.
Disk 0b.17 Shelf 1 Bay 1 [HITACHI HUS156045VLF400 F5D0] S/N [JVYANJTM] error fatal disk error.
Disk 0b.18 Shelf 1 Bay 2 [HITACHI HUS156045VLF400 F5D0] S/N [JVYTEZ9M] error fatal disk error.
in process config_thread on release NetApp Release 7.3.2 on Fri Aug 17 13:42:33 GMT 2012
version: NetApp Release 7.3.2: Thu Oct 15 04:24:11 PDT 2009
cc flags: 6O
DUMPCORE: START
DUMPCORE: END -- coredump *NOT* written.
halt after panic during system initialization

CFE version 1.2.0 based on Broadcom CFE: 1.0.35
Copyright (C) 2000,2001,2002,2003 Broadcom Corporation.
Portions Copyright (C) 2002,2003 Network Appliance Corporation.

CPU type 0x1040102: 600MHz
Total memory: 0x20000000 bytes (512MB)

JWEIDENHAMMER

Sorry for the bad formatting.

aborzenkov

I cannot find any disk with vendor string HUS156045VLF400 (or any HUS* disk, for that matter) in the current qualified disks database. So the question remains: where did you get these disks? Using third-party disks in NetApp systems is not supported; NetApp uses customized disk firmware.
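
As far as I know, the on-box piece of that database is the /etc/qual_devices file (there is a qual_devices man page on the filer), and updated copies ship in the Disk Qualification Package on the NetApp Support site. A quick way to check whether the file is present at all, assuming advanced privilege is allowed on your system (sketch only):

fas250> priv set advanced
fas250*> rdfile /etc/qual_devices
fas250*> priv set admin

rdfile simply dumps the file if it exists. But the qualification package only covers NetApp-qualified drives; it is not a way to make third-party disks work.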

JWEIDENHAMMER

Where may I find the current qualified disk database, for reference? I wish I could tell you where these disks came from; they were just lying around someone's office as spares. If the answer is that NetApp doesn't support third-party disks, then I'm fine with that answer. I'm still curious, though, as I've heard of a workaround but can't seem to find anything on it. Regardless, thank you for your help and assistance.

JWEIDENHAMMER

Replaced the first 3 disks with the old ones. Can I add these disks at all for use? Below is what I get.

fas250> disk
usage: disk
Options are:
        fail [-i] [-f] <disk_name>        - fail a file system disk
        remove [-w] <disk_name>           - remove a spare disk
        swap                              - prepare (quiet) bus for swap
        unswap                            - undo disk swap and resume service
        scrub { start | stop }            - start or stop disk scrubbing
        assign {<disk_name> | all | [-T <storage type>] -n <count> | auto} [-p <pool>] [-o <ownername>] [-s <sysid>] [-c block|zoned] [-f]
                                          - assign a disk to a filer or all unowned disks by specifying "all" or number of unowned disks
        show [-o <ownername> | -s <sysid> | -n | -v | -a] - lists disks and owners
        replace {start [-f] [-m] <disk_name> <spare_disk_name>} | {stop <disk_name>}
                                          - replace a file system disk with a spare disk or stop replacing
        zero spares                       - Zero all spare disks
        checksum {<disk_name> | all} [-c block | zoned]
        sanitize { start | abort | status | release } - sanitize one or more disks
        maint { start | abort | status | list } - run maintenance tests on one or more disks

fas250> disk show
  DISK       OWNER                    POOL   SERIAL NUMBER
------------ -------------            -----  -------------
0b.17        fas250    (84168595)     Pool0  404Y8230
0b.16        fas250    (84168595)     Pool0  40526833
0b.18        fas250    (84168595)     Pool0  40523440
NOTE: Currently 11 disks are unowned. Use 'disk show -n' for additional information.

fas250> disk show -n
  DISK       OWNER                    POOL   SERIAL NUMBER
------------ -------------            -----  -------------
0b.27        Not Owned                NONE   JVYJVY9M
0b.28        Not Owned                NONE   JVYANJNM
0b.25        Not Owned                NONE   JVY6X1RM
0b.21        Not Owned                NONE   JVY56WYM
0b.23        Not Owned                NONE   JVYAVXMM
0b.26        Not Owned                NONE   JVYJVV3M
0b.29        Not Owned                NONE   JVYEYKPM
0b.20        Not Owned                NONE   JVYGBASM
0b.22        Not Owned                NONE   JVY8V7NM
0b.19        Not Owned                NONE   JVYK7U0M
0b.24        Not Owned                NONE   JVYAVVSM

fas250> disk
Mon Aug 20 15:24:04 GMT last message repeated 2 times
Mon Aug 20 15:24:25 GMT [asup.config.minimal.unavailable:warning]: Minimal Autosupports unavailable. Could not read /etc/asup_content.conf

fas250> disk assign 0b.27
Mon Aug 20 15:24:34 GMT [diskown.changingOwner:info]: changing ownership for disk 0b.27 (S/N JVYJVY9M) from unowned (ID -1) to fas250 (ID 84168595)
fas250> Mon Aug 20 15:24:34 GMT [config.ATAnotSupported:error]: ATA disks are not supported on this appliance. Disk 0b.27 is an ATA disk and should be removed.
Mon Aug 20 15:24:35 GMT [disk.dynamicqual.failure.missingFile:error]: Device Qualification information file (/etc/qual_devices) is missing. Please refer to the qual_devices man page for corrective action. This problem must be corrected within 72 hour(s) to avoid a forced system shutdown. The following disk(s) remain unqualified: 0b.19 [S/N JVYK7U0M], 0b.20 [S/N JVYGBASM], 0b.21 [S/N JVY56WYM], 0b.22 [S/N JVY8V7NM], 0b.23 [S/N JVYAVXMM], 0b.24 [S/N JVYAVVSM],
Mon Aug 20 15:24:35 GMT [sfu.firmwareUpToDate:info]: Firmware is up-to-date on all disk shelves.
Mon Aug 20 15:24:35 GMT [sysconfig.sysconfigtab.openFailed:notice]: sysconfig: table of valid configurations (/etc/sysconfigtab) is missing.
Mon Aug 20 15:24:37 GMT [sysconfig.sysconfigtab.openFailed:notice]: sysconfig: table of valid configurations (/etc/sysconfigtab) is missing.
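
For now I will probably back that ownership change out with something like the following (a sketch; I believe disk assign accepts a -s unowned form here, and maintenance mode's disk remove_ownership would be the fallback):

fas250> priv set advanced
fas250*> disk assign 0b.27 -s unowned -f
fas250*> disk show -n
fas250*> priv set admin

disk show -n should then list 0b.27 as Not Owned again.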

JWEIDENHAMMER

The attached text doc may be a better way to view that.

