ONTAP Hardware

FAS8040 cabling with one DS4243

DangvPham

Hi experts,

 

I have a FAS8040 and one DS4243 disk shelf. Is it possible to connect both filers to the one disk shelf?

Any cable diagram for that connection?

 

Thank you so much,

Dang

 

 

4 REPLIES

amans

 

Hello Dang,

 

Yes, you can attach the external shelf (or shelves) to a FAS8040.

You can review the 8040 installation guide (page 3):

 

https://library.netapp.com/ecm/ecm_download_file/ECMP1199907

 

Note: DS4243 (IOM3) shelf modules are not supported on ONTAP 9.4 and later.
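
For reference, a single shelf behind an HA pair is normally cabled multipath HA, so that each controller has a connection to both shelf modules (IOM A and IOM B). A rough sketch of that pattern, assuming one pair of onboard SAS ports per controller (shown here as 0a/0b; the exact ports are an assumption, so confirm them against page 3 of the guide above):

Controller 1, SAS port 0a  ->  DS4243 IOM A, square port
Controller 1, SAS port 0b  ->  DS4243 IOM B, circle port
Controller 2, SAS port 0a  ->  DS4243 IOM B, square port
Controller 2, SAS port 0b  ->  DS4243 IOM A, circle port

With only one shelf in the stack, this gives each controller two independent paths to every disk, which is what multipath HA expects.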

 

Regards,

Aman

DangvPham

Thanks, Aman, 

 

My issue is connecting the FAS8040 to only one DS4243. It seems hard to find a diagram for that specific setup, and after cabling it I end up stuck at the LOADER prompt. Below is the error message:

----------------------------

Starting AUTOBOOT press Ctrl-C to abort...
Loading X86_64/freebsd/image1/kernel:0x100000/7950592 0x895100/4206472 Entry at 0xffffffff80171230
Loading X86_64/freebsd/image1/platform.ko:0x1000000/1985879 0x11e5000/288800 0x122b820/272560
Starting program at 0xffffffff80171230
NetApp Data ONTAP 8.3
atkbd: unable to get the current command byte value.

create polling thread
Copyright (C) 1992-2015 NetApp.
All rights reserved.
Checking boot device filesystem
** /dev/da0s1
** Phase 1 - Read and Compare FATs
** Phase 2 - Check Cluster Chains
** Phase 3 - Checking Directories
** Phase 4 - Checking for Lost Files
65 files, 2311392 free (1626424 clusters)
MARK FILE SYSTEM CLEAN? yes
MARKING FILE SYSTEM CLEAN
Retry #1 of 5: /sbin/fsck_msdosfs /dev/da0s1
Retry #2 of 5: /sbin/fsck_msdosfs /dev/da0s1
Repaired boot device filesystem
*******************************
*                             *
* Press Ctrl-C for Boot Menu. *
*                             *
*******************************
Jul 28 04:40:01 Battery charge capacity: 3840 mA*hr. Power outage protection flash de-staging cycles: 57
ixgbe: e0c: ** JUMBOMBUF DEBUG ** switching to large buffers(9k -> 3k): (sz = 10240)!
original max threads=40, original heap size=41943040
bip_nitro Virtual Size Limit=309182464 Bytes
bip_nitro: user memory=4134682624, actual max threads=236, actual heap size=247463936
ixgbe: e0d: ** JUMBOMBUF DEBUG ** switching to large buffers(9k -> 3k): (sz = 10240)!
ixgbe: e0a: ** JUMBOMBUF DEBUG ** switching to large buffers(9k -> 3k): (sz = 10240)!
ixgbe: e0b: ** JUMBOMBUF DEBUG ** switching to large buffers(9k -> 3k): (sz = 10240)!
qla_init_hw: CRBinit running ok: 8c633f
NIC FW version in flash: 5.4.9
qla_init_hw: CRBinit running ok: 8c633f
NIC FW version bundled: 5.4.51
Jul 28 04:41:11 [localhost:config.noPartnerDisks:CRITICAL]: No disks were detected for the partner; this node cannot perform takeover correctly.
WAFL CPLEDGER is enabled. Checklist = 0x7ff841ff
Jul 28 04:41:11 [localhost:callhome.dsk.config:warning]: Call home for DISK CONFIGURATION ERROR
Jul 28 04:41:13 [localhost:cf.fm.noMBDisksOrIc:warning]: Could not find the local mailbox disks. Could not determine the firmware state of the partner through the HA interconnect.
PANIC : raid: Unable to find root aggregate. Reason: Unknown. (DS=3, DL=3, DA=3, BDTOC=0, BDLBL=0, BLMAG=4 BLCRC=8, BLVER=0, BLSZ=0, BLTOC=0, BLOBJ=0)
version: 8.3: Mon Mar 9 19:20:57 PDT 2015
conf : x86_64.optimize
cpuid = 0
Uptime: 1m39s

PANIC: raid: Unable to find root aggregate. Reason: Unknown. (DS=3, DL=3, DA=3, BDTOC=0, BDLBL=0, BLMAG=4 BLCRC=8, BLVER=0, BLSZ=0, BLTOC=0, BLOBJ=0) in SK process rc on release 8.3 (C) on Tue Jul 28 04:41:13 GMT 2020
version: 8.3: Mon Mar 9 19:20:57 PDT 2015
compile flags: x86_64.optimize
Writing panic info to HA mailbox disks.
HA: current time (in sk_msecs) 34906 (in sk_cycles) 2802805569703
DUMPCORE: START
Dumping to disks: 0a.03.2
....................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................
DUMPCORE: END -- coredump written.

 

----------------

I guess it could be a wrong cable connection.
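
To narrow that down, the node can be booted into maintenance mode from the LOADER prompt so the disks can be inspected before ONTAP goes looking for the root aggregate. A minimal sketch, assuming the Data ONTAP 8.3 boot menu (pressing Ctrl-C during autoboot, as the banner above says, gets to the same menu):

LOADER> boot_ontap menu
(wait for the boot menu, then choose option 5, "Maintenance mode boot")
*> sysconfig -v
*> disk show -v

If the DS4243 and its disks do not appear in that output at all, the problem is the SAS cabling or shelf power rather than the root aggregate itself.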

 

Thanks, 

Dang

 

 

DangvPham

Hi Aman, 

It seems my issue is connecting the dual-controller FAS8040 to only one DS4243. I can't find any diagram for this specific case, so I guess it could be a wrong cable connection. Below are the messages from my console.

Jul 28 04:21:42 Battery charge capacity: 3840 mA*hr. Power outage protection flash de-staging cycles: 57
ixgbe: e0c: ** JUMBOMBUF DEBUG ** switching to large buffers(9k -> 3k): (sz = 10240)!
original max threads=40, original heap size=41943040
bip_nitro Virtual Size Limit=309182464 Bytes
bip_nitro: user memory=4134879232, actual max threads=236, actual heap size=247463936
ixgbe: e0d: ** JUMBOMBUF DEBUG ** switching to large buffers(9k -> 3k): (sz = 10240)!
ixgbe: e0a: ** JUMBOMBUF DEBUG ** switching to large buffers(9k -> 3k): (sz = 10240)!
ixgbe: e0b: ** JUMBOMBUF DEBUG ** switching to large buffers(9k -> 3k): (sz = 10240)!
qla_init_hw: CRBinit running ok: 8c633f
NIC FW version in flash: 5.4.9
qla_init_hw: CRBinit running ok: 8c633f
NIC FW version bundled: 5.4.51
Jul 28 04:22:55 [localhost:config.noPartnerDisks:CRITICAL]: No disks were detected for the partner; this node cannot perform takeover correctly.
WAFL CPLEDGER is enabled. Checklist = 0x7ff841ff
Jul 28 04:22:55 [localhost:callhome.dsk.config:warning]: Call home for DISK CONFIGURATION ERROR
Jul 28 04:22:57 [localhost:raid.assim.tree.noRootVol:error]: No usable root volume was found!
PANIC : raid: Unable to find root aggregate. Reason: Unknown. (DS=3, DL=3, DA=3, BDTOC=0, BDLBL=0, BLMAG=0 BLCRC=12, BLVER=0, BLSZ=0, BLTOC=0, BLOBJ=0)
version: 8.3: Mon Mar 9 19:20:57 PDT 2015
conf : x86_64.optimize
cpuid = 0
Uptime: 2m10s

PANIC: raid: Unable to find root aggregate. Reason: Unknown. (DS=3, DL=3, DA=3, BDTOC=0, BDLBL=0, BLMAG=0 BLCRC=12, BLVER=0, BLSZ=0, BLTOC=0, BLOBJ=0) in SK process rc on release 8.3 (C) on Tue Jul 28 04:22:57 GMT 2020
version: 8.3: Mon Mar 9 19:20:57 PDT 2015
compile flags: x86_64.optimize
Writing panic info to HA mailbox disks.
HA: current time (in sk_msecs) 39291 (in sk_cycles) 504635177632
DUMPCORE: START
Dumping to disks: 0a.03.2
....................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................................
DUMPCORE: END -- coredump written.
System halting...
BIOS version: 9.3
Portions Copyright (c) 2011-2014 NetApp. All Rights Reserved
Phoenix SecureCore Tiano(TM)
Copyright 1985-2020 Phoenix Technologies Ltd.
All Rights Reserved

Build Date: 12/02/2014
**********************************************
* 9.3 *
* ================================== *
* PHOENIX SC-T 2009-2020 *
**********************************************

 

Thanks, 

Dang

amans

 

Hi Dang,


You can check whether the node can see the disks by running sysconfig -v from maintenance mode.

The disk show -v command will also tell you whether any disks are visible to the local node.
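
A quick checklist of what to look for there (a sketch only; the exact output format varies by release):

*> sysconfig -v      (the DS4243 should appear under the SAS adapter it is cabled to)
*> disk show -v      (lists the disks this node can see, along with their ownership)
*> disk show -n      (lists disks that are visible but not yet assigned to either node)

If the shelf and disks show up but are unassigned, that is an ownership problem rather than a cabling problem; if nothing shows up at all, the cabling or shelf power is the first thing to check.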

 

I have attached the cabling diagram; you can verify whether your system has been cabled this way.
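
The cabling can also be cross-checked from maintenance mode rather than only by eye. A small sketch, assuming multipath HA is the goal:

*> storage show disk -p     (with multipath HA, every disk should list both a primary and a secondary path)
*> sasadmin expander_map    (shows which SAS ports see which shelf expanders)

If storage show disk -p reports only one path per disk, one of the two cables between that controller and the shelf is missing or plugged into the wrong IOM port.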

 

Regards,

Aman
