Trying to set up a new system as below (part of a FlexPod implementation):
Dual chassis
2x FAS3240
1x DS2246
1x DS4243
Data ONTAP 8.0.2 7-Mode
Cabling is as per the NetApp guide for dual-chassis HA.
One controller boots fine and can be configured; the other complains that it has 0 disks and continually reboots.
The problem controller will boot into maintenance mode, but the "disk show" and "disk assign" commands don't work as expected: they show no unassigned disks available, even though "disk show" on the working controller shows half of the disks as unassigned.
I've tried swapping the controllers over to eliminate cabling issues, and the fault stays with the same physical controller.
I'd appreciate any ideas as I've run out of them. Output from a failed boot cycle below:
Sun Oct 14 09:51:42 GMT [config.noPartnerDisks:CRITICAL]: No disks were detected for the partner; this node will be unable to takeover correctly
Sun Oct 14 09:51:42 GMT [callhome.dsk.config:warning]: Call home for DISK CONFIGURATION ERROR
Sun Oct 14 09:51:43 GMT [fmmb.instStat.change:info]: no mailbox instance on local side.
Sun Oct 14 09:51:43 GMT [fmmb.instStat.change:info]: no mailbox instance on partner side.
Sun Oct 14 09:51:43 GMT [cf.fm.noMBDisksOrIc:warning]: Could not find the local mailbox disks. Could not determine the firmware state of the partner through the HA interconnect.
WARNING: 0 disks found!
Storage Adapters found:
0 Fibre Channel Storage Adapters found!
6 SAS Adapters found!
0 Parallel SCSI Storage Adapters found!
0 ATA Adapters found!
Target Adapters found:
4 Fibre Channel Target Adapters found!
2 iSCSI Target Adapters found!
1 Unknown Target Adapters found!
Check that disks have been assigned ownership to this system (ID 1575087607) using the 'disk show' and 'disk assign'
commands from maintenance mode.
Uptime: 45s
System rebooting...
Solved!
1 ACCEPTED SOLUTION
I am guessing that all of the disks are owned by the other controller. If they have all been assigned to the other controller but not added to an aggregate, you can release that ownership with the "disk assign <disk_id> -s unowned -f" command and then assign them to the problem controller.
Once you do that, "disk show -n" should show the disk as unowned.
You will need a minimum of three disks to create an aggregate, and they will need to be zeroed as well. This will take quite some time if they are SATA drives; not so long if they are SAS drives.
If the disks have already been added to an aggregate, you will need to destroy the aggregate before using the "disk assign <disk_id> -s unowned -f" command.
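As a concrete sketch, the reassignment from maintenance mode on the problem node might look like this. The disk name 0a.00.0 is a placeholder, the system ID is the one from the boot output above, and the syntax is the 7-Mode "disk assign" form; check your own "disk show -v" listing and your release's documentation first:

```
*> disk show -v                       # list all visible disks and their owners
*> disk assign 0a.00.0 -s unowned -f  # release a disk assigned to the wrong node
*> disk show -n                       # the disk should now appear as unowned
*> disk assign 0a.00.0 -s 1575087607  # assign it to this node's system ID
```

Repeat for as many disks as the node needs (at least three for the root aggregate), then reboot and zero the spares.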
12 REPLIES
Are ALL of the disks owned by the other controller?
On the controller that is working correctly, what is the output of "disk show"?
One more thing to check on the controller that is working correctly: What is the output of 'disk show -n'? This command will show you disks that are unowned and available to be assigned to the other controller.
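For reference, unowned disks in a 7-Mode "disk show -n" listing look roughly like this (illustrative output only; the disk IDs and serial numbers here are made up):

```
fas3240-a> disk show -n
  DISK       OWNER                  POOL   SERIAL NUMBER
---------    -------------          -----  -------------
0b.01.12     Not Owned              NONE   XXXXXXXXXXXX
0b.01.13     Not Owned              NONE   XXXXXXXXXXXX
```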
I'll have to try this tomorrow; the building has just closed.
--
JonS
Sent from my Blackberry
I can interrupt the boot, but option 4 doesn't change anything: it goes through the process of creating a new file system, then drops back into the boot loop.
--
JonS
Sent from my Blackberry
Yes, you can unassign the required number of disks from the first node and change their ownership; it should work.
Thank you,
AKG
When I did a straight "disk show" on the working controller, half the disks had this controller listed as owner and half had nothing in the owner field.
I'll try the "disk show -n" as soon as the building is open tomorrow.
--
JonS
Sent from my Blackberry
So assign the unowned disks to the other controller.
He said he already tried that in his original post.
OK. He said "disk assign shows no unassigned disks", so they must belong to some other filer. They need to be reassigned to this controller.
Hi there,
From what you describe, you can see half of the disks on the first node, which is up and running.
Boot the troubled node into maintenance mode and issue "disk show -n" and "sysconfig -a" to make sure you can see all the loops and their associated disks.
Then copy the "disk show -n" output into a spreadsheet and separate out the disks owned by node 1 (the one that is up and running).
The leftover disks must be owned by a former controller or previously used shelves, so change their ownership to the system ID of your troubled node.
Once ownership is changed, zero the disks. This may take a couple of hours to complete, but you are good to go from there.
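Once the node boots with its newly assigned disks, the zeroing step can be started and monitored from the normal 7-Mode CLI (hostname is a placeholder; progress output varies by release):

```
fas3240-b> disk zero spares    # zero all non-zeroed spare disks in the background
fas3240-b> aggr status -s      # list spare disks; zeroing progress is shown per disk
```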
If you still have issues, contact NetApp support or professional services.
Thank you,
AKG
