We replaced a failed disk and now only one of the controllers sees the new disk. The disk got assigned to the controller that doesn't see it, so it can't be used at all. Is there anything we can do to make the disk visible on both controllers? Or at least reassign it to the controller that does see it?
On one controller the disk shows up in "sysconfig -a" output and on the other it does not. I see this in the syslog: "Wed Jul 29 12:30:55 EDT [xxxxx: cf.disk.inventory.mismatch:CRITICAL]: Status of the disk ?.? (500605BA:009FFF74:00000000:00000000:00000000:00000000:00000000:00000000:00000000:00000000) has recently changed or the node (xxxxx) is missing the disk."
Try reseating the disk - pull it out, wait a couple of minutes, plug it back in. Otherwise it could be a disk or backplane issue and you really need to open a support case - it is hard to say anything without seeing the past events.
Do a "disk show -a" on both nodes, wait a few minutes, then run it again and compare the results. If they don't match, compare the output of "disk_list" on both nodes (you need to be in diag mode for this). That shows the disks at the lowest physical level. If the disk doesn't show up there, it's most probably a hardware issue of some sort. If the disk is there, then it's simply a mismatch in what the node(s) think the disk's ownership is. This can probably be resolved via "disk assign", "disk remove_ownership", etc.
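The sequence above, as a rough console sketch assuming 7-Mode syntax - the node prompts (nodeA/nodeB) and the disk ID 0a.16 are placeholders, not from your system, so substitute your own values; diag/advanced-mode commands are unsupported, so use them with care:

```
# On each node, list all disks and their ownership; run it again a few
# minutes later and compare the two outputs:
nodeA> disk show -a
nodeB> disk show -a

# If the outputs disagree, drop to diag mode and list disks at the
# lowest physical level:
nodeA> priv set diag
nodeA*> disk_list
nodeA*> priv set admin

# If the disk is physically visible but ownership is wrong, clear and
# reassign it (0a.16 / nodeB are placeholder disk ID and owner):
nodeA> priv set advanced
nodeA*> disk remove_ownership 0a.16
nodeA*> disk assign 0a.16 -o nodeB
nodeA*> priv set admin
```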
Hm. Looks like you have a shelf which assumed a soft FC ID. This is not good. It normally means that you have a duplicate shelf ID in the same stack. But in this case it's the internal disks, which is odd. I would suggest you open a support case for this. You might need to re-seat some I/O modules or reboot the filer to clear this inconsistent state.