I have an unmapped LUN that is no longer visible on the QLogic HBA bus; however, just as the bug describes, its dm-multipath entries remain and all paths are down. Several disk-related operations either hang or fail to error out in a timely fashion; it feels as if multipathd never returns an error when all paths are down.
Any specific documented procedure for removing the stale mappings would be appreciated.
A kind soul on the RHEL mailing list has pointed out that this is related to the queue_if_no_path setting (as recommended by the FCP host utilities guide).
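For context, this queuing behavior is usually enabled persistently in /etc/multipath.conf. A minimal illustrative fragment is below; whether it belongs in the defaults section or a device-specific stanza depends on your setup, and the choice of "queue" here just mirrors the behavior described above:

```
defaults {
    # Queue I/O indefinitely when all paths are down
    # (equivalent to the "queue_if_no_path" feature).
    # A numeric value instead of "queue" would retry that
    # many times and then fail, avoiding indefinite hangs;
    # "fail" would return errors immediately.
    no_path_retry    queue
}
```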
Sounds like your multipath devices are set to "queue_if_no_path", which basically tells the multipath layer to queue all requests forever even if there are no available paths. You can generally free hung commands on a device that is not coming back by setting "fail_if_no_path" on the device with dmsetup. The command would be something like this:
dmsetup message mpath23 0 fail_if_no_path
Once you do that, the multipath layer should return errors to the hung commands, and they should exit.
Once the commands error out, you should be able to manually remove the failed paths. The "multipath -F" command should remove any unused multipath maps; however, "unused" means the VG needs to be deactivated first. Once you set "fail_if_no_path", you should be able to deactivate the VG even though all paths are down.
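Putting the steps above together, the cleanup sequence might look like the following sketch. The map name mpath23 comes from the earlier example; the volume group name vg_data and the device name sdX are placeholders for your actual names, and all commands need root:

```shell
# 1. Stop queuing so I/O hung on the dead map errors out
dmsetup message mpath23 0 fail_if_no_path

# 2. Deactivate the LVM volume group that lived on the map
#    (now possible because I/O errors are returned instead of queued)
vgchange -an vg_data

# 3. Flush the now-unused multipath map
#    ("multipath -f mpath23" removes just this map;
#     "multipath -F" flushes all unused maps)
multipath -f mpath23

# 4. Optionally remove each stale SCSI device that backed a path,
#    so the dead paths no longer appear on the system
echo 1 > /sys/block/sdX/device/delete
```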
Re: dm-multipath mappings not removed when disk is unmapped from host