Additional Virtualization Discussions

dm-multipath mappings not removed when disk is unmapped from host



Does anyone have experience with this Red Hat 4.x bug:

I have an unmapped LUN that is no longer seen on the QLogic HBA bus; however, just as the bug describes, its dm-multipath entries remain and all paths are down.  A few disk-related activities hang or refuse to error out in a timely fashion; it feels as if multipathd never returns an error for the all-paths-down condition?
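For anyone hitting the same symptom, the stale state is visible in the map listing (mpath23 below is a placeholder name for the stale map; substitute your own):

```shell
# Show the stale multipath map; on a LUN that has been unmapped from
# the array, every path line typically reads "failed faulty".
multipath -ll mpath23
```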

Any specific documented procedure for removing the stale mappings would be appreciated.

Thank you!


Message was edited by: Eugene V.



A kind soul on the RHEL mailing list has pointed out that this is related to the queue_if_no_path setting (as recommended by the FCP host utils guide).

Sounds like your multipath devices are set to "queue_if_no_path", which
basically tells the multipath layer to queue all requests forever even if
there are no available paths.  You can generally free hung commands on a
device that is not coming back by setting "fail_if_no_path" on the
device with dmsetup.  The command would be something like this:

dmsetup message mpath23 0 fail_if_no_path

Once you do that, the multipath layer should return errors to the hung
command and it should exit.

Once the command errors out, you should be able to manually remove the
failed paths.  The "multipath -F" command should remove any unused
multipath maps; however, "unused" means that the VG needs to be
deactivated first.  Once you set "fail_if_no_path", you should be able to
deactivate the VG even though all paths are down.
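Putting the steps above together, the cleanup would look something like this (mpath23 and vg_data are hypothetical names for the stale map and the volume group sitting on it; these commands need root and a real multipath setup):

```shell
# 1. Stop queueing on the stale map so hung I/O errors out
#    instead of waiting forever for a path to return:
dmsetup message mpath23 0 fail_if_no_path

# 2. Now that I/O can fail, deactivate the volume group on the map:
vgchange -an vg_data

# 3. Flush all unused multipath maps, including the now-idle mpath23:
multipath -F
```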


Hi Eugene,

There's another Bugzilla entry related to this issue:

There is a new keyword for multipath.conf, queue_without_daemon, which when set will discard I/O to the particular multipath map.

This feature has made it into RHEL 5.3, but unfortunately it couldn't be included in RHEL 4.8.

It might be part of RHEL 4.9.
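On releases that have the keyword, it is a defaults-section setting in /etc/multipath.conf; a minimal sketch, assuming the stock RHEL 5.3 syntax (values here are illustrative, not a recommendation):

```
defaults {
    # Do not keep queueing I/O on a map once multipathd is stopped;
    # queued I/O is failed instead of hanging forever.
    queue_without_daemon no
}
```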