
Oracle Linux 6.5 multipath misconfiguration


We inherited an Oracle Linux 6.5 server with a multipathing problem.  It's connected to a FAS3170 HA pair running Data ONTAP 8.1.3P1 in 7-mode, with two QLE2560 HBAs in the host.  ALUA is enabled on the igroups and the SAN zoning looks correct, but all I/O traverses a single HBA and a single target port on the NetApp.  This pushes a lot of traffic across the VTIC and generates "FCP Partner Path Misconfigured" errors.
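
If it helps, the misrouted traffic is also visible from the controller side.  My understanding of the 7-mode CLI (a sketch; the LUN path comes from the sanlun output below) is that the partner ops counters will show it:

lun stats -o -i 2 /vol/DataA/data1.lun    # non-zero Partner Ops/KB means I/O is crossing the VTIC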

 

On the server, multipathd logs errors getting the device and ioctl errors "adding target to table."  The multipath -ll output looks like this:

 

mpathv (WWID...) dm-29 NETAPP,LUN
size=500G features='1 queue_if_no_path' hwhandler='1 alua' wp=rw
|-+- policy='round-robin 0' prio=50 status=active
| |- 7:0:1:13 sdau 66:224 active ready running
| `- 8:0:0:13 sdca 68:224 active ready running
`-+- policy='round-robin 0' prio=10 status=enabled
  |- 7:0:0:13 sdo   8:224  active ready running
  `- 8:0:1:13 sddg 70:224 active ready running

 

 

NetApp Linux Host Utilities 6.2 is installed.  The sanlun lun show -pv all output for one LUN:

 

ONTAP Path:          netapp3:/vol/DataA/data1.lun
LUN:                 13
LUN size:            500.1g
Controller CF State: Cluster Enabled
Controller Partner:  netapp4
Mode:                7
Multipath Provider:  Unknown
-----  ---------  ----------  ------------  ----------------------
state  path type  /dev/ node  host adapter  controller target port
-----  ---------  ----------  ------------  ----------------------
up     secondary  sddg        host8         0c
up     primary    sdca        host8         0d
up     primary    sdau        host7         0c
up     secondary  sdo         host7         0d
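
Since ALUA is supposed to be in play, one thing we can do to double-check that the array is actually reporting port states to the host is query the target port groups directly (a sketch, assuming sg3_utils is installed; sdo is the secondary path that's taking all the I/O):

sg_rtpg -d /dev/sdo    # decode REPORT TARGET PORT GROUPS; partner ports should report non-optimized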

 

On this disk, iostat shows all I/O going to /dev/sdo, from adapter host7 to target netapp4:0d, which is a secondary path, so everything crosses the VTIC to reach netapp3.
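
For reference, here's roughly how we're watching the per-path traffic (the sd names come from the multipath -ll output above):

iostat -xk 5 sdau sdca sdo sddg    # extended per-device stats in KB/s, 5-second interval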

 

 

I suspect there's something wrong in our multipath.conf:

 

defaults {
    user_friendly_names    yes
    max_fds            max
    flush_on_last_del    yes
    queue_without_daemon    no    
}

devices {
    device {
        vendor            "NETAPP"
        product            "LUN"
        path_grouping_policy    group_by_prio
        prio            "alua"
        getuid_callout        "/lib/udev/scsi_id -g -u -d /dev/%n"
        path_checker        tur
        path_selector        "round-robin 0"
        hardware_handler    "1 alua"
        failback        immediate
        rr_weight        uniform
        rr_min_io        128
        no_path_retry        queue
    }
}

 

I notice that user_friendly_names is set to "yes," where best practice is "no."  I want to change this, but I don't know whether it's the cause of the problem.
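
If we do flip it, my understanding of the procedure (a sketch, assuming nothing in /etc/fstab or the LVM filter references the mpath* names) is:

# after setting user_friendly_names no in /etc/multipath.conf:
multipath -F                  # flush unused multipath maps
multipathd -k"reconfigure"    # re-read the config; maps come back named by WWID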

 

Are the multipath.conf defaults and device sections even necessary after 6.4?  The HUK 6.2 Installation and Setup Guide and the Recommended Host Settings document aren't clear on this; the Install Guide makes it seem the blacklist is all that's needed.  The SnapDrive 5.2.2 documentation (pages 131-132) is explicit: "You do not have to set any values in the /etc/multipath.conf file if you are using either Red Hat Enterprise Linux (RHEL) 6.4 or later or Oracle Linux 6.4 or later. However, for SnapDrive for UNIX configuration you must still maintain a dummy /etc/multipath.conf file, which can either be empty or contain the blacklisting info, if required."
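
If the SnapDrive guidance applies to plain hosts too, the whole file could presumably shrink to a blacklist-only stub, something like this (a sketch; the sda entry is just an illustrative local boot disk, not from our config):

# /etc/multipath.conf -- minimal file for OL 6.4+, per the SnapDrive doc;
# the kernel's built-in defaults would handle the NETAPP/LUN settings.
blacklist {
    devnode "^sda$"    # hypothetical local boot disk, not a NetApp LUN
}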

 

Do these errors look like a multipath.conf problem?
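
One check I plan to run is comparing what multipathd is actually using against our file, since its interactive shell prints the merged, effective configuration:

multipathd -k"show config" | grep -A20 'NETAPP'    # effective settings for the NETAPP/LUN device stanza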

 

Thank you for reading; any advice would be appreciated.

 

Clint

 

Knowledge Base Articles used:

What do FCP Partner Path Misconfigured messages mean?

How to verify Linux fibre channel configurations with multipathing I/O (MPIO)

 
