Debian Linux multipath.conf

ralfgross

Hello,


this is the first time I've tried to access a NetApp LUN from a Debian Lenny system. AFAIK Debian is not supported, which is why I'm asking here. I have no control over the NetApp filers; I'm only responsible for the Linux server. This is a test of whether (and how) the LUNs can be accessed from Debian Linux.


The setup seems to work: after setting up multipathing as described below, I can see the device, create a filesystem, and mount it. The only problem is that performance is a bit low (~50 MB/s), so it would be nice if someone could check whether there is something wrong with my config/setup.


[Code]

$ ls -l /dev/mapper/netapp-lun1
brw-rw---- 1 root disk 254, 0 30. Nov 18:34 /dev/mapper/netapp-lun1

[/Code]


[Code]

$ df -h

Filesystem            Size  Used Avail Use% Mounted on
....
/dev/mapper/netapp-lun1
                      2.0T   22G  2.0T   2% /mnt

[/Code]
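For reference, a direct sequential read from the multipath device is a simple way to arrive at a figure like the ~50 MB/s above (a sketch; iflag=direct bypasses the page cache, so the result reflects the SAN path rather than cached data):

[Code]

$ dd if=/dev/mapper/netapp-lun1 of=/dev/null bs=1M count=4096 iflag=direct

[/Code]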



I installed the netapp_linux_host_utilities-5-3.i386.rpm package by simply unpacking its contents into /opt and /usr.
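In case someone wants to reproduce this: the RPM can be unpacked without installing rpm itself. A minimal sketch, assuming rpm2cpio and cpio from Debian's standard rpm and cpio packages, and a hypothetical path to the RPM file:

[Code]

# extract the RPM payload into a scratch directory
$ mkdir /tmp/nlhu && cd /tmp/nlhu
$ rpm2cpio /path/to/netapp_linux_host_utilities-5-3.i386.rpm | cpio -idmv
# the archive contains opt/ and usr/ trees; copy them into place
$ cp -a opt/* /opt/ && cp -a usr/* /usr/

[/Code]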


[Code]

$ sanlun lun show -v

controller:              lun-pathname             device filename  adapter  protocol          lun size         lun state
filer1:  /vol/fcp_f008_lin00/lin00/lin00.lun  /dev/sdc         host3    FCP            2t (2199023255552)  GOOD    
    Serial number: Hn/4mJ/ZQr7q
    Controller FCP nodename:500a09808779788f  Controller FCP portname:500a09819779788f
    Controller adapter name: v.0c
    Controller IP address:    53.60.10.72
    Controller volume name:fcp_f008_lin00   FSID:0x15fe9263
    Controller qtree name:/vol/fcp_f008_lin00/lin00   ID:0x1000000
    Controller snapshot name:   ID:0x0
filer1:  /vol/fcp_f008_lin00/lin00/lin00.lun  /dev/sdd         host3    FCP            2t (2199023255552)  GOOD    
    Serial number: Hn/4mJ/ZQr7q
    Controller FCP nodename:500a09808779788f  Controller FCP portname:500a09828779788f
    Controller adapter name: 0d
    Controller IP address:    53.60.10.72
    Controller volume name:fcp_f008_lin00   FSID:0x15fe9263
    Controller qtree name:/vol/fcp_f008_lin00/lin00   ID:0x1000000
    Controller snapshot name:   ID:0x0
filer1:  /vol/fcp_f008_lin00/lin00/lin00.lun  /dev/sda         host0    FCP            2t (2199023255552)  GOOD    
    Serial number: Hn/4mJ/ZQr7q
    Controller FCP nodename:500a09808779788f  Controller FCP portname:500a09829779788f
    Controller adapter name: v.0d
    Controller IP address:    53.60.10.72
    Controller volume name:fcp_f008_lin00   FSID:0x15fe9263
    Controller qtree name:/vol/fcp_f008_lin00/lin00   ID:0x1000000
    Controller snapshot name:   ID:0x0
filer1:  /vol/fcp_f008_lin00/lin00/lin00.lun  /dev/sdb         host0    FCP            2t (2199023255552)  GOOD    
    Serial number: Hn/4mJ/ZQr7q
    Controller FCP nodename:500a09808779788f  Controller FCP portname:500a09818779788f
    Controller adapter name: 0c
    Controller IP address:    53.60.10.72
    Controller volume name:fcp_f008_lin00   FSID:0x15fe9263
    Controller qtree name:/vol/fcp_f008_lin00/lin00   ID:0x1000000
    Controller snapshot name:   ID:0x0

[/Code]


Then I configured multipathing by following an example in the KB (http://now.netapp.com/NOW/knowledge/docs/hba/linux/rellinuxhu52/html/software/setup/GUID-C5CFA561-7FD1-45F2-8BF8-1C7F04910274.html)


[Code]

defaults {
        user_friendly_names     yes
        max_fds                 max
        queue_without_daemon    no
}



devices {
    device {
            vendor             "NETAPP"
            product         "LUN"
            getuid_callout          "/lib/udev/scsi_id -g -u -s /block/%n"
            prio_callout            "/sbin/mpath_prio_netapp /dev/%n"
            features                "1 queue_if_no_path"
            hardware_handler        "0"
            path_grouping_policy    group_by_prio
            failback                immediate
            rr_weight               uniform
            rr_min_io               128
            path_checker            directio
            flush_on_last_del       yes
    }
}


multipaths {
        multipath {
               wwid    360a98000486e2f346d4a2f5a51723771
               alias   netapp-lun1
        }
}

[/Code]
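After writing this to /etc/multipath.conf, the maps have to be rebuilt for the settings to take effect. A sketch, assuming the Debian multipath-tools init script (if it doesn't support reload, a restart or a manual flush-and-rescan does the same):

[Code]

$ /etc/init.d/multipath-tools reload
# or by hand: flush unused maps and rebuild them
$ multipath -F
$ multipath -v2

[/Code]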



Multipath state:


[Code]

$ multipath -ll -v3
cciss!c0d0: device node name blacklisted
sdb: not found in pathvec
sdb: mask = 0x5
sdb: dev_t = 8:16
sdb: size = 4294967296
sdb: subsystem = scsi
sdb: vendor = NETAPP 
sdb: product = LUN            
sdb: rev = 7340
sdb: h:b:t:l = 0:0:1:0
sdb: tgt_node_name = 0x500a09808779788
sda: not found in pathvec
sda: mask = 0x5
sda: dev_t = 8:0
sda: size = 4294967296
sda: subsystem = scsi
sda: vendor = NETAPP 
sda: product = LUN            
sda: rev = 7340
sda: h:b:t:l = 0:0:0:0
sda: tgt_node_name = 0x500a09808779788
sr0: device node name blacklisted
dm-0: device node name blacklisted
sdd: not found in pathvec
sdd: mask = 0x5
sdd: dev_t = 8:48
sdd: size = 4294967296
sdd: subsystem = scsi
sdd: vendor = NETAPP 
sdd: product = LUN            
sdd: rev = 7340
sdd: h:b:t:l = 3:0:1:0
sdd: tgt_node_name = 0x500a09808779788
sdc: not found in pathvec
sdc: mask = 0x5
sdc: dev_t = 8:32
sdc: size = 4294967296
sdc: subsystem = scsi
sdc: vendor = NETAPP 
sdc: product = LUN            
sdc: rev = 7340
sdc: h:b:t:l = 3:0:0:0
sdc: tgt_node_name = 0x500a09808779788
===== paths list =====
uuid hcil    dev dev_t pri dm_st  chk_st  vend/prod/rev           
     0:0:1:0 sdb 8:16  -1  [undef][undef] NETAPP  ,LUN            
     0:0:0:0 sda 8:0   -1  [undef][undef] NETAPP  ,LUN            
     3:0:1:0 sdd 8:48  -1  [undef][undef] NETAPP  ,LUN            
     3:0:0:0 sdc 8:32  -1  [undef][undef] NETAPP  ,LUN            
params = 1 queue_if_no_path 0 2 1 round-robin 0 2 1 8:16 128 8:48 128 round-robin 0 2 1 8:0 128 8:32 128
status = 2 0 0 0 2 1 A 0 2 0 8:16 A 0 8:48 A 0 E 0 2 0 8:0 A 0 8:32 A 0
sdb: mask = 0x4
sdb: path checker = directio (controller setting)
directio: starting new request
directio: async io getevents returns 1 (errno=No such file or directory)
directio: io finished 4096/0
sdb: state = 2
sdb: mask = 0x8
sdb: getprio = /sbin/mpath_prio_netapp /dev/%n (controller setting)
sdb: prio = 4
sdd: mask = 0x4
sdd: path checker = directio (controller setting)
directio: starting new request
directio: async io getevents returns 1 (errno=No such file or directory)
directio: io finished 4096/0
sdd: state = 2
sdd: mask = 0x8
sdd: getprio = /sbin/mpath_prio_netapp /dev/%n (controller setting)
sdd: prio = 4
sda: mask = 0x4
sda: path checker = directio (controller setting)
directio: starting new request
directio: async io getevents returns 1 (errno=No such file or directory)
directio: io finished 4096/0
sda: state = 2
sda: mask = 0x8
sda: getprio = /sbin/mpath_prio_netapp /dev/%n (controller setting)
sda: prio = 1
sdc: mask = 0x4
sdc: path checker = directio (controller setting)
directio: starting new request
directio: async io getevents returns 1 (errno=No such file or directory)
directio: io finished 4096/0
sdc: state = 2
sdc: mask = 0x8
sdc: getprio = /sbin/mpath_prio_netapp /dev/%n (controller setting)
sdc: prio = 1
netapp-lun1 (360a98000486e2f346d4a2f5a51723771) dm-0 NETAPP  ,LUN          
[size=2.0T][features=1 queue_if_no_path][hwhandler=0]
\_ round-robin 0 [prio=8][active]
\_ 0:0:1:0 sdb 8:16  [active][ready]
\_ 3:0:1:0 sdd 8:48  [active][ready]
\_ round-robin 0 [prio=2][enabled]
\_ 0:0:0:0 sda 8:0   [active][ready]
\_ 3:0:0:0 sdc 8:32  [active][ready]

[/Code]



stevensmithSCC

Hi,

Can you advise whether your NetApp storage system is a cluster or a single head?

Cheers

ralfgross

Sorry for not providing this information... it's a MetroCluster. In the meantime the storage admin told me that I should use mpath_prio_alua instead of mpath_prio_netapp. My current config looks like this:

[Code]

defaults {
        user_friendly_names yes
        max_fds                 max
        queue_without_daemon    no
}

blacklist {
        wwid <DevId>
        devnode "^(ram|raw|loop|fd|md|dm-|sr|scd|st)[0-9]*"
        devnode "^hd[a-z]"
        devnode "^cciss!c[0-9]d[0-9]*[p[0-9]*]"
}

devices {
        device {
                vendor                  "NETAPP"
                product                 "LUN"
                getuid_callout          "/lib/udev/scsi_id -g -u -s /block/%n"
                #prio_callout            "/sbin/mpath_prio_netapp /dev/%n"
                prio_callout            "/sbin/mpath_prio_alua /dev/%n"
                features                "1 queue_if_no_path"
                hardware_handler        "0"
                path_grouping_policy    group_by_prio
                failback                immediate
                rr_weight               uniform
                rr_min_io               128
                path_checker            directio
                flush_on_last_del       yes
        }
}

multipaths {
        multipath {
               wwid    360a98000486e2f346d4a2f5a51723771
               alias   netapp-lun1
        }
}

[/Code]
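To check whether the target actually reports ALUA to the host, something like this should work (sg_rtpg comes from Debian's sg3-utils package; any of the LUN's path devices can be queried):

[Code]

# ask the target for its port group states (REPORT TARGET PORT GROUPS)
$ sg_rtpg /dev/sda
# with ALUA on, the path groups should again show distinct priorities
$ multipath -ll netapp-lun1

[/Code]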

stevensmithSCC

If you had a cluster, the first thing I was going to suggest was changing the prio_callout to ALUA. Without ALUA in a MetroCluster environment I would expect reduced transfer rates, given the amount of traffic going through the cluster interconnect. The only catch is that ALUA needs to be turned on for the igroup on the storage system; the easiest way to do this is NetApp System Manager, but it can also be done from the command line:

[Code]

igroup set <iGroup Name> alua on

[/Code]
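You can verify the setting afterwards with igroup show in verbose mode; from memory (the exact output varies between ONTAP versions) it reports an ALUA field:

[Code]

filer1> igroup show -v <iGroup Name>
    ...
        ALUA: Yes

[/Code]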

Other than that, your config looks sound; I would need some stats from the filer to identify any issues on that side.

Let me know how you get on.
