EF & E-Series, SANtricity, and Related Plug-ins

Not all LUNs are visible on HBAs

sanadmin_do

We use four Linux servers running SLES 11 SP4 in our backup environment. Two servers are connected via FC to two E2700s (FW 8.40.30.03); the other two are connected via FC to two E2800s (FW 11.50). Our problem: not all LUNs are recognized on the built-in HBAs. Despite restarting all systems and updating the HBA drivers and firmware, the problem remains.

 

Here's an example from `multipath -ll`:

tsm61_cdata_002 (3600a098000af37c20000029a5a24aa4f) dm-66 NETAPP,INF-01-00
size=3.0T features='4 queue_if_no_path pg_init_retries 50 retain_attached_hw_handle' hwhandler='1 rdac' wp=rw
|-+- policy='service-time 0' prio=14 status=active
| |- 7:0:2:15 sdhi 133:128 active ready running
| `- 9:0:3:15 sdih 135:16  active ready running
`-+- policy='service-time 0' prio=9 status=enabled
  |- 7:0:3:15 sdgj 131:240 active ready running
  `- 9:0:2:15 sdel 128:208 active ready running

 

tsm61_data_001 (3600a098000af32d60000026c5a24ab38) dm-7 NETAPP,INF-01-00
size=3.0T features='4 queue_if_no_path pg_init_retries 50 retain_attached_hw_handle' hwhandler='1 rdac' wp=rw
|-+- policy='service-time 0' prio=14 status=active
| |- 7:0:0:15 sdbo 68:32   active ready running
| `- 9:0:0:15 sdq  65:0    active ready running
`-+- policy='service-time 0' prio=9 status=enabled
  `- 7:0:1:15 sdcn 69:176  active ready running

 

tsm71_data_010 (3600a098000af37c2000002b35a24ac3c) dm-70 NETAPP,INF-01-00
size=3.0T features='4 queue_if_no_path pg_init_retries 50 retain_attached_hw_handle' hwhandler='1 rdac' wp=rw
|-+- policy='service-time 0' prio=14 status=active
| |- 9:0:3:12 sdie 134:224 active ready running
| `- 7:0:2:12 sdhf 133:80  active ready running
`-+- policy='service-time 0' prio=9 status=enabled
  |- 7:0:3:12 sdgg 131:192 active ready running
  `- 9:0:2:12 sdei 128:160 active ready running


tsm71_cdata_005 (3600a098000af32d60000027b5a24ad7c) dm-20 NETAPP,INF-01-00
size=3.0T features='4 queue_if_no_path pg_init_retries 50 retain_attached_hw_handle' hwhandler='1 rdac' wp=rw
|-+- policy='service-time 0' prio=14 status=active
| |- 7:0:0:5  sdbe 67:128  active ready running
| `- 9:0:0:5  sdg  8:96    active ready running
`-+- policy='service-time 0' prio=9 status=enabled
  `- 7:0:1:5  sdcd 69:16   active ready running
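
To make the symptom easier to spot, here is a small sketch that counts paths per multipath device in `multipath -ll` output. The helper name `count_paths` and the awk patterns are mine, not from the original post; healthy LUNs in the listings above report 4 paths, the broken ones only 3:

```shell
# Sketch: count paths per multipath device in `multipath -ll` output.
# Feed it real output with:  multipath -ll | count_paths
count_paths() {
  awk '
    /NETAPP/      { if (lun != "") print lun, n; lun = $1; n = 0 }
    / sd[a-z]+ /  { n++ }
    END           { if (lun != "") print lun, n }
  '
}

# Demo on a trimmed sample of the output above; prints:
#   tsm61_cdata_002 4
#   tsm61_data_001 3
count_paths <<'EOF'
tsm61_cdata_002 (3600a098000af37c20000029a5a24aa4f) dm-66 NETAPP,INF-01-00
| |- 7:0:2:15 sdhi 133:128 active ready running
| `- 9:0:3:15 sdih 135:16  active ready running
  |- 7:0:3:15 sdgj 131:240 active ready running
  `- 9:0:2:15 sdel 128:208 active ready running
tsm61_data_001 (3600a098000af32d60000026c5a24ab38) dm-7 NETAPP,INF-01-00
| |- 7:0:0:15 sdbo 68:32   active ready running
| `- 9:0:0:15 sdq  65:0    active ready running
  `- 7:0:1:15 sdcn 69:176  active ready running
EOF
```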

 

Does anyone have an idea where the problem could be?

1 ACCEPTED SOLUTION

sanadmin_do

The issue is solved. There was a configuration error in our multipath.conf: a regular expression intended to match only the first SCSI device, "sda", was also matching the devices "sdaa", "sdab", "sdac", and so on.
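
For anyone hitting the same trap, the entry likely looked something like the following in the blacklist section of /etc/multipath.conf (a sketch; the exact pattern from the original config is not shown in the post). multipath treats `devnode` blacklist entries as unanchored regular expressions, so "sda" matches anywhere in the device name; anchoring the expression restricts it to the intended disk:

```text
blacklist {
    # Problematic: unanchored, so "sda" also matches sdaa, sdab, sdac, ...
    # and all paths on those devices are silently blacklisted.
    devnode "sda"

    # Fixed: anchors limit the match to the local disk sda itself.
    # devnode "^sda$"
}
```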


5 REPLIES 5

Zubrania

Hi,

Either zoning or LUN mapping might be the problem.

sanadmin_do

We've reviewed the zoning several times, including with OnCommand Insight; it is fine. It cannot be the LUN mapping, as nothing has changed there. The problem arose after the storage systems were restarted for the FW update.

Zubrania

Hello

Have you sorted out the issue?

 

sanadmin_do

@Zubrania: No, we still have the problem. In the meantime we have tried to solve it with NetApp 1st and 2nd level support, without success. We have checked the zoning several times; everything is okay there. We have also checked the settings on the Linux servers and adjusted them according to NetApp's specifications, but that did not solve the problem.

We're thankful for any hint.

