ONTAP Hardware

vSeries Path Selection to 3rd Party Array

MRJORDANG
4,693 Views

Hello,

We are experiencing performance issues with data hosted on an aggregate that virtualizes a 3rd party array. After working with NetApp support, the conclusion was that the disks from the 3rd party array are not keeping up. That prompted us to open a case with EMC. EMC quickly found that SPA on the EMC array is far more heavily utilized than SPB. They recommended balancing the load between the storage processors.

As far as I can tell, there is no way to manually configure our v3240 to use specific paths to 3rd party storage. Is this true? If not, how would I go about configuring our v3240 to use a path to a 3rd party LUN via SPB instead of SPA?

You can see from the output of storage show disk -p and aggr status -r that we are overutilizing the path through sanswitch1, port 3/10, which goes to SPA. I'd like to swap the primary path with the secondary path so that more LUNs are accessed through sanswitch0, port 5/7, which goes to SPB.

# ssh filer storage show disk -p

PRIMARY                PORT  SECONDARY              PORT SHELF BAY
---------------------- ----  ---------------------- ---- ---------
sanswitch1:3-10.0L0    -     sanswitch0:5-7.0L0     -    -     -
sanswitch1:3-10.0L1    -     sanswitch0:5-7.0L1     -    -     -
sanswitch1:3-10.0L2    -     sanswitch0:5-7.0L2     -    -     -
sanswitch1:3-10.0L3    -     sanswitch0:5-7.0L3     -    -     -
sanswitch1:3-10.0L4    -     sanswitch0:5-7.0L4     -    -     -
sanswitch1:3-10.0L5    -     sanswitch0:5-7.0L5     -    -     -
sanswitch1:3-10.0L6    -     sanswitch0:5-7.0L6     -    -     -
sanswitch1:3-10.0L7    -     sanswitch0:5-7.0L7     -    -     -

# ssh filer aggr status -r clariion001_ssd_001
Aggregate clariion001_ssd_001 (online, raid0) (block checksums)
  Plex /clariion001_ssd_001/plex0 (online, normal, active, pool0)
    RAID group /clariion001_ssd_001/plex0/rg0 (normal, block checksums)

      RAID Disk Device                  HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)    Phys (MB/blks)
      --------- ------                  ------------- ---- ---- ---- ----- --------------    --------------
      data      sanswitch1:3-10.0L0  5b    -   -          0   LUN   N/A 264333/541353984  267003/546824096
      data      sanswitch1:3-10.0L1  5b    -   -          0   LUN   N/A 264333/541353984  267003/546824096
      data      sanswitch1:3-10.0L2  5b    -   -          0   LUN   N/A 264333/541353984  267003/546824096
      data      sanswitch1:3-10.0L3  5b    -   -          0   LUN   N/A 264333/541353984  267003/546824096
      data      sanswitch1:3-10.0L4  5b    -   -          0   LUN   N/A 264333/541353984  267003/546824096
      data      sanswitch1:3-10.0L5  5b    -   -          0   LUN   N/A 264333/541353984  267003/546824096
      data      sanswitch1:3-10.0L6  5b    -   -          0   LUN   N/A 264333/541353984  267003/546824096
      data      sanswitch1:3-10.0L7  5b    -   -          0   LUN   N/A 264333/541353984  267003/546824096

    RAID group /clariion001_ssd_001/plex0/rg1 (normal, block checksums)

      RAID Disk Device                  HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)    Phys (MB/blks)
      --------- ------                  ------------- ---- ---- ---- ----- --------------    --------------
      data      sanswitch0:3-7.0L0   5a    -   -          0   LUN   N/A 264333/541353984  267003/546824096
      data      sanswitch1:6-10.0L1  5d    -   -          0   LUN   N/A 264333/541353984  267003/546824096
      data      sanswitch0:3-7.0L2   5a    -   -          0   LUN   N/A 264333/541353984  267003/546824096
      data      sanswitch1:6-10.0L3  5d    -   -          0   LUN   N/A 264333/541353984  267003/546824096
      data      sanswitch0:3-7.0L4   5a    -   -          0   LUN   N/A 264333/541353984  267003/546824096
      data      sanswitch0:3-7.0L5   5a    -   -          0   LUN   N/A 264333/541353984  267003/546824096
      data      sanswitch0:3-7.0L6   5a    -   -          0   LUN   N/A 264333/541353984  267003/546824096

6 REPLIES

thomas_glodde

Please try a "storage load balance" and then check the output again.

NetApp only allows 2 paths to a disk and uses them active/passive, so only one path carries I/O for a given LUN. It does try to load balance, though, e.g. 10 LUNs over 1a and 10 LUNs over 1b or similar. Path-down conditions and the like can mess that distribution up, which then requires running "storage load balance" again.
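
Something like this on the filer console, just a sketch using the same commands from your own output:

  filer> storage load balance       (redistribute array LUN paths across the initiators)
  filer> storage load show          (re-check the per-initiator I/O split)
  filer> storage show disk -p       (see whether the primary/secondary paths got swapped)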

MRJORDANG

"storage load balance" didn't change anything with the primary and secondary paths.   Here is the output of storage load show.  Initiator 5b's path is to SPA.   I'd like to move some of that traffic to one of the other initiators.

Note: the LUNs with zeros for their stats are owned by the other head in our HA pair.

filer> storage load show
Initiator port: 5a connected to sanswitch0:2-24.
  LUN                          Serial #       Target Port    %I/O  I/O (blocks)
    6  6006016098022700C8E68DA8B6FFDE11  500601644460281a     20%       544
    4  6006016098022700C6E68DA8B6FFDE11  500601644460281a     20%       543
    2  6006016098022700C4E68DA8B6FFDE11  500601644460281a     19%       540
    5  6006016098022700C7E68DA8B6FFDE11  500601644460281a     19%       540
    0  6006016098022700C2E68DA8B6FFDE11  500601644460281a     19%       538

Initiator port: 5b connected to sanswitch1:2-24.
  LUN                          Serial #       Target Port    %I/O  I/O (blocks)
    0  6006016098022700BAE68DA8B6FFDE11  500601654460281a     12%       550
    7  6006016098022700C1E68DA8B6FFDE11  500601654460281a     12%       546
    2  6006016098022700BCE68DA8B6FFDE11  500601654460281a     12%       544
    5  6006016098022700BFE68DA8B6FFDE11  500601654460281a     12%       544
    6  6006016098022700C0E68DA8B6FFDE11  500601654460281a     12%       541
    3  6006016098022700BDE68DA8B6FFDE11  500601654460281a     12%       541
    1  6006016098022700BBE68DA8B6FFDE11  500601654460281a     12%       540
    4  6006016098022700BEE68DA8B6FFDE11  500601654460281a     12%       535
    1  6006016086402600B3DDE8B50B64E211  50060167446019d2      0%         0
    0  6006016086402600B2DDE8B50B64E211  50060167446019d2      0%         0
    3  6006016086402600B5DDE8B50B64E211  50060167446019d2      0%         0
    2  6006016086402600B4DDE8B50B64E211  50060167446019d2      0%         0

Initiator port: 5d connected to sanswitch1:5-48.
  LUN                          Serial #       Target Port    %I/O  I/O (blocks)
    1  6006016098022700C3E68DA8B6FFDE11  5006016d4460281a     50%       542
    3  6006016098022700C5E68DA8B6FFDE11  5006016d4460281a     49%       539
    7  6006016098022700C9E68DA8B6FFDE11  5006016d4460281a      0%         0
    0  6006016086402600B6DDE8B50B64E211  5006016f446019d2      0%         0
    2  6006016086402600B8DDE8B50B64E211  5006016f446019d2      0%         0
    1  6006016086402600B7DDE8B50B64E211  5006016f446019d2      0%         0
    3  6006016086402600B9DDE8B50B64E211  5006016f446019d2      0%         0

aborzenkov

CLARiiON is active/passive, so I expect the active paths to reflect LUN ownership. Did you check the LUN-to-SP assignment?
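
If I remember the Navisphere CLI right, something like this shows the default vs. current owner per LUN (flags from memory, please verify against your release; <sp_ip> and <lun_number> are placeholders):

  naviseccli -h <sp_ip> getlun <lun_number> -default -owner

If the Default Owner and Current Owner differ, that LUN has been trespassed.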

MRJORDANG

We did check that. On the CLARiiON, we can assign a default owner, SPA or SPB, on a per-LUN basis. The default ownership of the LUNs was originally split equally between SPA and SPB. I believe the host (the v3240) dictates which path is actually used. Since the v3240 preferred the path to SPA over SPB for whatever reason, many of the LUNs originally owned by SPB were trespassed to SPA. That is where we are now: SPA owns most of the LUNs and is handling most of the traffic.

We can trespass the LUNs back to SPB on the CLARiiON; however, we believe the LUNs would just be trespassed back to SPA if the v3240 didn't change its primary path.

thomas_glodde

Maybe reassign the LUNs to their default SPs on the EMC side and then run a storage load balance? I haven't checked the official recommendations when it comes to V-Series on an EMC array (I only had HP EVAs in the past); do they mention anything?
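
Rough sketch, from memory, so please double-check the Navisphere syntax for your release (<SPB_ip> is a placeholder):

  naviseccli -h <SPB_ip> trespass mine     (pull the LUNs whose default owner is SPB back to SPB)

and then on the filer:

  filer> storage load balance
  filer> storage show disk -p              (check whether the primary paths moved to sanswitch0:5-7 / SPB)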

MRJORDANG

I'll double-check, but I don't think it mentioned anything. I have a case open with NetApp support as well. No response yet... I'll update this thread when I get this resolved.
