FAS2050 performance issues?

I am having (I think) performance issues with a new FAS2050. Could someone please take a look at my setup to validate it and point out any mistakes I may have made? Sorry for the lengthy post...

Disks and aggregates:

It’s an active/active cluster fully loaded with 300GB 15K RPM SAS disks. The disks are split evenly between the heads into two aggregates, with 9 disks in a RAID-DP raid group and one disk set aside as a spare for each head (20 disks total).
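Just to make the spindle count explicit (a back-of-the-envelope sketch of the layout above; RAID-DP dedicates two parity disks per raid group, so only 7 disks per head actually carry data, which bounds per-head sequential throughput):

```python
# Spindle math for the layout described above:
# 10 disks per head, 9 in one RAID-DP raid group, 1 spare.
DISKS_PER_HEAD = 10
RAID_GROUP = 9   # disks in the RAID-DP raid group
PARITY = 2       # RAID-DP uses two parity disks per raid group
SPARES = DISKS_PER_HEAD - RAID_GROUP

data_disks = RAID_GROUP - PARITY
print(f"data spindles per head: {data_disks}")  # -> 7
print(f"spares per head: {SPARES}")             # -> 1
```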

Network:

Each head also has an additional dual-port gigabit NIC. All ports on each head are trunked together in an LACP vif and connected to stacked Dell PowerConnect 6248 switches in cross-stack LACP LAGs (hashing is set to source/destination IP and source/destination TCP/UDP port for both LAGs). Here’s the status of one of the vifs:

fas2050-1b> vif status vifb

default: transmit 'IP Load balancing', VIF Type 'multi_mode', fail 'log'
vifb: 4 links, transmit 'IP Load balancing', VIF Type 'lacp' fail 'default'
          VIF Status     Up      Addr_set
         up:
         e1b: state up, since 28May2010 12:50:23 (89+20:07:43)
                 mediatype: auto-1000t-fd-up
                 flags: enabled
                 active aggr, aggr port: e1b
                 input packets 39740469, input bytes 43762122912
                 input lacp packets 258765, output lacp packets 258904
                 output packets 289610, output bytes 35591320
                 up indications 4, broken indications 0
                 drops (if) 0, drops (link) 0
                 indication: up at 28May2010 12:50:23
                         consecutive 0, transitions 4
         e1a: state up, since 28May2010 12:50:23 (89+20:07:43)
                 mediatype: auto-1000t-fd-up
                 flags: enabled
                 active aggr, aggr port: e1b
                 input packets 29814366, input bytes 26751355374
                 input lacp packets 258765, output lacp packets 258904
                 output packets 49435851, output bytes 42793459168
                 up indications 4, broken indications 0
                 drops (if) 0, drops (link) 0
                 indication: up at 28May2010 12:50:23
                         consecutive 0, transitions 4
         e0b: state up, since 28May2010 12:50:23 (89+20:07:43)
                 mediatype: auto-1000t-fd-up
                 flags: enabled
                 active aggr, aggr port: e1b
                 input packets 12253073, input bytes 1915140440
                 input lacp packets 258762, output lacp packets 258902
                 output packets 7894601, output bytes 4554546490
                 up indications 4, broken indications 0
                 drops (if) 0, drops (link) 0
                 indication: up at 28May2010 12:50:23
                         consecutive 0, transitions 4
         e0a: state up, since 28May2010 12:50:23 (89+20:07:43)
                 mediatype: auto-1000t-fd-up
                 flags: enabled
                 active aggr, aggr port: e1b
                 input packets 14268459, input bytes 3068823671
                 input lacp packets 258765, output lacp packets 258904
                 output packets 40286545, output bytes 18936230853
                 up indications 4, broken indications 0
                 drops (if) 0, drops (link) 0
                 indication: up at 28May2010 12:50:23
                         consecutive 0, transitions 4

fas2050-1b> ifconfig vifb
vifb: flags=4948043<UP,BROADCAST,RUNNING,MULTICAST,TCPCKSUM,NOWINS> mtu 1500
        inet 172.16.16.223 netmask 0xffffff00 broadcast 172.16.16.255
        inet 172.17.17.224 netmask 0xffffff00 broadcast 172.17.17.255
        partner vifa (not in use)
        ether 02:a0:98:11:5c:4c (Enabled virtual interface)

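One thing worth keeping in mind about this setup: LACP load balancing never splits a single flow across links. Here is a minimal sketch (the real hash on the filer and switch differs; this is just illustrative) of why one iSCSI TCP connection is pinned to one physical link and therefore capped at roughly 1 Gbps regardless of how many links are in the vif:

```python
# Sketch (NOT NetApp's or Dell's actual hash) of src/dst IP + TCP port
# link selection: the hash input is constant for a single TCP flow,
# so every frame of that flow lands on the same physical link.
import hashlib

LINKS = ["e0a", "e0b", "e1a", "e1b"]

def pick_link(src_ip, dst_ip, src_port, dst_port):
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}".encode()
    digest = hashlib.md5(key).digest()  # stand-in for the real hash function
    return LINKS[digest[0] % len(LINKS)]

# One iSCSI connection (host -> filer, port 3260) always maps to the
# same link, so a single session cannot exceed one link's bandwidth:
flow = ("172.16.16.225", "172.16.16.223", 51234, 3260)
assert all(pick_link(*flow) == pick_link(*flow) for _ in range(100))
```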
The connecting host is a Windows 2008 R2 server with a dual-port Intel gigabit NIC set up as a team, connected to a cross-stack LAG on the switches and assigned the IP 172.16.16.225. I tried both LACP and GEC teaming with very similar results. Jumbo frames are not enabled (the same switches are actively used in production to connect VMware ESX hosts and a couple of other iSCSI SAN systems). If moving to jumbo frames is highly recommended, I could probably pull it off only if they can be enabled on the FAS and the switches without affecting the other systems that aren’t set up for jumbo frames yet.

Problem:

With a LUN attached to the host via the MS iSCSI initiator and initialized as GPT, NTFS with a 32K allocation unit size, I am getting 35-50 MBps transfers at best.

Just for the heck of it I set up an additional single port with the IP 172.17.17.225, added a connection, and changed the MCS policy from round-robin to weighted with priority given to the single-port connection. With that the speed INCREASED to about 60 MBps. What gives? Does anyone have a similar setup? Any suggestions at all (short of throwing the switches out the window and replacing them with Cisco, which is cost-prohibitive for us at this point) on how to tune the performance? Is there a way to achieve transfer speeds exceeding 1 Gbps to a single LUN between a multi-port Windows 2008 R2 host and a target on a quad-port FAS2050 head?

Re: FAS2050 performance issues?

I am actually seeing the exact same issue on a 2040 (7.3.3) with the new Dell PowerConnect 7024s. I have been unable to make any progress on it; the performance is just horrible in this configuration.