Network and Storage Protocols

load balancing lacp vif problem

netapp_fh

Hi guys, I hope one of you can help me with my problem, or maybe I've made some errors in reading the docs/howtos.

 

following setup:

 

1x FAS3140 / 2 heads in active/active mode

ontap 8.0.1 7-mode

2x 4-port Gigabit NICs + onboard NICs

 

cisco 6509 core switch

 

Backup server: IBM 3550 with staging storage + 1x 4-port Gigabit NIC

 

 

FAS3140 connected with 4x 1Gbit links to the 6509

IBM server connected with 4x 1Gbit links to the 6509

 

 

Config in the NetApp /etc/rc file:

 

#Regenerated by registry Thu Aug 05 19:42:03 GMT+02:00 2010

#Auto-generated by setup Wed Apr 29 12:25:34 CEST 2009

hostname STORE

vif create lacp vif0 -b port e4a e4b e4c e4d

vif create lacp vif1 -b port e0a e0b

vif create single vif2 vif0 vif1

vlan create vif2 110

vif favor vif0

ifconfig vif2-110 `hostname`-vif2-110 netmask 255.255.255.0 partner vif2-110

ifconfig e0M `hostname`-e0M netmask 255.255.255.0 partner e0M

route add default 10.10.10.1 1

routed on

vif create lacp vif3 -b port e3a e3b

vif create lacp vif4 -b port e3c e3d

vif create single vif5 vif3 vif4

vif favor vif3

ifconfig vif5 10.0.81.101 netmask 255.255.255.0 partner vif5

options dns.enable on

options nis.enable off

savecore

 

Config on the 6509:

 

port-channel load-balance src-dst-mixed-ip-port

 

interface Port-channel120

description Backup

switchport

switchport access vlan 820

switchport trunk encapsulation dot1q

switchport mode access

switchport nonegotiate

lacp max-bundle 4

!

interface GigabitEthernet1/12

description Backup

switchport

switchport access vlan 820

switchport mode access

switchport nonegotiate

channel-protocol lacp

channel-group 120 mode active

!

interface GigabitEthernet1/31

description Backup

switchport

switchport access vlan 820

switchport mode access

switchport nonegotiate

channel-protocol lacp

channel-group 120 mode active

!

interface GigabitEthernet2/1

description Backup

switchport

switchport access vlan 820

switchport mode access

switchport nonegotiate

channel-protocol lacp

channel-group 120 mode active

!

interface GigabitEthernet2/11

description Backup

switchport

switchport access vlan 820

switchport mode access

switchport nonegotiate

channel-protocol lacp

channel-group 120 mode active

!

interface Port-channel100

description Storage-Head0-Act

switchport

switchport trunk encapsulation dot1q

switchport trunk allowed vlan 110

switchport mode trunk

lacp max-bundle 4

!

interface GigabitEthernet1/15

description Storage-Head0-E4D

switchport

switchport trunk allowed vlan 110

switchport mode trunk

spanning-tree portfast edge trunk

channel-protocol lacp

channel-group 100 mode active

!

interface GigabitEthernet1/23

description Storage-Head0-E4A

switchport

switchport trunk allowed vlan 110

switchport mode trunk

spanning-tree portfast edge trunk

channel-protocol lacp

channel-group 100 mode active

!

interface GigabitEthernet2/13

description Storage-Head0-E4B

switchport

switchport trunk allowed vlan 110

switchport mode trunk

spanning-tree portfast edge trunk

channel-protocol lacp

channel-group 100 mode active

!

interface GigabitEthernet2/15

description Storage-Head0-E4C

switchport

switchport trunk allowed vlan 110

switchport mode trunk

spanning-tree portfast edge trunk

channel-protocol lacp

channel-group 100 mode active

!

 

Config on the IBM server (bonding, ifcfg-bond0):

 

DEVICE=bond0

IPADDR=10.80.20.18

NETMASK=255.255.255.0

GATEWAY=10.80.20.1

DNS1=10.10.9.33

DNS2=10.10.9.34

ONBOOT=yes

MASTER=yes

BONDING_OPTS="mode=4 lacp_rate=1 miimon=100 xmit_hash_policy=layer3+4"

 

 

The EtherChannels with LACP on the 6509 are up and working for both devices...
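
For anyone double-checking a setup like this, the usual status commands would be something like (just a sketch, output omitted):

on the 6509:
show etherchannel 100 summary
show etherchannel 120 summary

on the filer:
vif status vif0

on the Linux backup server:
cat /proc/net/bonding/bond0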

 

But: when I start a backup job (NDMP) on the backup server (NetBackup 7.5 with NDMP option), the aggregate speed won't exceed approx. 1 Gbit/s for one NDMP stream...

We did several tests / changes with the bonding / LACP settings, but none of them got us more than 1 Gbit/s.

 

Is there maybe a mismatch in the LACP settings, since we are dumping from one src IP to one dst IP?

 

Or is the NDMP stream itself limited?

 

And yes, I've read

 

https://library.netapp.com/ecmdocs/ECMM1249824/html/nag/frameset.html

 

 

Maybe you can help me or give me some answers.

 

With regards, Fabian


sam_kimery

LACP aggregates the links into a single logical link, but it doesn't provide more than a single physical member's bandwidth to an individual host (e.g. aggregating 4x 1G Ethernet will only provide 1G max bandwidth for a single host). The additional bandwidth gains come from load balancing the servers/clients across all of the links that make up the aggregate (see the '-b' option of the vif command). The other benefit of LACP is redundancy: you can lose member links and not lose connectivity.
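
If you want to see which member link a given src/dst pair hashes to, IOS on the 6500 has a test command for this (a sketch; the addresses are made-up examples and the exact syntax may vary by IOS version):

test etherchannel load-balance interface port-channel 100 ip 10.80.20.18 10.80.10.50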

- Sam

mahesm

Once the hashing algorithm chooses a particular port for a stream, all the data for that stream goes through that very same port.
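
You can watch this on the filer while the backup runs; 7-mode can show per-link statistics for a vif (a sketch, with a one-second interval):

vif stat vif0 1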

strempel

Hi Fabian,

A single NDMP IP session will never use all 4 Gigabit links. You have one src IP and one dst IP, so ~100 MB/s is your limit.

Using 4 NDMP sessions with only one IP will again give you 100 MB/s if you run "-b ip" on the filer and src-dst-ip on the Cisco switch. Switching to port-based load balancing could help on the Cisco and FAS side, as every session will have a different src and dst port.
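
Roughly, the port-based setup would look like this (just a sketch, assuming you were on "-b ip"; the vif would have to be recreated, so plan a maintenance window):

On the Cisco (global config):
port-channel load-balance src-dst-port

On the FAS:
vif create lacp vif0 -b port e4a e4b e4c e4d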

Cheers,

Peter

netapp_fh

hi strempel,

What do you mean by "Switching to Port Based Load Balancing could help on Cisco and FAS side"?

On the 6509 we have

port-channel load-balance src-dst-mixed-ip-port

and on the FAS

vif create lacp vif0 -b port e4a e4b e4c e4d  (not -b ip)

This should mix src+dst IP + port, or am I wrong?

From the IOS docs we have several modes:

  dst-ip                 Dst IP Addr
  dst-mac                Dst Mac Addr
  dst-mixed-ip-port      Dst IP Addr and TCP/UDP Port
  dst-port               Dst TCP/UDP Port
  mpls                   Load Balancing for MPLS packets
  src-dst-ip             Src XOR Dst IP Addr
  src-dst-mac            Src XOR Dst Mac Addr
  src-dst-mixed-ip-port  Src XOR Dst IP Addr and TCP/UDP Port
  src-dst-port           Src XOR Dst TCP/UDP Port
  src-ip                 Src IP Addr
  src-mac                Src Mac Addr
  src-mixed-ip-port      Src IP Addr and TCP/UDP Port
  src-port               Src TCP/UDP Port
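
For the record, the currently active mode can be double-checked on the 6509 with:

show etherchannel load-balance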

I've also forgotten to mention that we have 2x 6509. We divided the services by the heads: head 1 (STORE) --> CIFS, iSCSI, FC; head 2 (HOOK) --> NFS.

Both heads are connected to both 6509s, but NDMP dumps are only done on STORE, so the second 6509 and head HOOK should not be involved in this scenario.

Config of head 2 (HOOK):

#Auto-generated by setup Wed Apr 29 11:58:41 CEST 2009

hostname HOOK

vif create lacp vif0 -b port e4a e4b e4c e4d

vif create lacp vif1 -b port e0a e0b

vif create single vif2 vif0 vif1

vlan create vif2 110

vif favor vif0

ifconfig vif2-110 `hostname`-vif2-110 netmask 255.255.255.0 partner vif2-110

ifconfig e0M `hostname`-e0M netmask 255.255.255.0 partner e0M

route add default 10.10.10.1 1

routed on

vif create lacp vif3 -b port e3a e3b

vif create lacp vif4 -b port e3c e3d

vif create single vif5 vif3 vif4

vif favor vif3

ifconfig vif5 10.0.81.102 netmask 255.255.255.0 partner vif5

options dns.enable on

options nis.enable off

savecore

strempel

Sorry, didn't get that you're already using "port".

As mentioned, one session will not use all links, it will only use one link. You have to schedule more sessions at the same time to utilize more than one link.
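
While several jobs are running, you can check the spread on the 6509 via the per-member counters (a sketch):

show interfaces port-channel 100 etherchannel
show interfaces gigabitEthernet 1/23 counters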

TMACSEVERN

(Why do you put each VIF entirely on one physical NIC? You should spread each VIF across physical NICs, so a single card failure doesn't take down the whole VIF.)

(You should not partner the e0M interface.)

Well, the first thing that stands out is how you create your VIFs on your NetApp:

hostname STORE

ifgrp create lacp vif0 -b ip e4a e4b e4c e4d

ifgrp create lacp vif1 -b ip e0a e0b

ifgrp create single vif2 vif0 vif1

ifgrp favor vif0

vlan create vif2 110

ifconfig vif2-110 `hostname`-vif2-110 netmask 255.255.255.0 partner vif2-110

ifconfig e0M `hostname`-e0M netmask 255.255.255.0

route add default 10.10.10.1 1

routed on

ifgrp create lacp vif3 -b ip e3a e3b

ifgrp create lacp vif4 -b ip e3c e3d

ifgrp create single vif5 vif3 vif4

ifgrp favor vif3

ifconfig vif5 10.0.81.101 netmask 255.255.255.0 partner vif5

options dns.enable on

options nis.enable off

savecore

On your switch, you might want to consider going back to the default of src-dst-ip for your load balancing.
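
That would be a one-liner in global config (a sketch):

conf t
port-channel load-balance src-dst-ip
end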

Not sure why you have all the extras in your stanzas, though. Mine lists a lot less (it doesn't spit out the default entries).

You also might want to check whether any errors are being generated at the endpoints and/or on the switch. I had a case where a bad wire/fiber was causing headaches.

magpiper2

What has already been stated about LACP is true, regardless of what your config is. The nature of aggregation usually limits you to one port per host.

An obvious question I have is: why are you using NDMP over Ethernet instead of Fibre Channel? I assume you don't have FC capability.

Should the backup window be too long, I recommend you switch to Fibre Channel. As an example: our backup windows went from 45+ hours (Ethernet) to 9 hours (NDMP over FC). I forget how many TBs of data it was, but that's irrelevant.

Best regards,

netapp_fh

Hi magpiper2,

Yes, true, NDMP via Ethernet.

FC is in the field, but only for virtualization, not for backup... difficult to explain; briefly speaking: no money. Doesn't matter.

I thought I could reduce the backup windows...

So is there no chance to push one NDMP stream above the Gigabit limit of one physical interface?

Would it help to assign more IP addresses to the vif, e.g. like the sketch below, and to start several NDMP jobs against all the addresses?
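
Just a rough idea (the alias addresses are made up):

ifconfig vif2-110 alias 10.80.10.201 netmask 255.255.255.0
ifconfig vif2-110 alias 10.80.10.202 netmask 255.255.255.0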

with regards..

daniel_kaiser

Hi netapp_fh,

Did you ever find a solution? We're running into a similar problem... slow NDMP backups using Backup Exec. The backup is only using one 1Gb interface in the vif.

Did you try assigning IP aliases to the vif and running multiple NDMP jobs at the same time?
