
Interface bonding setup correctly?


I'm trying to bond two interfaces together on my filer to create a 2 Gb/s link.  They connect to a Cisco 3750.  On the Cisco side I set up a port channel and added the two physical ports to it.  On the NetApp side I added those two interfaces to a vif.  On the Cisco side both ports show up as active, and traffic is flowing just fine.

But I'm noticing that I often spike one physical interface to nearly 100% (960 Mb/s), while the other interface never shows any traffic.  It seems like I'm peaking at 1 Gb/s, and the second link is never being used.

Is there anything I should look at to confirm my port channel is set up correctly for load balancing? Here's my Cisco config:

interface Port-channel1
description netapp-vif01
switchport mode access
interface Port-channel2
description netapp-vif02
switchport mode access

interface GigabitEthernet1/0/2
description netapp-e0a
switchport mode access
channel-group 1 mode active
interface GigabitEthernet1/0/3
description netapp-e0b
switchport mode access
channel-group 1 mode active
interface GigabitEthernet1/0/4
description netapp-e0c
switchport mode access
channel-group 2 mode active
interface GigabitEthernet1/0/5
description netapp-e0d
switchport mode access
channel-group 2 mode active



Re: Interface bonding setup correctly?


I believe part of the issue may be that interfaces on the NetApp are set up as multi-mode vifs by default, while on your switch I believe the default port-channel protocol is LACP.

You may need to edit your /etc/rc file:

You will see a line for every vif you created; it will look like:

vif create multi........

Change the multi to lacp (only if your switch is set up for LACP), save the /etc/rc changes, and restart your NetApp.
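For example, using the vif name and ports that appear later in this thread (vif01 on e0a and e0b), the edit would change the line from:

```
vif create multi vif01 -b port e0a e0b
```

to:

```
vif create lacp vif01 -b port e0a e0b
```

Only the mode keyword changes; the vif name, load-balancing option, and member ports stay as they were in your file.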

You can edit /etc/rc by mapping a drive to the root of vol0 (net use z: \\<netappdevice>\c$) and editing the file with Notepad. Make a copy first.

Re: Interface bonding setup correctly?


Thanks phitchcock.  But I checked my /etc/rc file and it looks correct.  Both my switch and my NetApp should be using LACP:


#Regenerated by registry Wed May 12 10:09:04 EDT 2010
#Auto-generated by setup Tue May 11 05:57:26 GMT 2010
hostname nas
vif create lacp vif01 -b port e0a e0b
vif create lacp vif02 -b port e0c e0d

From Cisco switch

show etherchannel 1 port-channel
                Port-channels in the group:

Port-channel: Po1    (Primary Aggregator)


Age of the Port-channel   = 55d:11h:58m:20s
Logical slot/port   = 10/1          Number of ports = 2
HotStandBy port = null
Port state          = Port-channel Ag-Inuse
Protocol            =   LACP

Ports in the Port-channel:

Index   Load   Port     EC state        No of bits
  0     00     Gi1/0/2 Active             0
  0     00     Gi1/0/3 Active             0

I was also reading this blog, which seems to indicate I have things set up correctly.  But down in the comments someone mentions "port-channel load-balance src-dst-mac", and another article mentions "port-channel load-balance src-dst-ip".  I don't have either defined; maybe that's the problem.  But I'm not sure which to use, or what the impact would be.
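For reference, on the 3750 the load-balance method is a single global setting, not something configured per port channel. A sketch of how it would be checked and changed (assuming enable/config access on the switch):

```
show etherchannel load-balance
configure terminal
 port-channel load-balance src-dst-ip
 end
```

Because the setting is global, it affects every EtherChannel on the switch, so it is worth checking the current method before changing it.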

Re: Interface bonding setup correctly?


It does appear that NetApp defaults to IP-based load balancing, so setting the Cisco side to src-dst-ip should work, since the Cisco default is MAC-based.


Re: Interface bonding setup correctly?


Keep in mind that even when you have it set up correctly, if you're pushing data from a single host, you will only use one link.  That's how EtherChannel works: it hashes on MAC or IP addresses, so all traffic for a given source/destination pair lands on the same physical link.

EtherChannel works best with many clients rather than just a few.
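To see why, here is a minimal sketch of src-dst-ip style hashing. This is a simplified model, not Cisco's actual hardware hash: the member link is picked from the XOR of the two addresses, so one host/filer pair always maps to the same link, while many clients spread across both.

```python
import ipaddress

def pick_link(src_ip: str, dst_ip: str, n_links: int = 2) -> int:
    """Pick a member link by XOR-ing the source and destination IPs
    (a simplified model of src-dst-ip hashing, not the real ASIC hash)."""
    src = int(ipaddress.ip_address(src_ip))
    dst = int(ipaddress.ip_address(dst_ip))
    return (src ^ dst) % n_links

# Many clients talking to one filer spread across both links...
clients = ["10.0.0.%d" % i for i in range(1, 9)]
print([pick_link(c, "10.0.0.100") for c in clients])
# → [1, 0, 1, 0, 1, 0, 1, 0]
# ...but any single client/filer pair always hashes to the same link,
# which is why one big transfer can never exceed one link's bandwidth.
```

The takeaway: the channel gives you 2 Gb/s of aggregate capacity across flows, not 2 Gb/s for any single flow.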


Re: Interface bonding setup correctly?


Thank you both, very helpful.

Adding port-channel load-balance src-dst-ip did indeed help.  It was previously balancing only on the destination, which was a single IP; hashing on the source as well now spreads the load of our source hosts evenly across both links.

But it's also true that if a single host is causing the spike, that traffic will still use a single link.
