Network and Storage Protocols

Problem while configuring VIF between NAS Server and Cisco Switch

JMDWQJMDWQ

Hi,

Currently I am trying to configure a VIF between the NAS server and the network switch (4 links in an aggregate).

*** status on server ***

default: transmit 'IP Load balancing', VIF Type 'multi_mode', fail 'log'

vf1: 4 links, transmit 'IP Load balancing', VIF Type 'multi_mode' fail 'default'

     VIF Status    Up     Addr_set

All interfaces are up now.
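(For reference, a multi_mode VIF like this is normally created with commands along these lines; the physical interface names e0a-e0d and the IP address below are just placeholders, not the actual values on my system.)

filer> vif create multi vf1 -b ip e0a e0b e0c e0d
filer> ifconfig vf1 192.168.1.10 netmask 255.255.255.0 up

(with the same lines added to /etc/rc so the VIF survives a reboot)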

*** status on switch ***

Group  Port-channel  Protocol    Ports
------+-------------+-----------+---------------------------------
1      Po1(SU)          -        Gi2/1(P)    Gi2/2(P)    Gi2/3(P)
                                 Gi2/4(P)

All interfaces are up now (including the member ports and the port-channel).
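(The matching port-channel on the switch was configured roughly like this; the interface range follows the output above, and 'mode on' is used because the VIF is a static multi_mode aggregate rather than LACP.)

switch(config)# interface range GigabitEthernet 2/1 - 4
switch(config-if-range)# channel-group 1 mode on
switch(config-if-range)# end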

The network connection is up now, with traffic flowing over the 4 gigabit ports.

What I am wondering is why the speed is only 1 Gb instead of the 4 Gb I expected while copying data in the test (several servers connecting to the NAS server at the same time). Can anyone help me with this issue?

Your help will be greatly appreciated!

Regards,

Brian

4 REPLIES

rwelshman

Where are you seeing the 1 Gbps throughput? On the VIF on the filer?

Where are you checking the throughput?

JMDWQJMDWQ

Hi Riley,

Thanks for your reply.

It's on the filer, by running sysstat (see attachment). What we did for the test was to have several servers copy data from the NAS server at the same time and then check the throughput.

Brian

seacliff1

As far as I know, NetApp can't use all four NICs as one big NIC and aggregate all of the bandwidth for a single connection.

So you'll have at most 1 Gb/s of throughput per server.

Also, you are using IP load balancing, so the NetApp will split the client IP addresses across the 4 ports (since you're using 4 ports) and blindly use the corresponding port for each IP:

P1     P2     P3     P4
x.1    x.2    x.3    x.4
x.5    x.6    x.7    x.8
... and so on.

So that means IPs x.1 and x.5 share the same port, and the same throughput, even if all the other ports are free.

For your test, I guess you would need to make sure the client IP addresses map to different ports, so that each server in the test ends up on its own link.
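If you want to confirm how the traffic is being spread, you can watch the per-link counters on the filer while the copies are running, with something like this (assuming the VIF is still named vf1):

filer> vif stat vf1 1

If all the test servers happen to hash to the same link, you'll see one member interface carrying almost all of the traffic while the others stay idle.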

vince_labua

Aggregating links (multi-mode VIF / ifgrp) is designed to provide more aggregate bandwidth. What protocol are you using?

Use this to confirm what load-balancing method the switch is currently using:

switch#show etherchannel load-balance

You can use this to set it to whatever suits the nature of your traffic:

switch(config)#port-channel load-balance src-dst-ip

Your MAC on the MultiMode VIF will be derived from a NIC on the system... 02:........

You can have multiple IP addresses on this VIF however.  Therefore, you need to decide which is best for the environment.

If you are using iSCSI, stay away from a multi-mode VIF. Because of the way iSCSI handles sessions at the protocol level, this will only create more of a headache and jeopardize performance.

If the source of the data stream you are measuring with is only a 1 Gb link, you are not going to be able to fill the 4 Gb VIF with enough traffic. You would need about 8 hosts to see over 2.5 Gb of traffic on the VIF, accounting for overhead.
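To see whether the aggregate is actually being used, it is worth watching both ends during the test. On the filer, something like sysstat -u 1 (or sysstat -x 1) shows the overall network throughput; on the switch, the port-channel and member-port rates can be checked with, for example:

switch# show interfaces port-channel 1
switch# show etherchannel 1 summary

(Exact command output varies by IOS version, so treat these as a rough sketch.)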

Some References for you:

http://now.netapp.com/NOW/knowledge/docs/ontap/rel724/html/ontap/nag/7vifs3.htm#1195956

https://kb.netapp.com/support/index?page=content&id=3011657 (see link to Network Administration Guide)

http://blog.scottlowe.org/2007/06/13/cisco-link-aggregation-and-netapp-vifs/
