Network and Storage Protocols

Slow network connection

discretixit
22,634 Views

Hi,

I have a NetApp FAS3020 and a Cisco 4560; they are connected with three 1 Gb cables in an EtherChannel (no LACP). When I copy a 10 GB file it takes ~10 minutes. The destination is a Windows Server 2003 machine, also connected to the 4560 at 1 Gb. What is the problem, and what is the best way to connect the NetApp to the Cisco switch?

32 REPLIES

danbrown1
19,802 Views

You've got a problem there someplace: 10 GB in 10 minutes is, by my calculation, 10240 MB / 600 seconds = about 17 MB/s, which is less than 15% of the ~125 MB/s theoretical maximum of gigabit Ethernet. See: http://en.wikipedia.org/wiki/List_of_device_bandwidths

There are a lot of places you could start looking for the cause. I'd start by making sure the network settings are all kosher: check duplex and speed settings on all ports, etc. (a couple of switch-side commands are sketched below). Maybe something is set to Fast Ethernet (100 Mbps)?

NetApp Network best practices here: https://now.netapp.com/Knowledgebase/solutionarea.asp?id=kb8454
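
For example, a quick way to spot a speed/duplex mismatch from the switch side is something like this (a minimal sketch, assuming Cisco IOS on the 4500 series and a hypothetical interface name):

show interfaces status
show interfaces GigabitEthernet2/1

The first command lists the negotiated speed and duplex per port (look for anything stuck at 100 or half duplex); the second shows the input/output error, CRC and collision counters for a single port.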

discretixit
19,802 Views

Hi,

Thanks. I checked the settings on the server, and it shows auto-negotiate at 1000 Mbps.

danielpr
18,886 Views

A 1 Gig interface is a shared medium at the Ethernet layer. I think the traffic on it may be high, and that is the reason for the delay.

Thanks

Daniel

discretixit
18,886 Views

Hi,

I didn't understand. Could you please explain?

danielpr
18,886 Views

You can check the wiki here for the shared-medium details: http://en.wikipedia.org/wiki/Ethernet#CSMA.2FCD_shared_medium_Ethernet

Collisions often reduce throughput; in your case you may be using the same interface for all of the data and management traffic.

You could dedicate an interface to the high-throughput traffic.

Please also check on the Windows machine that the connected interface reports a speed of 1.0 Gbps (a quick check is sketched below).
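
One quick way to read the NIC speed on Windows Server 2003 without going through the GUI is WMI from the command line (a sketch; the adapter names will differ on your box):

wmic nic get Name,Speed

The Speed column is reported in bits per second, so a healthy gigabit link should show 1000000000.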

Thanks

Daniel

danbrown1
19,802 Views

Have you checked the settings on the switch for both the NetApp and the Windows server?

If you haven't already, you may want to draw a picture of the setup (even if it is as simple as it sounds) and then start checking each connection, each link, each port, each setting, working to eliminate possible problem areas. Be objective; don't make assumptions. If the settings are all correct, start looking deeper: are there errors? Collisions? Dropped packets? Other bottlenecks? Is the switch overloaded? Are CPUs overloaded? Etc. (A quick check for the Windows end is sketched below.)
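
For the Windows end, a rough sanity check for errors and discards (assuming only the built-in tools on Server 2003) is:

netstat -e

which prints the interface's Ethernet statistics, including errors and discards. On the Cisco side the equivalent counters appear in the "show interfaces" output for each port in the channel.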

I recently helped a site that was having trouble reaching the RLM ports on their filers. Everything appeared to be set up correctly on the NetApp: routing, IP addresses, etc. It turned out that a network administrator had locked all of the ports on their Cisco switches at 1000 Mbps, full duplex. RLM ports are only 100 Mbps (Fast Ethernet) ports, and so couldn't talk to the switch at 1000 Mbps. My point here is that you can't assume things will work just because they should. The fix was simple, but it required looking in the right place to figure out what needed to happen, and it took a process of elimination to identify that right place to look.

discretixit
19,802 Views

Hi,

I checked the CPU: no problem (peak 46%).

The Cisco ports are set to auto speed and duplex, and PortFast is disabled.

chriskranz
19,802 Views

You say the link aggregate (VIF) is over 3 ports? I saw an issue similar to this the other day where a VIF created with 3 ports going into a Cisco switch caused a problem. If this was changed to 1, 2 or 4 ports, there was no issue at all, and everything worked as expected.

Try simply unplugging one of the ports (so you have 2 active) and running the tests again.

discretixit
19,802 Views

Hi,

I unplugged one of the ports and tried the copy again: same problem...

chriskranz
18,636 Views

If you run "ifstat -a" on the filer, you'll get a printout of all the stats for the various interfaces. Among other things, this can show you errors, and also whether traffic is biased towards any of the interfaces (due to the load-balancing algorithm, or possible problems with it).

Have a look through the output, or possibly post it here and I can cast my eyes over it for you.

The other thing to try is reducing the VIF to a single port. This will almost totally rule out any issues with the teaming used on either side (NetApp or Cisco).

Do you know if the MTU has been changed at all? The default is 1500, but it is often changed to 9000. (A quick way to check from the filer is sketched below.)
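
To check the MTU and the VIF membership from the filer, something like this should do (a sketch, from memory of 7-mode syntax, so double-check against your ONTAP version):

ifconfig vif0
vif status vif0

ifconfig shows the mtu value on the interface's first line; vif status shows the balancing mode, the member links and their state. If you do drop to a single link for a test, the VIF has to be recreated (bring vif0 down, destroy it, then recreate it with one member such as e0a), and /etc/rc has to match or the change is lost on reboot.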

discretixit
18,636 Views

ifstat -a

-- interface  e0a  (187 days, 9 hours, 24 minutes, 15 seconds) --

RECEIVE
Frames/second:      40  | Bytes/second:     7874  | Errors/minute:       0
Discards/minute:     0  | Total frames:    33266m | Total bytes:     12762g
Total errors:        0  | Total discards:    155  | Multi/broadcast:     0
No buffers:        155  | Non-primary u/c:     0  | Tag drop:            0
Vlan tag drop:       0  | Vlan untag drop:     0  | CRC errors:          0
Runt frames:         0  | Fragment:            0  | Long frames:         0
Jabber:              0  | Alignment errors:    0  | Bus overruns:        0
Queue overflows:     0  | Xon:             14266k | Xoff:            14266k
Jumbo:               0  | Reset:               0  | Reset1:              0
Reset2:              0
TRANSMIT
Frames/second:     543  | Bytes/second:      655k | Errors/minute:       0
Discards/minute:     0  | Total frames:    36769m | Total bytes:     14165g
Total errors:        0  | Total discards:      0  | Multi/broadcast: 31691
Queue overflows:     0  | No buffers:          0  | Max collisions:      0
Single collision:    0  | Multi collisions:    0  | Late collisions:     0
Timeout:             0  | Xon:                 0  | Xoff:                0
Jumbo:               0
LINK_INFO
Current state:       up | Up to downs:        15  | Auto:                on
Speed:            1000m | Duplex:            full | Flowcontrol:       full


-- interface  e0b  (187 days, 9 hours, 24 minutes, 15 seconds) --

RECEIVE
Frames/second:       0  | Bytes/second:        0  | Errors/minute:       0
Discards/minute:     0  | Total frames:    15157m | Total bytes:       999g
Total errors:        0  | Total discards:      0  | Multi/broadcast:   182k
No buffers:          0  | Non-primary u/c:     0  | Tag drop:            0
Vlan tag drop:       0  | Vlan untag drop:     0  | CRC errors:          0
Runt frames:         0  | Fragment:            0  | Long frames:         0
Jabber:              0  | Alignment errors:    0  | Bus overruns:        0
Queue overflows:     0  | Xon:                12  | Xoff:               13
Jumbo:               0  | Reset:               0  | Reset1:              0
Reset2:              0
TRANSMIT
Frames/second:       0  | Bytes/second:        0  | Errors/minute:       0
Discards/minute:     0  | Total frames:    46965m | Total bytes:     70249g
Total errors:        0  | Total discards:      0  | Multi/broadcast: 24492
Queue overflows:     0  | No buffers:          0  | Max collisions:      0
Single collision:    0  | Multi collisions:    0  | Late collisions:     0
Timeout:             0  | Xon:                 0  | Xoff:                0
Jumbo:               0
LINK_INFO
Current state:       up | Up to downs:        35  | Auto:                on
Speed:            1000m | Duplex:            full | Flowcontrol:       full


-- interface  e0c  (187 days, 9 hours, 24 minutes, 15 seconds) --

RECEIVE
Frames/second:       8  | Bytes/second:     1136  | Errors/minute:       0
Discards/minute:     0  | Total frames:    38567m | Total bytes:     14734g
Total errors:        0  | Total discards:    137  | Multi/broadcast:     0
No buffers:        137  | Non-primary u/c:     0  | Tag drop:            0
Vlan tag drop:       0  | Vlan untag drop:     0  | CRC errors:          0
Runt frames:         0  | Fragment:            0  | Long frames:         0
Jabber:              0  | Alignment errors:    0  | Bus overruns:        0
Queue overflows:     0  | Xon:             13447k | Xoff:            13447k
Jumbo:               0  | Reset:               0  | Reset1:              0
Reset2:              0
TRANSMIT
Frames/second:     531  | Bytes/second:      637k | Errors/minute:       0
Discards/minute:     0  | Total frames:    36680m | Total bytes:     14128g
Total errors:        0  | Total discards:      0  | Multi/broadcast:  8423
Queue overflows:     0  | No buffers:          0  | Max collisions:      0
Single collision:    0  | Multi collisions:    0  | Late collisions:     0
Timeout:             0  | Xon:                 0  | Xoff:                0
Jumbo:               0
LINK_INFO
Current state:       up | Up to downs:        17  | Auto:                on
Speed:            1000m | Duplex:            full | Flowcontrol:       full


-- interface  e0d  (187 days, 9 hours, 24 minutes, 15 seconds) --

RECEIVE
Frames/second:     947  | Bytes/second:      106k | Errors/minute:       0
Discards/minute:     0  | Total frames:    36227m | Total bytes:     13249g
Total errors:        0  | Total discards:    109  | Multi/broadcast:     0
No buffers:        109  | Non-primary u/c:     0  | Tag drop:            0
Vlan tag drop:       0  | Vlan untag drop:     0  | CRC errors:          0
Runt frames:         0  | Fragment:            0  | Long frames:         0
Jabber:              0  | Alignment errors:    0  | Bus overruns:        0
Queue overflows:     0  | Xon:             15037k | Xoff:            15037k
Jumbo:               0  | Reset:               0  | Reset1:              0
Reset2:              0
TRANSMIT
Frames/second:     524  | Bytes/second:      644k | Errors/minute:       0
Discards/minute:     0  | Total frames:    36663m | Total bytes:     14124g
Total errors:        0  | Total discards:      0  | Multi/broadcast: 10206
Queue overflows:     0  | No buffers:          0  | Max collisions:      0
Single collision:    0  | Multi collisions:    0  | Late collisions:     0
Timeout:             0  | Xon:                 0  | Xoff:                0
Jumbo:               0
LINK_INFO
Current state:       up | Up to downs:        17  | Auto:                on
Speed:            1000m | Duplex:            full | Flowcontrol:       full


-- interface  lo  (187 days, 9 hours, 23 minutes, 49 seconds) --

RECEIVE
Packets:         22488k | Bytes:           45935m | Errors:              0
Queue full:          0
TRANSMIT
Packets:         22488k | Bytes:           45935m | Errors:              0
Collisions:          0


-- interface  vh  (187 days, 9 hours, 23 minutes, 49 seconds) --

RECEIVE
Packets:             0  | Bytes:               0  | Errors:              0
Queue full:          0
TRANSMIT
Packets:             0  | Bytes:               0  | Errors:              0
Collisions:          0


-- interface  vif0  (187 days, 9 hours, 23 minutes, 33 seconds) --

RECEIVE
Total frames:      687m | Frames/second:     995  | Total bytes:     40746g
Bytes/second:      115k | Multi/broadcast: 69276k
TRANSMIT
Total frames:     2739m | Frames/second:    1597  | Total bytes:     42418g
Bytes/second:     1937k | Multi/broadcast: 50320

chriskranz
18,636 Views

That all looks mostly okay. You have no errors or collisions, very few discards, and most of the traffic looks perfectly fine. The only thing I noticed is a few (although not hundreds of) "Up to downs", showing that the interfaces have gone down a number of times. This may be from your testing, but it could also mean the ports are flapping (being taken down and brought up by the switch).

I know you have already given some details on this, but can you double-check the VIF mode on the filer, and how the link aggregation is set up on the Cisco side, please? Can you also double-check which ports are in the VIF, and that they go into the correct ports on the Cisco? I know this is basic stuff, but it is often useful to take a step back and completely rule it out. (The switch-side commands are sketched below.)
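
On the Cisco side, something like this should show how the channel is actually built (a sketch, assuming IOS and a hypothetical port-channel number):

show etherchannel summary
show etherchannel load-balance
show running-config interface Port-channel1

The summary shows which physical ports are bundled and whether they are up in the channel; load-balance shows the hashing method (src-mac, src-dst-ip, etc.) so you can compare it with the balancing mode on the VIF.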

discretixit
18,636 Views

Hi,

I checked the ports on the Cisco; they are connected in the correct places.

vif status vif0
default: transmit 'IP Load balancing', VIF Type 'multi_mode', fail 'log'
vif0: 3 links, transmit 'Round-Robin Load balancing', VIF Type 'multi_mode' fail 'default'
         VIF Status     Up      Addr_set
        up:

I have also attached a screenshot of the Cisco EtherChannel configuration (2323.bmp).

Thanks

chriskranz
18,636 Views

You have the VIF configured with round-robin load balancing? I would recommend using either IP- or MAC-based hashing instead. Can you try changing it back to the default (source-based IP hash) and see if you still get the same issue, please? I think the round-robin setting may be conflicting with the Cisco configuration. (A rough outline of the change is below.)
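
For reference, the balancing mode on a 7-mode VIF is set when the VIF is created, so switching from round-robin to IP hash means recreating it, roughly like this (a sketch; check the syntax against your ONTAP version before running it):

ifconfig vif0 down
vif destroy vif0
vif create multi vif0 -b ip e0a e0c e0d
ifconfig vif0 <ip-address> netmask 255.255.0.0 up

and updating the "vif create" line in /etc/rc to match. On the Catalyst the corresponding global setting is usually "port-channel load-balance src-dst-ip" (option names vary a little by platform), so both ends hash on IP addresses.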

discretixit
18,636 Views

Hi,

I changed the VIF from round-robin to IP:

vif status vif0
default: transmit 'IP Load balancing', VIF Type 'multi_mode', fail 'log'
vif0: 3 links, transmit 'IP Load balancing', VIF Type 'multi_mode' fail 'default'
         VIF Status     Up      Addr_set
        up:

and the problem is the same (copying a file from the server to the NetApp runs at ~17 MB/s).

Any ideas?

danielpr
18,326 Views

Can you please share your /etc/rc file and "ifconfig -a" output?

Thanks

Daniel

discretixit
18,328 Views

Hi,

ifconfig -a
e0a: flags=848043<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        ether ******* (auto-1000t-fd-up) flowcontrol full
        trunked vif0
e0b: flags=4848043<UP,BROADCAST,RUNNING,MULTICAST,NOWINS> mtu 1500
        inet *.*.*.* netmask 0xffffff00 broadcast *.*.0.255
        ether ****** (auto-1000t-fd-up) flowcontrol full
e0c: flags=848043<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        ether ***** (auto-1000t-fd-up) flowcontrol full
        trunked vif0
e0d: flags=848043<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        ether **** (auto-1000t-fd-up) flowcontrol full
        trunked vif0
lo: flags=1948049<UP,LOOPBACK,RUNNING,MULTICAST,TCPCKSUM> mtu 9188
        inet 127.0.0.1 netmask 0xff000000 broadcast 127.0.0.1
vif0: flags=848043<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
        inet ***** netmask 0xffff0000 broadcast *.*.255.255
        ether *** (Enabled virtual interface)

rc output

#Regenerated by registry Tue Mar 10 16:01:30 IDT 2009
#Auto-generated by setup Thu Jan 19 08:10:19 IST 2006
hostname
vif create multi vif0 -b ip e0d e0c e0a
#Disabled# ifconfig e0a `hostname`-e0a mediatype auto flowcontrol full
ifconfig e0b `hostname`-e0b netmask 255.255.255.0 mediatype auto flowcontrol full -wins
ifconfig vif0 `hostname`-vif0 netmask 255.255.0.0 mtusize 1500 wins
ifconfig e0a mediatype auto
ifconfig e0c mediatype auto
ifconfig e0d mediatype auto
route add default 172.16.1.1 1
routed on
options dns.domainname domain.com
options dns.enable on
options nis.domainname domain
options nis.enable on
savecore
timezone Israel

danielpr
18,327 Views

Cohen,

Thanks for sharing the information; the problem looks a little different now. The MTU size and auto-speed settings appear correct on all three links, yet you are still seeing the problem, so we need to debug further. Can you please share the "netdiag" and "sysconfig -v" output?

Thanks

Daniel

discretixit
18,327 Views

Hi,

Thanks again.

danielpr
18,381 Views

Hi,

The output clearly says that the problem is on the other side of the network (switch and client system): the average CIFS packet size of 244 bytes is very low. You need to check the configuration of the interface "from host: ****". There is no issue on the NetApp storage system side.


>Average size of CIFS TCP packets received from host: **** is 244.
>This is less than the MTU (1500 bytes) of the interface involved in
>the data transfer.
>The maximum segment size being used by TCP for this host is: 1460.
>Low average size of packets received by this system might be
>because of a misconfigured client system, or a poorly written
>client application.
>Press enter to continue
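
A couple of quick checks on that Windows 2003 host are worth doing (a sketch, with a placeholder for the filer's address). First, verify that full-size frames reach the filer without fragmentation:

ping -f -l 1472 <filer-ip>

This sends a 1472-byte payload with the don't-fragment bit set and should succeed if both ends really use a 1500-byte MTU. Beyond that, a 244-byte average usually points at the client system or the client application, as the netdiag message suggests; the NIC driver and any teaming software on the client are also worth a look.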

Thanks

Daniel
