ONTAP Discussions

Create Interface Group MODE LACP

JuanCarlosrl

I have created an interface group (a0a) in which I selected ports e0c (10Gb/s) and e0d (10Gb/s),

with mode LACP and IP-based load distribution.

 

I asked the network technician to configure those two ports, which are on the same switch, as a layer-3 (IP-hash) LACP port channel.

 

The problem is that the LACP group appears to have been created correctly, but in System Manager, when I select port a0a (the ifgrp I created), it reports a speed of 10Gb/s.

Shouldn't it show 20Gb/s? Is it correct for LACP port a0a to report 10Gb/s?

 

If that is wrong, how should I configure LACP so that it behaves as a 20Gb/s aggregate?

 

controller1::> network port ifgrp show -node controller1 -ifgrp a0a

Node: controller1
Interface Group Name: a0a
Distribution Function: ip
Create Policy: multimode_lacp
MAC Address: d2:39:ea:20:e7:05
Port Participation: full
Network Ports: e0c, e0d
Up Ports: e0c, e0d
Down Ports: -
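
For reference, an interface group with these settings is typically created with commands along these lines (a sketch using the node and port names from the output above; verify the exact syntax against your ONTAP version):

```
controller1::> network port ifgrp create -node controller1 -ifgrp a0a -distr-func ip -mode multimode_lacp
controller1::> network port ifgrp add-port -node controller1 -ifgrp a0a -port e0c
controller1::> network port ifgrp add-port -node controller1 -ifgrp a0a -port e0d
```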

 

controller1::> net port show -node controller1
(network port show)

Node: controller1
                                                  Speed(Mbps) Health
Port      IPspace      Broadcast Domain Link MTU  Admin/Oper  Status
--------- ------------ ---------------- ---- ---- ----------- --------
a0a       Default      Broadcast        up   1500 auto/-      healthy

 

The CLI does not show the negotiated speed, but System Manager shows 10Gb/s.

 

Is something wrong? If everything is fine, why doesn't it show 20Gb/s?

 

Thank you!!

 

 

1 ACCEPTED SOLUTION

paul_stejskal

This is how bonding of interfaces works: it doesn't aggregate the throughput of a single stream, but rather provides multiple lanes for different Ethernet conversations. For example, if you have two hosts each communicating at 10Gb/s to storage, they will both get 10Gb/s, and the controller will do 2x10Gb/s in aggregate. However, a single host will max out at 10Gb/s. The only way around that is a technology that supports multiple IP addresses. As far as I am aware, CIFS and NFS do not at this point in time (I can't remember whether SMB Multichannel or the newer NFSv4 protocol do), but iSCSI supports multipathing.

 

Basically, think of it as redundant 10Gb/s links, not a single combined link. This is a common point of confusion.
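
The per-conversation behavior described above can be sketched in a few lines. This uses a hypothetical hash (not ONTAP's actual algorithm): with IP-based load distribution, the switch/controller maps each src/dst IP pair to exactly one member link, so a single host-to-storage conversation never exceeds one link's 10Gb/s.

```python
import hashlib

LINKS = ["e0c", "e0d"]  # the two 10Gb/s member ports of ifgrp a0a

def pick_link(src_ip: str, dst_ip: str) -> str:
    """Deterministically map one src/dst IP pair to one member link.

    Illustrative hash only -- real LACP hashing is implementation-specific.
    """
    digest = hashlib.md5(f"{src_ip}->{dst_ip}".encode()).digest()
    return LINKS[digest[0] % len(LINKS)]

# The same host/storage pair always hashes to the same physical link,
# so one host's traffic is capped at that link's 10Gb/s...
assert pick_link("10.0.0.1", "10.0.0.100") == pick_link("10.0.0.1", "10.0.0.100")

# ...while different hosts may (but are not guaranteed to) spread across
# both links, which is how two clients can drive 2x10Gb/s in aggregate.
for host in ("10.0.0.1", "10.0.0.2", "10.0.0.3"):
    print(host, "->", pick_link(host, "10.0.0.100"))
```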


4 REPLIES


aborzenkov

@paul_stejskal wrote:

if you have a single host, it will max out at 10Gb/s.


If the load balancing uses L4 information (port-based load balancing in NetApp terms), different simultaneous connections from the same host may go via different physical interfaces. A single connection is still limited to the speed of one interface. The switches must, of course, be configured to use the same hashing algorithm.
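
The difference this makes can be sketched by extending the hash input with TCP ports (again a hypothetical hash, not the switch's real algorithm): two connections from the same host, differing only in source port, may now land on different member links.

```python
import hashlib

LINKS = ["e0c", "e0d"]  # the two 10Gb/s member ports

def pick_link_l4(src_ip: str, dst_ip: str, src_port: int, dst_port: int) -> str:
    """Map a full 4-tuple to one member link (illustrative hash only)."""
    key = f"{src_ip}:{src_port}->{dst_ip}:{dst_port}".encode()
    return hashlib.md5(key).digest()[0] % len(LINKS) and LINKS[1] or LINKS[0]

# Same host, several connections with different source ports: each single
# connection is still capped at one link's speed, but together the
# connections may use both links.
for sport in (50001, 50002, 50003, 50004):
    print(sport, "->", pick_link_l4("10.0.0.1", "10.0.0.100", sport, 2049))
```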

JuanCarlosrl

So if I have understood correctly, I will never see 20Gb/s in the configuration; 10Gb/s is what appears. Is that correct? With two 10Gb/s ports in an LACP group, how does it behave? Does it fill one path first and then use the other, or will I get at most 10Gb/s despite having LACP?

 

Thanks,

It depends on your LACP configuration, but basically that's pretty close to correct.
