ONTAP Discussions

multimode_lacp port utilization imbalance

iamsam

Hi,
I have ifgrp a0a bonding e4a & e4b on each of my A800 nodes. The distr-func is set to "ip".
Port utilization is not balanced, as shown below:

[Screenshot from 2022-08-06: per-port utilization showing e4a and e4b out of balance]

Clients (OpenStack nodes) are in the same subnet as the ONTAP data network, connected through a Layer 2 switch with LACP configured in 802.3ad mode. The imbalance exists on both received and sent traffic. What can I do to distribute traffic equally between e4a and e4b?
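
For anyone wanting to reproduce the check, something like this should show the ifgrp configuration and the per-port counters (the node name is a placeholder; ifstat runs in the nodeshell):

::> network port ifgrp show -node node-01 -ifgrp a0a
::> node run -node node-01 -command ifstat e4a
::> node run -node node-01 -command ifstat e4b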


GM

Hi,

 

Is most of the traffic generated by a single client? Since you are using the "IP" load-balancing method, traffic from one client is unlikely to spread evenly:

"IP: Second-best load distribution method, since the IP addresses of both sender
(LIF) and client are used to deterministically select the particular physical link that a
packet traverses. Although deterministic in the selection of a port, the balancing is
performed using an advanced hash function. This has been found to work under a
wide variety of circumstances, but particular selections of IP addresses might still
lead to unequal load distribution."

 

If you do expect the workload to stay the same, you could add an extra LIF on the ifgrp and try to spread the workload at the protocol level (e.g. multichannel in SMB/NFS, or perhaps pNFS, MPIO in iSCSI, etc.). There's no guarantee it will actually split in this scenario either - it depends on the IP addresses involved.
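
As a rough sketch, a second data LIF homed on the ifgrp could be created like this (the SVM name, LIF name, address and service policy below are placeholders for your environment):

::> network interface create -vserver svm1 -lif data_lif2 -service-policy default-data-files -home-node node-01 -home-port a0a -address 192.0.2.12 -netmask 255.255.255.0

With two LIFs (two IPs) on the same ifgrp, the IP hash at least has a chance of picking different member ports for different client/LIF pairs.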

 

Also, with SMB multichannel I noticed in my cluster that it is not even trying to use the other LIF on the ifgrp, so I'm planning to break the ifgrp on mine and let multichannel manage it fully...

TMACMD

I generally use the "port" distribution function instead of "ip"; it usually works better.
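
As far as I know the distr-func can't be changed on an existing ifgrp, so switching to "port" means migrating the LIFs off, then deleting and recreating the group - roughly like this (node name is a placeholder):

::> network port ifgrp delete -node node-01 -ifgrp a0a
::> network port ifgrp create -node node-01 -ifgrp a0a -distr-func port -mode multimode_lacp
::> network port ifgrp add-port -node node-01 -ifgrp a0a -port e4a
::> network port ifgrp add-port -node node-01 -ifgrp a0a -port e4b

The a0a port will also need to go back into the right broadcast domain and the LIFs reverted home afterwards.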

iamsam

Changing the distr-func makes a difference. The "sequential" method is good, but error packets start showing up on the ifgrp member ports - most likely the network switches don't like it!
It will take some time to find the best option for my use case (OpenStack nodes).
Thanks a lot, GM & TMACMD.
