Network and Storage Protocols

LACP & NVMe Volumes

arup_amtron

Hello Team,

We are using NVMe over TCP with a NetApp ONTAP backend.
On the host side, the network interfaces are configured as an LACP bond (802.3ad) with 2 × 25 Gbps NICs.

I would like to confirm my understanding of how NVMe-TCP traffic utilizes bonded interfaces:

  • NVMe-TCP uses multiple TCP connections (submission and completion queues).

  • With LACP bonding, traffic distribution is based on a hashing algorithm (e.g., src/dst IP and ports).

  • As a result, each individual TCP flow is pinned to a single physical link within the bond.

  • Therefore, a single NVMe queue / TCP connection is limited to the bandwidth of one physical NIC (25 Gbps), and the aggregate 50 Gbps bandwidth is only achievable when multiple flows are distributed across both links. (A quick check of how our bond hashes flows is shown below.)
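
For reference, below is roughly how we check what the bond is actually hashing on from the host side. The bond and NIC names are just examples from our environment, and the layer3+4 policy in the comment is an assumption about what is commonly used so that different TCP connections can hash to different links, not something we have confirmed as an ONTAP recommendation.

# Show bonding mode, LACP state and the transmit hash policy in use
grep -E "Bonding Mode|Transmit Hash Policy|Slave Interface" /proc/net/bonding/bond0

# Example bonding options (layer3+4 hashes on IP addresses + TCP ports,
# so separate NVMe-TCP connections can land on different member links):
#   options bonding mode=802.3ad xmit_hash_policy=layer3+4 miimon=100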

Is this understanding correct from an ONTAP and NVMe-TCP perspective?
Additionally, are there any ONTAP-specific best practices or recommendations to ensure optimal link utilization and minimize latency for NVMe-TCP in bonded NIC environments?

Thank you for your guidance.


jaikumar
  • Yes, NVMe-TCP uses multiple TCP connections for each subsystem (each of these connections is also called an NVMe controller).
  • Each of these TCP connections/controllers has I/O queues and an admin queue. By default, NetApp allows an I/O queue count of 2 and a queue depth of 128.
  • To optimize the NVMe-TCP configuration, you can increase the I/O queue count during the NVMe subsystem create / subsystem host add (i.e., use a non-default queue count value); a host-side sketch is included after this list.
  • In addition, if the platform is ASA r2, we support NVMe active/active, where the I/O paths from both the namespace-owning node and its HA partner are advertised as optimized paths, so the host can use both nodes of the HA pair.
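
To make the host side of this concrete, here is a rough sketch of how the queue count is influenced from a Linux initiator with nvme-cli. The IP address, port and NQN are placeholders, and --nr-io-queues only sets what the host requests; the array still caps it at whatever the subsystem/host configuration allows.

# Discover subsystems behind the SVM's NVMe-TCP LIF (placeholder address)
nvme discover -t tcp -a 192.168.10.10 -s 4420

# Connect and request more I/O queues; each I/O queue is a separate TCP
# connection, giving the LACP hash more flows to spread across the links
nvme connect -t tcp -a 192.168.10.10 -s 4420 -n nqn.1992-08.com.netapp:sn.xxxx:subsystem.subsys --nr-io-queues 8

# Confirm the resulting controllers and paths
nvme list-subsys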


arup_amtron

Thank you for your response. Where exactly is this "IO Queue Count" setting done: in the ONTAP NVMe subsystem or host NQN screen on the NetApp array, or on the host side (in our case a Linux VM)?

jaikumar (Accepted Solution)

It is available in the vserver nvme subsystem host add command (see the example below); depending on the ONTAP version it may only be available at the advanced or diagnostic privilege level.

Note: change the default values with moderation, so that a single host does not end up using all of the capacity. You can use the following command at host add:

vserver nvme subsystem host add -subsystem subsys -host-nqn nqn.1992-08.com.netapp:sn.15f80c2d069a11f09c5bd039ea979f76:subsystem.hostnqn -priority high

 

There are two options for -priority, high and regular; priority high comes with a higher queue count.
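
If it helps, once the host has been added you may be able to confirm what was applied with something like the command below. The SVM and subsystem names are placeholders, and whether the priority appears in the output can vary by ONTAP release, so treat this as a sketch rather than confirmed syntax.

# Show the hosts mapped to the subsystem, including their configured settings
vserver nvme subsystem host show -vserver svm1 -subsystem subsys -instance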

arup_amtron

Thank you, got it. Along with this, in our case we also need to use multiqueue for virtio, meaning multiqueue on both the hypervisor (virtio) side and the NVMe side.
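
For anyone else reading this later, a minimal sketch of the guest-side part of that check, assuming a virtio-net NIC in the Linux VM. The interface name and queue count are just examples, and the hypervisor must already expose multiple queues (for example via the libvirt <driver ... queues='N'/> attribute on the virtio interface).

# Show how many combined queues the virtio NIC exposes and how many are enabled
ethtool -l eth0

# Enable more queues, up to what the hypervisor exposes (example count)
ethtool -L eth0 combined 4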
