Network and Storage Protocols
Hello Team,
We are using NVMe over TCP with a NetApp ONTAP backend.
On the host side, the network interfaces are configured as an LACP bond (802.3ad) with 2 × 25 Gbps NICs.
I would like to confirm my understanding of how NVMe-TCP traffic utilizes bonded interfaces:
NVMe/TCP establishes one TCP connection per I/O queue pair (each pair consisting of a submission queue and a completion queue), so a host typically opens multiple TCP connections to the target.
With LACP bonding, traffic distribution is based on a hashing algorithm (e.g., src/dst IP and ports).
As a result, each individual TCP flow is pinned to a single physical link within the bond.
Therefore, a single NVMe queue / TCP connection is limited to the bandwidth of one physical NIC (25 Gbps), and the aggregate 50 Gbps bandwidth is only achievable when multiple flows are distributed across both links.
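To illustrate the flow-pinning point, here is a deliberately simplified sketch of a layer3+4-style transmit hash (the kernel's real bonding hash also folds in the IP addresses and uses more bits; `NUM_SLAVES`, the function name, and the port numbers are illustrative placeholders). Two NVMe/TCP queue connections that differ only in source port can land on different physical links:

```shell
#!/bin/bash
# Simplified illustration of a layer3+4 transmit hash: each TCP flow
# hashes to exactly one slave link, so one flow never exceeds one NIC.
NUM_SLAVES=2

flow_to_slave() {   # args: src_port dst_port -> slave index
  local sport=$1 dport=$2
  echo $(( (sport ^ dport) % NUM_SLAVES ))
}

# Two NVMe/TCP queue connections to the standard port 4420,
# differing only in the host-side ephemeral port:
echo "queue 1 (sport 51001) -> slave $(flow_to_slave 51001 4420)"
echo "queue 2 (sport 51002) -> slave $(flow_to_slave 51002 4420)"
```

With multiple queue connections hashed this way, the aggregate can approach 50 Gbps, while any single connection stays within one 25 Gbps link.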
Is this understanding correct from an ONTAP and NVMe-TCP perspective?
Additionally, are there any ONTAP-specific best practices or recommendations to ensure optimal link utilization and minimize latency for NVMe-TCP in bonded NIC environments?
Thank you for your guidance.
----
Thank you for your response. Where exactly is this "IO Queue Count" setting applied — in the ONTAP NVMe subsystem / host NQN screen on the NetApp array, or on the host side (in our case, a Linux VM)?
It is applied on the ONTAP side, via the vserver nvme subsystem host add command (see the example below); the option may require advanced or diagnostic privilege level.
Note: change the default values with moderation, so that a single host does not end up consuming all of the controller's queue capacity. You can use the following at host add:
vserver nvme subsystem host add -subsystem subsys -host-nqn nqn.1992-08.com.netapp:sn.15f80c2d069a11f09c5bd039ea979f76:subsystem.hostnqn -priority high
There are two options for -priority, high and regular; high grants the host a larger I/O queue count.
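On the Linux host side, you can then ask for more I/O queues at connect time and check what the controller actually granted. A hedged sketch — the IP address and queue count below are placeholders, and the controller name (nvme0) depends on your system:

```shell
# Request more I/O queues when connecting (nvme-cli); the target can
# grant fewer than requested, depending on the ONTAP-side priority.
nvme connect -t tcp -a 192.168.10.10 -s 4420 \
  -n nqn.1992-08.com.netapp:sn.15f80c2d069a11f09c5bd039ea979f76:subsystem.hostnqn \
  --nr-io-queues 8

# Inspect the queue count the controller actually granted:
cat /sys/class/nvme/nvme0/queue_count
```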
Thank you, got it. In our case we also need to enable multiqueue for virtio — meaning multiqueue on both the hypervisor (virtio) side and the NVMe side.
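For reference, a sketch of the guest-side step, assuming virtio-net (the interface name and queue count are placeholders, and the hypervisor must grant at least that many queues, e.g. via a libvirt `<driver queues='N'/>` attribute on the virtio interface):

```shell
# Inside the Linux VM: request 4 combined (rx+tx) queues on the
# virtio-net interface so the NVMe/TCP connections spread across vCPUs.
ethtool -L eth0 combined 4

# Verify the current and maximum queue counts:
ethtool -l eth0
```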