Network and Storage Protocols

nblade.execsOverLimit: The number of in-flight requests from client with source IP x.x.x.x

TSunsLV
9,878 Views

After upgrading to 9.9.1, the event log is full of these errors:

ERROR nblade.execsOverLimit: The number of in-flight requests from client with source IP x.x.x.x to destination LIF x.x.x.x (Vserver 18) is greater than the maximum number of in-flight requests allowed (128). The client might see degraded performance due to request throttling.

 

I have already made the changes that the NetApp KB suggests on the NetApp side, but I can't find which settings I need to change on VMware ESXi.

Is NFS.MaxQueueDepth = 64 the right setting?

The VMware version is 7.x.
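For reference, the host-wide value can be checked and set from the ESXi CLI like this (a sketch; the option path and the value 64 are just what I have been trying):

esxcli system settings advanced list -o /NFS/MaxQueueDepth
esxcli system settings advanced set -o /NFS/MaxQueueDepth -i 64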

1 ACCEPTED SOLUTION

hmoubara
9,821 Views

From the VMware side, you are correct: you will need to set the MaxQueueDepth parameter for each share from esxcli.
Example:

[root@esx1:~] esxcli storage nfs param get -v <volume_name>
Volume Name  MaxQueueDepth  MaxReadTransferSize  MaxWriteTransferSize
-----------  -------------  -------------------  --------------------
volume_name     4294967295               131072                131072

[root@esx1:~] esxcli storage nfs param set -v <volume_name> -q 128

[root@esx1:~] esxcli storage nfs param get -v <volume_name>
Volume Name  MaxQueueDepth  MaxReadTransferSize  MaxWriteTransferSize
-----------  -------------  -------------------  --------------------
volume_name            128               131072                131072
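If you have many datastores, a small loop in the ESXi shell can apply the same value to each (a sketch; it assumes NFSv3 datastore names contain no spaces and that the param namespace is available on your build):

[root@esx1:~] for vol in $(esxcli storage nfs list | awk 'NR>2 {print $1}'); do esxcli storage nfs param set -v "$vol" -q 128; done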

Hope this answers your question.

Thanks


6 REPLIES


jtownsen
9,764 Views

Also be aware that NFS.MaxQueueDepth is only used with NFSv3. If you use NFSv4, you'll need to modify the ONTAP setting v4.x-session-num-slots. We suggest setting it no higher than 128.

nas-cm98::*> nfs show -vserver jt98 -fields v4.x-session-num-slots
vserver v4.x-session-num-slots
------- ----------------------
jt98    180
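If you need to lower it, a minimal sketch using the same vserver as in the output above (the ::*> prompt indicates advanced privilege; existing NFSv4.x sessions may need to reconnect before they pick up the new slot count):

nas-cm98::*> nfs modify -vserver jt98 -v4.x-session-num-slots 128
nas-cm98::*> nfs show -vserver jt98 -fields v4.x-session-num-slots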

ajs
9,641 Views

We are seeing this also on ESXi 7 + ONTAP 9.8P4. VMware support told me to set SunRPC.MaxConnPerIP and NFS.MaxQueueDepth to 32 (in the UI), but this did not help. When I look at the values for the volumes with esxcli, they are still at the default (4294967295).

I presume 'esxcli storage nfs param set -v <volume_name> -q 128' will require a reboot of the host?

hmoubara
9,509 Views

@ajs 
Yes, it will require a reboot of the host for the change to take effect.
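A typical sequence looks something like this (a sketch; it assumes the host's VMs have already been migrated off or powered down, and the --reason text is free-form but required by esxcli):

[root@esx1:~] esxcli system maintenanceMode set --enable true
[root@esx1:~] esxcli system shutdown reboot --reason "Apply NFS MaxQueueDepth change"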

unixnation
8,246 Views

Hi,

Also seeing this on a new 9.9.1 AFF system with only 3 VMware ESXi 7.0.x hosts connected.

So it looks like the same maximum queue depth should be configured at both ends, VMware and ONTAP? That seems to be implied by TR-4067 and this discussion, but it isn't stated explicitly...
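For example, matching both ends at 128 would presumably look like this, recapping the commands from earlier in the thread (volume and vserver names are placeholders):

# On each ESXi host, per NFSv3 datastore:
esxcli storage nfs param set -v <volume_name> -q 128

# On ONTAP, for NFSv4.x clients (advanced privilege):
nfs modify -vserver <vserver_name> -v4.x-session-num-slots 128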

Thanks,
Steve

unixnation
8,205 Views

Answering my own question by reading this thread properly, and also finding that all the NetApp-recommended ESXi host settings are documented here: https://docs.netapp.com/us-en/netapp-solutions/virtualization/vsphere_ontap_recommended_esxi_host_and_other_ontap_settings.html

Cheers,

Steve
