Network and Storage Protocols

nblade.execsOverLimit: The number of in-flight requests from client with source IP x.x.x.x

TSunsLV

After upgrading to ONTAP 9.9.1, the event log is full of these errors:

 

ERROR nblade.execsOverLimit: The number of in-flight requests from client with source IP x.x.x.x to destination LIF x.x.x.x (Vserver 18) is greater than the maximum number of in-flight requests allowed (128). The client might see degraded performance due to request throttling.

 

I have already made the changes the NetApp KB suggests on the NetApp side, but I can't figure out what settings I need to change on VMware ESX.

 

NFS.MaxQueueSize = 64, is this the one?

 

VMware is version 7.x.

1 ACCEPTED SOLUTION

hmoubara

From the VMware side, you are correct: you will need to set the MaxQueueDepth parameter for each share from ESXCLI.
Example:

[root@esx1:~] esxcli storage nfs param get -v <volume_name>
Volume Name  MaxQueueDepth  MaxReadTransferSize  MaxWriteTransferSize
-----------  -------------  -------------------  --------------------
volume_name     4294967295               131072                131072

[root@esx1:~] esxcli storage nfs param set -v <volume_name> -q 128

[root@esx1:~] esxcli storage nfs param get -v <volume_name>
Volume Name  MaxQueueDepth  MaxReadTransferSize  MaxWriteTransferSize
-----------  -------------  -------------------  --------------------
volume_name            128               131072                131072
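
Since the parameter is per datastore, you may want to apply it to every NFS v3 datastore on the host. A rough sketch from the ESXi shell, assuming datastore names contain no spaces and the usual two header lines in the list output:

# Sketch: set MaxQueueDepth to 128 on every mounted NFS v3 datastore (verify before use)
for ds in $(esxcli storage nfs list | awk 'NR>2 {print $1}'); do
  echo "Setting MaxQueueDepth on $ds"
  esxcli storage nfs param set -v "$ds" -q 128
done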

Hope this answers your question.

 

Thanks 


4 REPLIES

hmoubara

@ajs 
Yes, it will require a reboot of the host for the change to take effect.
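
If you prefer to do the reboot from the command line as well, a rough sequence (assuming the VMs on the host are already evacuated or powered off, since esxcli normally requires maintenance mode for a reboot) would be:

# Enter maintenance mode, reboot so the new NFS parameters take effect,
# then exit maintenance mode once the host is back up
esxcli system maintenanceMode set --enable true
esxcli system shutdown reboot --reason "Apply NFS MaxQueueDepth change"
esxcli system maintenanceMode set --enable false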

ajs

We are also seeing this on ESXi 7 + ONTAP 9.8P4. VMware support told me to set SunRPC.MaxConnPerIP and NFS.MaxQueueDepth to 32 (in the UI), but this did not help. When I look at the values for the volumes with esxcli, they are still at the default (4294967295).
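
Note that the host-wide advanced options set in the UI and the per-datastore parameters are reported separately; a quick way to compare them from the ESXi shell (option paths assumed from the UI names):

# Host-wide advanced options (what the UI changes)
esxcli system settings advanced list -o /NFS/MaxQueueDepth
esxcli system settings advanced list -o /SunRPC/MaxConnPerIP
# Per-datastore value (what "esxcli storage nfs param set" changes)
esxcli storage nfs param get -v <volume_name>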

 

I presume 'esxcli storage nfs param set -v <volume_name> -q 128' will require a reboot of the host?

jtownsen

Also be aware that NFS.MaxQueueDepth is only used with NFSv3. If you use NFSv4, you'll need to modify the ONTAP setting v4.x-session-num-slots. We suggest setting this no higher than 128.

 

nas-cm98::*> nfs show -vserver jt98 -fields v4.x-session-num-slots
vserver v4.x-session-num-slots
------- ----------------------
jt98    180
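
To lower it, the corresponding modify command (at the advanced privilege level, using the same vserver as the example above) should look something like this; double-check the parameter name on your ONTAP version:

nas-cm98::*> nfs modify -vserver jt98 -v4.x-session-num-slots 128
nas-cm98::*> nfs show -vserver jt98 -fields v4.x-session-num-slots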

