nblade.execsOverLimit: The number of in-flight requests from client with source IP x.x.x.x
2021-10-05
07:11 AM
After upgrading to 9.9.1, the event log is full of these errors:
ERROR nblade.execsOverLimit: The number of in-flight requests from client with source IP x.x.x.x to destination LIF x.x.x.x (Vserver 18) is greater than the maximum number of in-flight requests allowed (128). The client might see degraded performance due to request throttling.
I have already made the changes the NetApp KB suggests on the NetApp side, but I can't find what settings I need to change on the VMware ESXi side.
Is NFS.MaxQueueDepth = 64 the right setting?
VMware is version 7.x.
1 ACCEPTED SOLUTION
TSunsLV has accepted the solution
From the VMware side, you are correct: you will need to set the MaxQueueDepth parameter for each share from ESXCLI.
Example:
[root@esx1:~] esxcli storage nfs param get -v <volume_name>
Volume Name  MaxQueueDepth  MaxReadTransferSize  MaxWriteTransferSize
-----------  -------------  -------------------  --------------------
volume_name     4294967295               131072                131072

[root@esx1:~] esxcli storage nfs param set -v <volume_name> -q 128

[root@esx1:~] esxcli storage nfs param get -v <volume_name>
Volume Name  MaxQueueDepth  MaxReadTransferSize  MaxWriteTransferSize
-----------  -------------  -------------------  --------------------
volume_name            128               131072                131072
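If you have many NFS datastores, a small loop in the ESXi shell can apply the same value to each of them. This is only a sketch: it assumes the datastore names come back in the first column of "esxcli storage nfs list" and contain no spaces.

for ds in $(esxcli storage nfs list | awk 'NR>2 {print $1}'); do
    # per-datastore queue depth of 128, matching the ONTAP in-flight limit
    esxcli storage nfs param set -v "$ds" -q 128
done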
Hope this answers your question.
Thanks
7 REPLIES
Also be aware that NFS.MaxQueueDepth is only used with NFSv3. If you use NFSv4, you'll need to modify the ONTAP setting v4.x-session-num-slots. We suggest setting this to no higher than 128.
nas-cm98::*> nfs show -vserver jt98 -fields v4.x-session-num-slots
vserver v4.x-session-num-slots
------- ----------------------
jt98 180
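A hedged example of lowering it to the suggested maximum on that same example SVM (jt98 is just the vserver from the output above):

nas-cm98::*> nfs modify -vserver jt98 -v4.x-session-num-slots 128
nas-cm98::*> nfs show -vserver jt98 -fields v4.x-session-num-slots
vserver v4.x-session-num-slots
------- ----------------------
jt98    128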
Please refer to this KB https://kb.netapp.com/Advice_and_Troubleshooting/Data_Storage_Software/ONTAP_OS/execsOverLimit_error_message_found_in_event_logs_for_NFSv4_clients
- For NFSv4.1 and NFSv4.2 the 128 limit still applies and can be controlled by lowering the -v4.x-session-num-slots option to 128 (the default is 180).
- Verify settings (CLI):
set -privilege advanced
vserver nfs show -vserver <SVM_NAME> -fields v4.x-session-num-slots
- Change settings:
set -privilege advanced
vserver nfs modify -vserver <SVM_NAME> -v4.x-session-num-slots 128
- This will only apply to sessions created after the configuration change.
- Active sessions will use the previous -v4.x-session-num-slots configuration.
- NFSv4 maintains sessions, so after modifying this option it only takes effect once the client remounts and the session is recreated; a way to check which clients are still connected is sketched below.
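To see which clients are currently connected (and therefore may still be on sessions negotiated with the old slot count), recent ONTAP releases provide a connected-clients view; a hedged example, with <SVM_NAME> as a placeholder:

vserver nfs connected-clients show -vserver <SVM_NAME>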
We are also seeing this on ESXi 7 + 9.8P4. VMware support told me to set SunRPC.MaxConnPerIP and NFS.MaxQueueDepth to 32 (in the UI), but this did not help. When I look at the values for the volumes with esxcli, they are still at the default (4294967295).
I presume 'esxcli storage nfs param set -v <volume_name> -q 128' will require a reboot of the host?
@ajs
Yes, it will require a reboot of the host for the change to take effect.
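After the reboot, a hedged way to confirm the change took effect at both levels, per datastore and for the host-wide advanced option (<volume_name> is a placeholder):

esxcli storage nfs param get -v <volume_name>
esxcli system settings advanced list -o /NFS/MaxQueueDepth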
Hi,
Also seeing this on a new 9.9.1 AFF system with only 3 VMware ESXi 7.0.x hosts connected.
So it looks like this should be configured to the same max queue depth at both ends, VMware and ONTAP? That seems to be implied by TR-4067 and this discussion, but it isn't explicit...
Thanks,
Steve
Answering my own question by reading this thread properly, and also finding that all the NetApp-recommended ESXi host settings are documented here: https://docs.netapp.com/us-en/netapp-solutions/virtualization/vsphere_ontap_recommended_esxi_host_and_other_ontap_settings.html
Cheers,
Steve
