instances requested for the nfsv3 object exceeds the data capacity of the performance subsystem

Hello All,

 

I'm trying to find out how to modify the maximum number of instances returned by an NFS performance query over ZAPI.

 

The call fails with this error:

 

<results reason="Aggregated instances requested for the nfsv3 object exceeds the data capacity of the performance subsystem, because it includes 15552 constituent instances. With the current counter set, use the -node, -vserver, or -filter flags to include at most 6304 constituent instances in order to stay within the data capacity. Alternatively, requesting fewer counters will also reduce the required data and may allow more instances to be requested.resource limit exceeded" status="failed" errno="13001"/></netapp>

 

I have read that the limit on the number of instances exists as a performance cap, or possibly to avoid time-outs, but I cannot confirm this.

 

Any pointers to engineering resources would be appreciated.

 

 

Re: instances requested for the nfsv3 object exceeds the data capacity of the performance subsystem

Just to be clear: you're making a single call to perf-object-get-instances, it wants to return 15k+ instances, and that is what triggers the error?

 

Is there any way you can break the request into multiple smaller requests and aggregate the results on the client side?  For example, target each SVM individually, or volumes starting with each letter of the alphabet.
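A minimal sketch of that batching idea, in Python. The chunking logic is generic; the request-building part is an assumption based on the perf-object-get-instances ZAPI (the exact element names, such as instance-uuids, may differ by ONTAP version, and the transport that actually posts the XML to the filer is omitted):

```python
# Sketch: list the instances first (e.g. via perf-object-instance-list-info-iter),
# then request counters in batches small enough to stay under the cap the
# error message reports (6304 constituent instances for this counter set).
import xml.etree.ElementTree as ET

MAX_INSTANCES = 6304  # cap reported in the error message

def chunk(items, size=MAX_INSTANCES):
    """Split a list of instance identifiers into request-sized batches."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def build_request(obj_name, counters, instance_uuids):
    """Build one perf-object-get-instances request body for a batch.
    Element names here are an assumption from the ZAPI docs, not verified
    against a live system."""
    api = ET.Element("perf-object-get-instances")
    ET.SubElement(api, "objectname").text = obj_name
    ctrs = ET.SubElement(api, "counters")
    for c in counters:
        ET.SubElement(ctrs, "counter").text = c
    uuids = ET.SubElement(api, "instance-uuids")
    for u in instance_uuids:
        ET.SubElement(uuids, "instance-uuid").text = u
    return ET.tostring(api, encoding="unicode")

# 15552 instances (the count from the error) split into 3 batches of <= 6304;
# each batch becomes one request, and the client merges the responses.
batches = chunk(["inst-%d" % i for i in range(15552)])
requests = [build_request("nfsv3", ["read_ops", "write_ops"], b)
            for b in batches]
```

The same chunking works if you scope by SVM instead of by UUID list: issue one request per vserver and concatenate the instance data client-side.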

 

Andrew

If this post resolved your issue, please help others by selecting ACCEPT AS SOLUTION or adding a KUDO.