That is the latency for operations other than reads and writes. For example, in an NFS environment, metadata operations such as GETATTR and ACCESS calls are other_ops, and their response time is what gets measured as other_latency.
The nfsv3 counter object in DataONTAP tells you what all these 'other' operations are for NFS. I'm pretty sure there is a similar one for CIFS. Here is a list of the NFSv3 operations other than read & write (these are the column names from the counter definitions below): null, getattr, setattr, lookup, access, readlink, create, mkdir, symlink, mknod, remove, rmdir, rename, link, readdir, readdirplus, fsstat, fsinfo, pathconf, commit.
There are three counters in the nfsv3 counter object that are extremely useful for monitoring these "other" operations if you need to. You can create custom views in Performance Advisor that will show each of these counters over time. Very useful for troubleshooting "chatty" NFS applications that generate a lot of 'other' NFS operations.
Name: nfsv3_op_percent
Description: Array of select NFS v3 operations as a percentage of total NFS v3 operations
Properties: percent
Unit: percent
Size: 22 column array
Column names: null, getattr, setattr, lookup, access, readlink, read, write, create, mkdir, symlink, mknod, remove, rmdir, rename, link, readdir, readdirplus, fsstat, fsinfo, pathconf, commit
Base Name: nfsv3_ops
Base Description: Total number of NFS v3 operations per second
Base Properties: rate
Base Unit: per_sec
Name: nfsv3_op_latency
Description: Array of latencies of select NFS v3 operations
Properties: average
Unit: microsec
Size: 22 column array
Column names: null, getattr, setattr, lookup, access, readlink, read, write, create, mkdir, symlink, mknod, remove, rmdir, rename, link, readdir, readdirplus, fsstat, fsinfo, pathconf, commit
Base Name: nfsv3_op_latency_base
Base Description: Array of select NFS v3 operation counts for latency calculation
Base Properties: delta,no-display
Base Unit: none
Base Size: 22 column array
Base Column names: null, getattr, setattr, lookup, access, readlink, read, write, create, mkdir, symlink, mknod, remove, rmdir, rename, link, readdir, readdirplus, fsstat, fsinfo, pathconf, commit
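For anyone who wants to do the math by hand: the two array counters above are enough to build the per-operation breakdown yourself from two raw samples. A minimal sketch in plain Python, assuming you have already pulled the raw arrays somehow (the sample lists and helper functions here are hypothetical, not a real DataONTAP API):

```python
# Per-operation NFSv3 columns, exactly as listed in the counter
# definitions above (22 entries).
COLS = ["null", "getattr", "setattr", "lookup", "access", "readlink",
        "read", "write", "create", "mkdir", "symlink", "mknod",
        "remove", "rmdir", "rename", "link", "readdir", "readdirplus",
        "fsstat", "fsinfo", "pathconf", "commit"]

def op_latencies_usec(lat1, lat2, base1, base2):
    """Average latency per op between two samples, in microseconds.

    nfsv3_op_latency is an 'average'-property counter: each cell
    accumulates microseconds spent in that op, and the base counter
    (nfsv3_op_latency_base) accumulates the matching op count, so
        avg = delta(latency_sum) / delta(op_count).
    """
    out = {}
    for i, name in enumerate(COLS):
        ops = base2[i] - base1[i]
        out[name] = (lat2[i] - lat1[i]) / ops if ops else 0.0
    return out

def other_ops_share(pct):
    """Sum nfsv3_op_percent over every column except read and write,
    i.e. the share of traffic that shows up as other_ops."""
    return sum(p for name, p in zip(COLS, pct)
               if name not in ("read", "write"))
```

With two samples a minute apart, a spike in, say, the getattr column of `op_latencies_usec` immediately tells you which 'other' operation is dragging other_latency up, which is the same thing the Performance Advisor custom views show graphically.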
We had the exact same issue: other_ops were showing up in Performance Advisor and completely throwing off our graphs. After unsuccessful troubleshooting with both NetApp and VMware, we finally determined that the Veeam monitoring platform was constantly enumerating our NFS datastores. After shutting off the Veeam collector service, the other_ops went away. I believe Veeam released an update that fixes this behavior, but I wouldn't doubt that other monitoring platforms could cause the same issue.
Does anyone know what kind of operations would count as other_ops, and be included in other_latency, for a volume that is accessed by FCP only? I have a customer who is seeing sub-5ms latency for both reads and writes, but he is concerned about his other_latency spikes of 40ms. Any assistance would be appreciated.
I am running into this problem as well. I'm seeing 600 millisecond other_latency spikes within the Volume Latency View in performance advisor. This is a Fibre Channel LUN which is storing Oracle Data files.
My experience has changed since the comments I added to this thread last June. Back in the 7.3 days I had typically seen only protocol-based traffic and latency data in the volume-based counters. With 8.0, I have confirmation that some system operations can show up in these counters, and I've seen that on the systems I've looked at as well.
There is a way, though, to narrow this down and determine whether it is system operations or protocol operations that are causing the volume latency to be high. There is a set of per-protocol volume counters that you can enable with the "dfm options set perfAdvisorShowDiagCounters=Enabled" command. Once they are enabled, instead of using the default volume latency counters, use the protocol-based volume counters.
Those volume protocol counters exist for all protocols (fcp, iscsi, cifs, and nfs). Compare the volume protocol counters to the default volume counters, and see if the volume protocol counters are more in line with what you are expecting. Hope that helps,
System-level work is prioritized lower than protocol-level work, which is likely why you will see high volume other_latency values that don't line up with the protocol-specific other_latency counters when you look at each one individually.
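One way to act on that advice is to pull the default volume counter and the protocol-specific one over the same interval and look at the gap. A rough sketch in plain Python with made-up sample readings; the per-protocol counter naming (e.g. an fcp variant of other_latency) follows the pattern described above, but the exact counter names on your system are an assumption here, not something I can confirm:

```python
def unexplained_other_latency(volume_other_usec, protocol_other_usec):
    """Estimate how much of the volume's other_latency is NOT explained
    by protocol (client) traffic.

    The default volume counter includes system work; the diag
    protocol-specific counter should reflect only client operations.
    A large gap therefore points at lower-priority system operations
    rather than client I/O.  Values are in microseconds.
    """
    gap = volume_other_usec - protocol_other_usec
    return max(gap, 0)  # clamp: sampling jitter can make the diff negative

# Hypothetical Performance Advisor readings for one interval:
vol = 40000   # default volume other_latency spike (40 ms)
fcp = 4500    # protocol-specific other_latency for FCP, same interval
gap = unexplained_other_latency(vol, fcp)
```

If the gap dominates, the spike is coming from system operations, and chasing the FCP clients (as in the 40ms question above) would be a dead end.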
I am using Performance Advisor to investigate performance issues on our FAS3270s. I have pinpointed several volumes with high latency; however, in several cases it is due to "other" latency. This is true for several NFS, CIFS, and iSCSI volumes. I'm not seeing a clear-cut resolution in this thread to the question of how to determine exactly what "other" latency consists of on a volume. Does anyone else have any insight on this? Is the only option to open a case with NetApp Support to determine the cause on an individual basis?