Other IOPs are anything that isn't a read or a write. The count should be very low in FC/iSCSI environments but can be significant in some NAS workloads, since the storage presents the filesystem. Actions like "get the file list in a directory" or "get the last modification date of a file" result in an other IOP that the storage controller has to serve. Windows clients (especially older ones running SMB 1, like Windows XP) are very chatty and issue a lot of other IOPs. Software build environments over NFS also typically have high other IOPs. Sometimes people also run software that walks the filesystem from the client (either on purpose, or as part of some poorly written code) and generates a lot of other IOPs.
So to answer your question, there is no "normal" level. If the level is high, it's because your clients are issuing those IOPs. For a point-in-time view of which volumes are receiving those requests, use "stats show volume:*:other_ops".
As for how they affect performance: other IOPs typically consume little disk I/O (assuming there is enough system memory or Flash Cache/Flash Pool, they are usually served from cache), but they do consume CPU to respond to.
Any IOP that is not a read or a write is called an "other" IOP. There is no normal level for them.
As the previous contributor mentioned, client-side activities such as getattr, listing files, and searches done by scripts on the client (e.g. the ls command) are the main sources of other IOPS.
Typically, a NetApp box at any site is sized to handle a specific number of IOPS at a given level of performance. Clients that generate high other IOPS inflate the overall IOPS figure, which can mislead admins about whether the filer can take further load and whether further allocations can be performed on it. Although high other IOPS are usually not a problem in themselves, it is worth identifying the servers that generate very high OTHER_IOPS and reducing their impact by tweaking options on the client UNIX machines.
The above response from Chris appears to be a little dated and possibly no longer completely accurate.
I can use an array here as proof by counterexample.
This is an all-flash FAS in a MetroCluster configuration with only FC connectivity -- NAS is turned off, so there can be no NAS workload. Inline and post-process compression/dedupe are enabled. "Other" IOPS can at times be quite high - see below. My hypothesis is that these IOPS are driven by the periodic post-process dedupe job, but I have not taken the time to prove or disprove it: it does not appear to affect performance, and the storage efficiency rate on this array is quite good.