That is pretty normal. The thing about an "op" is that it's not fixed in size: a tiny cached metadata op counts as one op, and so does a large write. So it's not unusual to see high ops with low network throughput, or lower ops with higher network throughput. If I'm doing 32K writes, 10 "ops" is 320K on the network (plus some overhead). But if my metadata lookups are 1K, it would take 320 ops to generate the same amount of network throughput. That's 32X the ops for the same network throughput.
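If it helps to see the arithmetic laid out, here's a minimal sketch in plain Python. Nothing here is vendor-specific; the 32K and 1K sizes are just the illustrative numbers from above, and the function is made up for the example.

```python
# Sketch of the ops-vs-throughput trade-off: the same bytes on the wire can be
# a handful of large ops or a pile of small ones.

def ops_needed(target_bytes: int, op_size_bytes: int) -> int:
    """How many ops of a given payload size it takes to move target_bytes."""
    return -(-target_bytes // op_size_bytes)  # ceiling division

KiB = 1024
target = 10 * 32 * KiB  # ten 32K writes = 320K on the network (ignoring overhead)

print(ops_needed(target, 32 * KiB))  # 10 ops at 32K
print(ops_needed(target, 1 * KiB))   # 320 ops at 1K for the same throughput
```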
As to which is better: I'm not sure you're asking the right question. In general, large ops perform better than small ones if you're actually going to disk, but "better" is hard to act on, since you usually can't control it and users tend to do what they're going to do. Sometimes you can get them to change their applications, but that can be a tough road.

The stat I'd rather track is latency. When I think performance, I always start there, since it dictates the user experience more than anything else. As a storage vendor, I care about the latency between the request coming in and the reply going out. If that is good and the end-user latency is bad, then it's probably something outside of my control, and I have to bring in the network or host people to figure out the problem. In my view, in-box latency at or under 10ms is pretty good for most environments. That's not to say it won't occasionally go above that, but if it stays at that level or below, most users don't complain about performance. This is, of course, a rule of thumb, so feel free to establish your own threshold based on your environment; if you don't have one, 10ms is probably a good starting point.

So I start with controller latency, and if that's not acceptable, I look to other components for a bottleneck. That can be cache, disk utilization, CPU, or network, and probably a couple of things I haven't thought of at this moment, but hopefully you get the idea.
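To show the triage order in code form, here's a rough sketch. It's not anything built into a real tool: the threshold, the sample data, and the function name are all made up for illustration, and you'd feed it whatever latency numbers your array's stats interface actually reports.

```python
# Start with in-box (controller) latency; only dig into internal components
# if that looks bad, otherwise look outside the box.

THRESHOLD_MS = 10.0  # rule-of-thumb starting point; tune for your environment

def latency_ok(samples_ms: list[float], threshold_ms: float = THRESHOLD_MS) -> bool:
    """True if average controller latency is at or under the threshold."""
    return (sum(samples_ms) / len(samples_ms)) <= threshold_ms

controller_latency_ms = [2.1, 3.4, 8.7, 5.0, 12.3]  # hypothetical samples

if latency_ok(controller_latency_ms):
    print("In-box latency looks fine; if users still complain, look at the network or hosts.")
else:
    print("Controller latency is high; check cache, disk utilization, CPU, and network.")
```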
Any help?