That is pretty normal. The thing about an "op" is that it's not fixed in size. A tiny cached metadata op counts as one op, and so does a large write. Thus, it's not unusual to see high ops with low network throughput, and lower ops with higher network throughput. If I'm doing 32K writes, 10 "ops" is 320K on the network (plus some overhead). But if my metadata lookup is 1K, it would take 320 ops to generate the same amount of network throughput. That's 32X the ops for the same amount of network throughput.
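The arithmetic above can be sketched in a few lines. This is just an illustration of the numbers in the example; the function name and sizes are mine, not anything from a real tool:

```python
# Op size drives how many ops it takes to reach a given network throughput.
def throughput_kb(ops: int, op_size_kb: int) -> int:
    """Network throughput (KB) generated by `ops` operations of a given size."""
    return ops * op_size_kb

writes_kb = throughput_kb(ops=10, op_size_kb=32)   # 10 x 32K writes
print(writes_kb)                                    # 320 KB on the wire

# How many 1K metadata lookups does it take to match that?
metadata_ops = writes_kb // 1
print(metadata_ops)                                 # 320 ops, i.e. 32X the op count
```

Same bytes on the wire, very different op counts, which is why ops and network throughput often move in opposite directions.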
As to which is better: I'm not sure you're asking the right question. In general, large ops perform better than small ones, assuming you are going to disk. But "better" also depends on whether you can control it, given that users tend to do what they will do. Sometimes you can get them to change their applications, but that can be a tough road.

The stat I'd rather track is latency. When I think performance, I always start there, since that will dictate the user experience more than anything else. And as a storage vendor, I care about the latency of the request coming in and the reply going out. If that is good and the end-user latency is bad, then it's probably something outside of my control, and I have to bring in the network or host people to figure out the problem. In my view, in-box latency at or under 10ms is pretty good for most environments. That's not to say it won't occasionally go above that, but if it's at that level or below, most users don't complain about performance. This is, of course, a rule of thumb, so feel free to establish your own threshold based on your environment; but if you don't have one, 10ms is probably a good starting point. So I start with controller latency, and if that's not acceptable, I then look to other components for a bottleneck. That can be cache, disk utilization, CPU, or network, and probably a couple of things I haven't thought of at this moment, but hopefully you get the idea.
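As a minimal sketch of that rule of thumb, here's how you might flag sample intervals where controller latency exceeds the threshold. The sample data and names are illustrative, not output from any real monitoring tool:

```python
# Rule-of-thumb check: flag intervals where average latency exceeds 10 ms.
THRESHOLD_MS = 10.0

def slow_intervals(latencies_ms):
    """Return (index, latency) pairs for samples above the threshold."""
    return [(i, lat) for i, lat in enumerate(latencies_ms) if lat > THRESHOLD_MS]

samples = [3.2, 5.1, 12.4, 8.0, 15.7, 6.3]   # per-interval avg latency, ms
print(slow_intervals(samples))                # → [(2, 12.4), (4, 15.7)]
```

Only when intervals like those show up consistently would you start drilling into cache, disk utilization, CPU, or network for the bottleneck.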
I probably should have stated the question in a different way. If you had a choice between two heads to host several VM sessions via NFS, which one would be preferred: the one with the higher ops and lower network utilization, or the head with the higher network utilization and lower ops?
If I have to choose, I typically prefer better throughput over more ops. In general, larger ops perform better than lots of little ones, especially if you are hitting disk rather than cache. But like I said, sometimes you don't get to control that, so I tend to rely on latency as my real guide.
In many VM environments, you typically need lots of IOPS, because the IO generated by the different guests is mostly random IO with a small block size. You typically also see that the actual throughput is very low because of this.
So you need to choose the head that can offer the most IOPS possible. If the number of spindles is comparable between the two heads, and one head is already serving a high number of IOPS, it would probably be better to choose the other head (assuming it still has more spare capacity to serve IOPS than the busy head).
If the disk configuration between the heads is different (say FC versus SATA, or many more spindles on one head), you need to choose the head that can offer the most IOPS (largest number of spindles in the aggregate, or fastest disk technology).
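The selection logic above boils down to comparing spare IOPS headroom. Here's an illustrative sketch; the capacity numbers are made-up assumptions, and in practice a head's IOPS ceiling depends on spindle count and disk technology:

```python
# Prefer the head with the most spare IOPS capacity.
def spare_iops(max_iops: int, current_iops: int) -> int:
    """Headroom left on a head: its IOPS ceiling minus current load."""
    return max_iops - current_iops

heads = {
    "head_a": {"max_iops": 20000, "current_iops": 15000},  # busier head
    "head_b": {"max_iops": 18000, "current_iops": 4000},   # quieter head
}

best = max(heads, key=lambda h: spare_iops(**heads[h]))
print(best)   # → head_b: more headroom despite a lower ceiling
```

The point is that the head already serving a high number of IOPS can lose out even if its raw ceiling is higher, which matches the advice in the comparable-spindles case above.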