Hi @fede_melaccio
I have seen other reports of much higher throughput from the 'volume' counters, and after researching (running netapp-worker -v and comparing raw counter data) I discovered that the underlying ONTAP counters were incorrect; garbage in, garbage out, as they say.
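For context on why a bad raw counter shows up as crazy throughput: a collector like Harvest turns two samples of a cumulative raw counter into a per-second rate, so a corrupted raw value feeds straight through into an absurd rate. Here is a minimal sketch of that idea in Python; the sample values are hypothetical and this is not Harvest's actual code:

def rate_per_sec(prev_value, curr_value, interval_s):
    """Return bytes/sec from two samples of a cumulative raw counter."""
    delta = curr_value - prev_value
    if delta < 0:
        # Counter wrapped or was reset between samples; skip this interval.
        return None
    return delta / interval_s

# Hypothetical samples of the volume write_data counter (cumulative bytes):
healthy = rate_per_sec(1_000_000_000, 1_055_000_000, 1.0)
corrupt = rate_per_sec(1_000_000_000, 9_000_000_000_000, 1.0)

print("healthy sample: %.1f MB/s" % (healthy / 1e6))   # ~55 MB/s, plausible
print("corrupt sample: %.1f TB/s" % (corrupt / 1e12))  # ~9 TB/s, impossible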
An easy way to tell if the raw counters are buggy is to check statistics from the CLI (lun_2 is my volume name):
sdt-cdot1::> statistics show-periodic -object volume -instance lun_2 -counter write_data|instance_name
sdt-cdot1: volume.lun_2: 9/6/2016 04:11:34
instance    write
    name     data
-------- --------
   lun_2   55.2MB
   lun_2   54.3MB
If the write_data value is implausibly high, open a support case and suggest that support look at:
bug 1048529 - "write_data value in volume stats is unreliable"
By the way, the QoS-based counters should still be accurate, so you can also check the QoS rows on the Volume dashboard and see whether there is a big difference between those values and the ones from the wafl/volume row on the same dashboard.
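If you would rather check that programmatically than eyeball the dashboard, here is a minimal sketch (Python, hypothetical sample data, not part of Harvest) that flags intervals where the wafl/volume write rate diverges badly from the QoS-based rate:

def flag_divergence(qos_rates, volume_rates, tolerance=2.0):
    """Yield (index, qos, volume) where the two rates differ by more than tolerance x."""
    for i, (q, v) in enumerate(zip(qos_rates, volume_rates)):
        if q > 0 and v > 0 and max(q, v) / min(q, v) > tolerance:
            yield i, q, v

qos = [54.0, 55.2, 53.8, 54.9]      # QoS-based write rate, MB/s
vol = [54.1, 55.0, 9300.0, 55.1]    # wafl/volume write rate; third sample is bogus

for i, q, v in flag_divergence(qos, vol):
    print("sample %d: qos=%.1f MB/s, volume=%.1f MB/s -- suspect raw counter" % (i, q, v))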
Cheers,
Chris Madden
Solution Architect - 3rd Platform - Systems Engineering NetApp EMEA (and author of Harvest)
Blog: It all begins with data
If this post resolved your issue, please help others by selecting ACCEPT AS SOLUTION or adding a KUDO or both!