Hi,
In the netapp-harvest.conf file you will find a default key/value like this:
normalized_xfer = mb_per_sec
This normalizes all throughput counters to MB/s, so in Graphite and Grafana you are viewing MB/s rather than the native unit of whatever Data ONTAP counter manager counter is being graphed. I found normalizing data to be a much easier way of working; you can always scale back to whatever unit you need for your use case.
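If you do want a different unit downstream, Graphite can rescale a series on the fly with its scale() function. A minimal sketch, assuming a hypothetical metric path (substitute the actual path of your Harvest metric):

```
# Convert a throughput series stored in MB/s to GB/s in Graphite.
# The metric path below is made up for illustration only.
scale(netapp.perf.cluster1.node1.total_data, 0.001)
```

In Grafana with a Graphite data source, the same scale() function can be added through the query's function editor.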
Regarding throughput being off, sometimes it is just user confusion, because with cDOT the node that does the frontend protocol work is not necessarily the same one that does the backend volume work. Depending on the object you're looking at, you may see frontend or backend numbers. In the default "node" dashboard you will see "protocol backend drilldown" as well as views like "FCP frontend drilldown" to show both sides.
So in the "frontend" views you see very detailed information about the IOPS arriving at that node. Those IOPS are then translated into WAFL messages and sent to the backend (on the same or a different node) to be serviced. At the "backend" the messages are tagged with protocol but otherwise are only tracked as read/write/other, versus the much richer detail tracked at the "frontend" node. If all traffic is direct (IOPS arrive on a LIF on the same node that owns the volume), the "frontend" and "backend" numbers should agree; if you have indirect traffic, they will differ.
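To make the direct vs. indirect distinction concrete, here is a toy sketch (not Harvest code; node names, LIF placement, and volume ownership are invented) of how per-node frontend and backend counts can diverge while cluster-wide totals still agree:

```python
# Toy model: each request arrives on a LIF hosted by one node
# (frontend) and is serviced by the node owning the target volume
# (backend). Indirect traffic is any request where these differ.
requests = [
    {"lif_node": "node1", "vol_node": "node1"},  # direct
    {"lif_node": "node1", "vol_node": "node2"},  # indirect
    {"lif_node": "node2", "vol_node": "node2"},  # direct
]

frontend = {}  # IOPS counted at the node receiving the request
backend = {}   # IOPS counted at the node servicing the volume
for r in requests:
    frontend[r["lif_node"]] = frontend.get(r["lif_node"], 0) + 1
    backend[r["vol_node"]] = backend.get(r["vol_node"], 0) + 1

print(frontend)  # {'node1': 2, 'node2': 1}
print(backend)   # {'node1': 1, 'node2': 2}
# Cluster-wide totals always match, but per-node frontend and
# backend numbers differ whenever traffic is indirect.
```

With only direct traffic the two dictionaries would be identical, which is why agreeing numbers are a quick sanity check for direct data access.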
Could you check your setup with the above info in mind and let us know if that helps?
--If it does, please also "accept as answer" the post that answered your question so that others will see the Q/A is answered.
Cheers,
Chris Madden
Storage Architect, NetApp EMEA (and author of Harvest)
Blog: It all begins with data