Harvest against ONTAP 9 - throughput seems to be off by a factor of 1000
2016-11-21 09:53 AM
I'm hitting this issue against ONTAP 9.0 with Harvest 1.2, and now with 1.3. It seems throughput might have a conflict in the unit of measure. Anyone else seeing this, or did I jack something up somewhere?
Solved! See The Solution
1 ACCEPTED SOLUTION
J_curl has accepted the solution
Hi @J_curl
I think your screenshot comes from the netapp-dashboard-cluster dashboard (you didn't mention which one!). Double-checking my lab system, it is reporting correctly with ONTAP 9.1RC1, and scrolling back in history it worked correctly with ONTAP 9.0 as well.
The panel populates from the vol_summary metrics branch, so it should be the sum of read_data and write_data for all volumes per node. You could cross-reference with the node dashboard or the volume dashboard to see if they are also off in your install.
Also check the logfile in case it reports some errors.
Lastly, in your netapp-harvest.conf file, make sure normalized_xfer is either set to mb_per_sec or not mentioned at all (in which case it defaults to mb_per_sec). If you set it to gb_per_sec for the FILER perf poller, as is done on purpose in the OCUM poller snippet, your scaling would be off by 1024.
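For illustration, the relevant part of netapp-harvest.conf looks roughly like this. This is only a sketch: the poller section names and hostnames below are made up, not taken from your setup, and your file may carry extra keys.

    [default]
    # leave normalized_xfer unset here, or set the default explicitly
    normalized_xfer = mb_per_sec

    [cluster1]                        # FILER perf poller (illustrative name)
    hostname        = cluster1.example.com
    site            = lab
    # no normalized_xfer here, so it inherits mb_per_sec

    [ocum1]                           # OCUM poller (illustrative name)
    hostname        = ocum1.example.com
    site            = lab
    host_type       = OCUM
    normalized_xfer = gb_per_sec      # intended only for the OCUM poller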
Hope this helps!
Cheers,
Chris Madden
Solution Architect - 3rd Platform - Systems Engineering NetApp EMEA (and author of Harvest)
Blog: It all begins with data
If this post resolved your issue, please help others by selecting ACCEPT AS SOLUTION or adding a KUDO or both!
2 REPLIES
Hi Chris
Ah, I had commented out some lines in the config file but missed one that contained normalized_xfer = gb_per_sec. It was being picked up under the last host entry, which happened to be the cluster in question. Thanks, working fine now!
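In case it helps someone else, the fix amounted to commenting out that stray line under the FILER poller so it falls back to the default. A rough sketch (the section name is hypothetical, not my actual cluster):

    [mycluster]                       # the FILER poller in question (illustrative name)
    hostname = mycluster.example.com
    site     = lab
    #normalized_xfer = gb_per_sec     # commented out; poller now uses the mb_per_sec default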
