Active IQ Unified Manager Discussions

Harvest against ONTAP 9 - throughput seems to be off by a factor of 1000

J_curl

I had this issue against ONTAP 9.0 with Harvest 1.2 and now with 1.3.  It seems throughput might have a conflict in the unit of measure.  Is anyone else seeing this, or did I mess something up somewhere?

 

[Attached screenshots: 9.0.jpg, 9.0Vol.jpg]

 

[Attached screenshot: 8.3.jpg]


2 REPLIES

madden

Hi @J_curl

 

 

I think your screenshot comes from the netapp-dashboard-cluster dashboard (you didn't mention which one!). Double-checking my lab system, it reports correctly with ONTAP 9.1RC1, and scrolling back through the history it worked correctly with ONTAP 9.0 as well.

 

 

The panel populates from the vol_summary metrics branch, so it should be the sum of read_data and write_data for all volumes, per node.  You could cross-reference with the node dashboard or the volume dashboard to see if they are also off in your install.
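
For reference, the panel query is roughly a sumSeries over those two counters, something like the line below (this assumes the default netapp.perf prefix of a stock Harvest 1.x install; your prefix, group, and cluster names will differ):

    sumSeries(netapp.perf.*.$Cluster.node.*.vol_summary.read_data, netapp.perf.*.$Cluster.node.*.vol_summary.write_data)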

 

Also check the poller logfile in case it reports any errors.

 

Lastly, in your netapp-harvest.conf file make sure that normalized_xfer is either set to mb_per_sec or not mentioned at all (in which case it defaults to mb_per_sec).  If you set it to gb_per_sec for the FILER perf poller, as is done on purpose in the OCUM poller snippet, then your scaling would be off by a factor of 1024.
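
As an illustration, the relevant poller sections in netapp-harvest.conf might look like this (section names and hostnames are made up; only the normalized_xfer lines matter here):

    [cluster01]                             # FILER perf poller
    hostname         = cluster01.example.com
    # ... credentials and other poller settings ...
    normalized_xfer  = mb_per_sec           # the default; fine to omit entirely

    [ocum01]                                # OCUM poller
    hostname         = ocum01.example.com
    # ... credentials and other poller settings ...
    normalized_xfer  = gb_per_sec           # intentional here, but wrong for a perf poller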

 

Hope this helps!

 

 

Cheers,
Chris Madden

Solution Architect - 3rd Platform - Systems Engineering NetApp EMEA (and author of Harvest)

Blog: It all begins with data

 

If this post resolved your issue, please help others by selecting ACCEPT AS SOLUTION or adding a KUDO or both!

J_curl

Hi Chris

 

Ah, I had commented out some lines in the config file but missed one containing normalized_xfer = gb_per_sec.  It was being applied under the last host entry, which happened to be the cluster in question.  Thanks, working fine now!
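
For anyone hitting the same thing, the stray leftover looked roughly like this under the cluster's poller section (names changed for the example); commenting it out, or setting it to mb_per_sec, fixed the scaling:

    [my-cluster]
    hostname          = my-cluster.example.com
    #normalized_xfer  = gb_per_sec          # stray line; remove it so the poller falls back to mb_per_sec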

 

 
