

Harvest against ONTAP 9 - throughput seems to be off by a factor of 1000

J_curl

I'm seeing this issue against ONTAP 9.0 with Harvest 1.2, and again now with 1.3. It seems throughput might have a unit-of-measure conflict somewhere. Is anyone else seeing this, or did I misconfigure something?

[Attached screenshots: 9.0.jpg, 9.0Vol.jpg, 8.3.jpg]

1 ACCEPTED SOLUTION

madden

Hi @J_curl

I think your screenshots come from the netapp-dashboard-cluster dashboard (you didn't mention which one!). Double-checking my lab system, it reports correctly with ONTAP 9.1RC1, and scrolling back in history it reported correctly with ONTAP 9.0 as well.

The panel populates from the vol_summary metrics branch, so it should be the sum of read_data and write_data for all volumes per node. You could cross-reference with the node dashboard or the volume dashboard to see if they are also off in your install.
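
For illustration, you could reproduce the panel's math with a Graphite expression along these lines (treat this as a sketch: the metric paths are placeholders, and the actual prefix depends on your graphite_root setting and your group/cluster names):

    # Hypothetical metric paths; substitute your own group and cluster names
    sumSeries(
      netapp.perf.mygroup.mycluster.node.*.vol_summary.read_data,
      netapp.perf.mygroup.mycluster.node.*.vol_summary.write_data
    )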

Also check the logfile in case it reports any errors.
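
For example, assuming a default Harvest 1.x install under /opt/netapp-harvest (adjust the path and poller name to your setup):

    # Show the most recent log entries for the poller in question
    tail -n 50 /opt/netapp-harvest/log/mycluster_netapp-harvest.log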

Lastly, in your netapp-harvest.conf file, make sure normalized_xfer is either set to mb_per_sec or not mentioned at all (in which case it defaults to mb_per_sec). If you set it to gb_per_sec for the FILER perf poller, as is done on purpose in the OCUM poller snippet, your scaling would be off by a factor of 1024.
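
As a sketch, the relevant part of netapp-harvest.conf might look like this (section names and addresses are placeholders):

    # Perf poller (host_type defaults to FILER): leave normalized_xfer
    # unset, or at its default; gb_per_sec here would skew graphs by 1024x
    [mycluster]
    hostname        = 10.0.0.10
    site            = mysite
    normalized_xfer = mb_per_sec

    # OCUM poller: gb_per_sec is intentional here (capacity data, not perf)
    [myocum]
    hostname        = 10.0.0.20
    site            = mysite
    host_type       = OCUM
    normalized_xfer = gb_per_sec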

Hope this helps!

Cheers,
Chris Madden

Solution Architect - 3rd Platform - Systems Engineering, NetApp EMEA (and author of Harvest)

Blog: It all begins with data

If this post resolved your issue, please help others by selecting ACCEPT AS SOLUTION or adding a KUDO, or both!

2 REPLIES


J_curl

Hi Chris

Ah, I had commented out some lines in the config file but missed one that contained normalized_xfer = gb_per_sec, so that setting applied to the last host entry, which happened to be the cluster in question. Thanks, working fine now!
