Active IQ Unified Manager Discussions

NetApp Harvest v1.3 is available!

madden

I am pleased to announce that Harvest v1.3 is now available on the NetApp Support Toolchest!  This feature release includes lots of new counters, dashboard panels, and of course, bug fixes.  For those looking for support for ONTAP 9 and 9.1, and OCUM 6.4/7.0/7.1, it's in there too.

 

More info is in my blog here: Announcing NetApp Harvest v1.3 

And the software can be downloaded here:  NetApp Harvest v1.3

 

 

Cheers,
Chris Madden

Solution Architect - 3rd Platform - Systems Engineering NetApp EMEA (and author of Harvest)

Blog: It all begins with data

 

If this post resolved your issue, please help others by selecting ACCEPT AS SOLUTION or adding a KUDO or both!

 

 

 

31 REPLIES

Alex_W

Thanks...! That's what is happening. We do have data showing for that dashboard.

 

I had created another thread for this. I've updated it with the information you sent and set it as resolved.

 

Regards,

 

Alex

CFidder

Awesome, I was waiting for this release! 1.2.2 was already great, so I can't wait to upgrade to 1.3.

 

Keep up the nice work!

dlmaldonado

Amazing work. Thank you for continuing to make this better and better. Love having this in my toolbox.

moep

Works well with Grafana 4.0.1

rcasero

Good morning team. I upgraded and noticed a few weeks later that the capacity metrics I had have stopped reporting. If I add any new LUNs/volumes, they report only under "Netapp-Harvest Lun".

 

Any help is greatly appreciated... I have included some screenshots.

 

 

madden

Hi @rcasero

 

Seems odd.  Have you checked if your disk is full?  If your disk is full, existing metrics will continue to be updated but no new metrics can be created.  If your disk is not full, please check the poller's logfile in /opt/netapp-harvest/log/<pollername>.log for clues about what could be going wrong.
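For example, a quick way to check both at once might look like this (a minimal sketch; replace <pollername> with the poller section name from netapp-harvest.conf):

# Check filesystem free space
df -h

# Scan the poller log for recent warnings or errors
grep -iE 'warning|error' /opt/netapp-harvest/log/<pollername>.log | tail -20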

 

Cheers,
Chris Madden

Solution Architect - 3rd Platform - Systems Engineering NetApp EMEA (and author of Harvest)

Blog: It all begins with data

 

If this post resolved your issue, please help others by selecting ACCEPT AS SOLUTION or adding a KUDO or both!

rcasero

Chris, here is my output... Disk space is good...

 

 

root@s1lsnvcp1:/var/lib/grafana# df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            2.0G  4.0K  2.0G   1% /dev
tmpfs           396M  696K  395M   1% /run
/dev/sda1       233G   41G  181G  19% /
none            4.0K     0  4.0K   0% /sys/fs/cgroup
none            5.0M     0  5.0M   0% /run/lock
none            2.0G     0  2.0G   0% /run/shm
none            100M     0  100M   0% /run/user

 

 

Output:

 

[2016-10-21 19:15:00] [NORMAL ] Poller status: status, secs=14400, api_time=9, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-10-21 23:15:00] [NORMAL ] Poller status: status, secs=14400, api_time=9, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-10-22 02:15:00] [WARNING] [aggregate] update failed with reason: in Zapi::invoke, cannot connect to socket
[2016-10-22 02:15:00] [WARNING] [aggregate] data-list update failed.
[2016-10-22 02:15:00] [WARNING] [volume] update failed with reason: in Zapi::invoke, cannot connect to socket
[2016-10-22 02:15:00] [WARNING] [volume] data-list update failed.
[2016-10-22 02:15:00] [WARNING] [lun] update failed with reason: in Zapi::invoke, cannot connect to socket
[2016-10-22 02:15:00] [WARNING] [lun] data-list update failed.
[2016-10-22 02:15:00] [WARNING] [qtree] update failed with reason: in Zapi::invoke, cannot connect to socket
[2016-10-22 02:15:00] [WARNING] [qtree] data-list update failed.
[2016-10-22 03:15:00] [NORMAL ] Poller status: status, secs=14400, api_time=9, plugin_time=0, metrics=15225, skips=0, fails=4
[2016-10-22 07:15:00] [NORMAL ] Poller status: status, secs=14400, api_time=9, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-10-22 11:15:00] [NORMAL ] Poller status: status, secs=14400, api_time=9, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-10-22 15:15:00] [NORMAL ] Poller status: status, secs=14400, api_time=9, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-10-22 19:15:00] [NORMAL ] Poller status: status, secs=14400, api_time=9, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-10-22 23:15:00] [NORMAL ] Poller status: status, secs=14400, api_time=10, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-10-23 01:30:03] [WARNING] [aggregate] update failed with reason: in Zapi::invoke, cannot connect to socket
[2016-10-23 01:30:03] [WARNING] [aggregate] data-list update failed.
[2016-10-23 01:30:06] [WARNING] [volume] update failed with reason: in Zapi::invoke, cannot connect to socket
[2016-10-23 01:30:06] [WARNING] [volume] data-list update failed.
[2016-10-23 01:30:09] [WARNING] [lun] update failed with reason: in Zapi::invoke, cannot connect to socket
[2016-10-23 01:30:09] [WARNING] [lun] data-list update failed.
[2016-10-23 01:30:12] [WARNING] [qtree] update failed with reason: in Zapi::invoke, cannot connect to socket
[2016-10-23 01:30:12] [WARNING] [qtree] data-list update failed.
[2016-10-23 01:45:00] [WARNING] [aggregate] update failed with reason: in Zapi::invoke, cannot connect to socket
[2016-10-23 01:45:00] [WARNING] [aggregate] data-list update failed.
[2016-10-23 01:45:03] [WARNING] [volume] update failed with reason: in Zapi::invoke, cannot connect to socket
[2016-10-23 01:45:03] [WARNING] [volume] data-list update failed.
[2016-10-23 01:45:06] [WARNING] [lun] update failed with reason: in Zapi::invoke, cannot connect to socket
[2016-10-23 01:45:06] [WARNING] [lun] data-list update failed.
[2016-10-23 01:45:09] [WARNING] [qtree] update failed with reason: in Zapi::invoke, cannot connect to socket
[2016-10-23 01:45:09] [WARNING] [qtree] data-list update failed.
[2016-10-23 03:15:00] [NORMAL ] Poller status: status, secs=14400, api_time=10, plugin_time=0, metrics=14210, skips=0, fails=8
[2016-10-23 07:15:00] [NORMAL ] Poller status: status, secs=14400, api_time=10, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-10-23 11:15:00] [NORMAL ] Poller status: status, secs=14400, api_time=9, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-10-23 15:15:00] [NORMAL ] Poller status: status, secs=14400, api_time=9, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-10-23 19:15:00] [NORMAL ] Poller status: status, secs=14400, api_time=9, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-10-23 23:15:00] [NORMAL ] Poller status: status, secs=14400, api_time=10, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-10-24 03:15:00] [NORMAL ] Poller status: status, secs=14400, api_time=10, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-10-24 07:15:00] [NORMAL ] Poller status: status, secs=14400, api_time=9, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-10-24 11:15:00] [NORMAL ] Poller status: status, secs=14400, api_time=9, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-10-24 15:15:00] [NORMAL ] Poller status: status, secs=14400, api_time=10, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-10-24 19:15:00] [NORMAL ] Poller status: status, secs=14400, api_time=10, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-10-24 23:15:00] [NORMAL ] Poller status: status, secs=14400, api_time=11, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-10-25 03:15:00] [NORMAL ] Poller status: status, secs=14400, api_time=11, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-10-25 07:15:00] [NORMAL ] Poller status: status, secs=14400, api_time=10, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-10-25 09:35:18] [NORMAL ] WORKER STARTED [Version: 1.2.2] [Conf: netapp-harvest.conf] [Poller: S1WPVJSAN02]
[2016-10-25 09:35:18] [NORMAL ] [main] Poller will monitor a [OCUM] at [10.9.239.129:443]
[2016-10-25 09:35:18] [NORMAL ] [main] Poller will use [password] authentication with username [netapp-harvest] and password [**********]
[2016-10-25 09:35:18] [NORMAL ] [main] Collection of system info from [10.9.239.129] running [6.4P2] successful.
[2016-10-25 09:35:18] [NORMAL ] [main] Using best-fit collection template: [ocum-6.4.0.conf]
[2016-10-25 09:35:18] [NORMAL ] [main] Calculated graphite_root [netapp.capacity.Denver.s1wclust01] for host [s1wclust01]
[2016-10-25 09:35:18] [NORMAL ] [main] Using graphite_meta_metrics_root [netapp.poller.capacity.Denver.S1WPVJSAN02]
[2016-10-25 09:35:18] [NORMAL ] [main] Startup complete.  Polling for new data every [900] seconds.
[2016-10-25 13:45:00] [NORMAL ] Poller status: status, secs=14982, api_time=11, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-10-25 17:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=12, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-10-25 21:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=12, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-10-26 01:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=10, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-10-26 05:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=11, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-10-26 09:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=11, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-10-26 13:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=12, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-10-26 17:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=11, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-10-26 21:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=10, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-10-27 01:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=9, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-10-27 05:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=10, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-10-27 09:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=10, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-10-27 13:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=10, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-10-27 17:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=11, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-10-27 21:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=10, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-10-28 01:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=10, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-10-28 05:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=9, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-10-28 09:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=9, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-10-28 13:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=10, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-10-28 17:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=10, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-10-28 21:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=9, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-10-29 01:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=9, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-10-29 05:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=9, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-10-29 09:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=10, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-10-29 13:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=9, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-10-29 17:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=9, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-10-29 21:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=10, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-10-30 01:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=11, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-10-30 05:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=10, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-10-30 09:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=9, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-10-30 13:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=9, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-10-30 17:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=9, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-10-30 21:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=10, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-10-31 01:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=10, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-10-31 05:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=9, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-10-31 09:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=10, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-10-31 13:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=10, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-10-31 17:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=10, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-10-31 21:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=10, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-11-01 01:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=10, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-11-01 05:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=9, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-11-01 09:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=10, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-11-01 13:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=10, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-11-01 17:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=10, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-11-01 21:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=10, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-11-02 01:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=9, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-11-02 05:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=9, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-11-02 09:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=10, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-11-02 13:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=9, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-11-02 17:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=9, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-11-02 21:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=10, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-11-03 01:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=9, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-11-03 05:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=9, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-11-03 09:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=10, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-11-03 13:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=10, plugin_time=0, metrics=16240, skips=0, fails=0
[2016-11-03 17:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=10, plugin_time=0, metrics=16294, skips=0, fails=0
[2016-11-03 21:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=9, plugin_time=0, metrics=16336, skips=0, fails=0
[2016-11-04 01:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=10, plugin_time=0, metrics=16336, skips=0, fails=0
[2016-11-04 05:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=9, plugin_time=0, metrics=16336, skips=0, fails=0
[2016-11-04 09:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=10, plugin_time=0, metrics=16336, skips=0, fails=0
[2016-11-04 13:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=9, plugin_time=0, metrics=16336, skips=0, fails=0
[2016-11-04 17:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=9, plugin_time=0, metrics=16357, skips=0, fails=0
[2016-11-04 21:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=9, plugin_time=0, metrics=16384, skips=0, fails=0
[2016-11-05 01:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=9, plugin_time=0, metrics=16384, skips=0, fails=0
[2016-11-05 05:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=9, plugin_time=0, metrics=16384, skips=0, fails=0
[2016-11-05 09:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=10, plugin_time=0, metrics=16384, skips=0, fails=0
[2016-11-05 13:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=10, plugin_time=0, metrics=16384, skips=0, fails=0
[2016-11-05 17:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=10, plugin_time=0, metrics=16384, skips=0, fails=0
[2016-11-05 21:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=10, plugin_time=0, metrics=16384, skips=0, fails=0
[2016-11-06 01:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=11, plugin_time=0, metrics=16384, skips=0, fails=0
[2016-11-06 04:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=10, plugin_time=0, metrics=16384, skips=0, fails=0
[2016-11-06 08:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=10, plugin_time=0, metrics=16384, skips=0, fails=0
[2016-11-06 12:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=10, plugin_time=0, metrics=16384, skips=0, fails=0
[2016-11-06 16:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=9, plugin_time=0, metrics=16384, skips=0, fails=0
[2016-11-06 20:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=9, plugin_time=0, metrics=16384, skips=0, fails=0
[2016-11-07 00:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=10, plugin_time=0, metrics=16384, skips=0, fails=0
[2016-11-07 04:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=9, plugin_time=0, metrics=16384, skips=0, fails=0
[2016-11-07 08:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=10, plugin_time=0, metrics=16384, skips=0, fails=0
[2016-11-07 12:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=10, plugin_time=0, metrics=16417, skips=0, fails=0
[2016-11-07 16:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=10, plugin_time=0, metrics=16432, skips=0, fails=0
[2016-11-07 20:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=10, plugin_time=0, metrics=16432, skips=0, fails=0
[2016-11-08 00:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=10, plugin_time=0, metrics=16432, skips=0, fails=0
[2016-11-08 04:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=9, plugin_time=0, metrics=16432, skips=0, fails=0
[2016-11-08 08:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=9, plugin_time=0, metrics=16432, skips=0, fails=0
[2016-11-08 12:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=10, plugin_time=0, metrics=16432, skips=0, fails=0
[2016-11-08 16:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=10, plugin_time=0, metrics=16432, skips=0, fails=0
[2016-11-08 20:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=10, plugin_time=0, metrics=16432, skips=0, fails=0
[2016-11-09 00:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=9, plugin_time=0, metrics=16432, skips=0, fails=0
[2016-11-09 04:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=9, plugin_time=0, metrics=16432, skips=0, fails=0
[2016-11-09 08:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=10, plugin_time=0, metrics=16432, skips=0, fails=0
[2016-11-09 12:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=9, plugin_time=0, metrics=16432, skips=0, fails=0
[2016-11-09 16:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=10, plugin_time=0, metrics=17079, skips=0, fails=0
[2016-11-09 20:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=10, plugin_time=0, metrics=18416, skips=0, fails=0
[2016-11-10 00:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=10, plugin_time=0, metrics=18416, skips=0, fails=0
[2016-11-10 04:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=10, plugin_time=0, metrics=18416, skips=0, fails=0
[2016-11-10 08:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=10, plugin_time=0, metrics=18416, skips=0, fails=0
[2016-11-10 12:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=10, plugin_time=0, metrics=18416, skips=0, fails=0
[2016-11-10 16:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=10, plugin_time=0, metrics=18416, skips=0, fails=0
[2016-11-10 20:45:00] [NORMAL ] Poller status: status, secs=14400, api_time=9, plugin_time=0, metrics=18416, skips=0, fails=0
[2016-11-29 15:24:36] [NORMAL ] WORKER STARTED [Version: 1.3] [Conf: netapp-harvest.conf] [Poller: S1WPVJSAN02]
[2016-11-29 15:24:36] [NORMAL ] [main] Poller will monitor a [OCUM] at [10.9.239.129:443]
[2016-11-29 15:24:36] [NORMAL ] [main] Poller will use [password] authentication with username [netapp-harvest] and password [**********]
[2016-11-29 15:24:37] [NORMAL ] [main] Collection of system info from [10.9.239.129] running [6.4P2] successful.
[2016-11-29 15:24:37] [NORMAL ] [main] Found best-fit monitoring template (same generation and major release, minor same or less): [ocum-6.4.0.conf]
[2016-11-29 15:24:37] [NORMAL ] [main] Added and/or merged monitoring template [/opt/netapp-harvest/template/default/ocum-6.4.0.conf]
[2016-11-29 15:24:37] [NORMAL ] [main] Metrics for cluster [s1wclust01] will be submitted with graphite_root [netapp.capacity.Denver.s1wclust01]
[2016-11-29 15:24:37] [NORMAL ] [main] Using graphite_meta_metrics_root [netapp.poller.capacity.Denver.S1WPVJSAN02]
[2016-11-29 15:24:37] [NORMAL ] [main] Startup complete.  Polling for new data every [900] seconds.

madden

Hi @rcasero

 

Are you sure the poller is running?  There are no log updates since 29-Nov, and there should always be at least one status entry every 4 hours.
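One quick way to confirm when the poller last wrote anything is to check the logfile's modification time (an illustrative check, using the logfile path shown further down in this post):

ls -l /opt/netapp-harvest/log/S1WPVJSAN02_netapp-harvest.log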

 

Check if it is running:

 

/opt/netapp-harvest/netapp-manager -status

 

If not, start with this and wait at least 15 minutes to see if metrics are now coming in:

 

/opt/netapp-harvest/netapp-manager -start

 

If there are still no metrics, restart in verbose mode, wait 20 minutes, and then restart again in normal mode:

 

/opt/netapp-harvest/netapp-manager -restart -poller S1WPVJSAN02 -v

<wait 20 minutes>
/opt/netapp-harvest/netapp-manager -restart -poller S1WPVJSAN02

Then provide the logfile /opt/netapp-harvest/log/S1WPVJSAN02_netapp-harvest.log.  You can also send this to me in a private message.
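While waiting, a simple way to watch the verbose log in real time is to tail it (same default log location as above):

tail -f /opt/netapp-harvest/log/S1WPVJSAN02_netapp-harvest.log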

 

Cheers,
Chris Madden

Solution Architect - 3rd Platform - Systems Engineering NetApp EMEA (and author of Harvest)

Blog: It all begins with data

 

If this post resolved your issue, please help others by selecting ACCEPT AS SOLUTION or adding a KUDO or both!

 

 

 

rcasero

I have attached the log, still nothing... I did not see anything that stood out... OK, I cannot attach the log file... not sure why...

 

root@s1lsnvcp1:~# /opt/netapp-harvest/netapp-manager -status

STATUS          POLLER               GROUP               

############### #################### ##################  

[RUNNING]       S1W8040CTL01         Denver              

[RUNNING]       S1W8040CTL02         Denver              

[RUNNING]       S1WPVJSAN02          Denver              

[RUNNING]       s1wclust01           Denver 

 

Below is the output of the netapp-harvest.conf file; for some crazy reason I don't remember it having so many lines...

 

root@s1lsnvcp1:/opt/netapp-harvest# cat  netapp-harvest.conf*
##
## Configuration file for NetApp Harvest
##
## Create a section header and then populate with key/value parameters
## for each system to monitor.  Lines can be commented out by preceding them
## with a hash symbol ('#').  Values in all capitals should be replaced with
## your values, all other values can be left as-is to use defaults
##
## There are two reserved section names:
## [global]  - Global key/value pairs for installation
## [default] - Any key/value pairs specified here will be the default
##             value for a poller should it not be listed in a poller section.
##

##
## Global reserved section
##

[global]
grafana_api_key = eyJrIjoiSG9GRG5MMTBlU1h4SzA5Ym1sZ09tWklPYlk0Q1ZCV0giLCJuIjoiTmV0QXBwLUhhcnZlc3QiLCJpZCI6MX0=
grafana_url = https://10.9.221.8:443
grafana_dl_tag = 

##
## Default reserved section
##

[default]
#====== Graphite server setup defaults ========================================
graphite_enabled  = 1
graphite_server   = 10.9.221.8
graphite_port     = 2003
graphite_proto    = tcp
normalized_xfer   = mb_per_sec
normalized_time   = millisec
graphite_root     =  default
graphite_meta_metrics_root  = default

#====== Polled host setup defaults ============================================
host_type         = FILER
host_port         = 443
host_enabled      = 1
template          = default
data_update_freq  = 60
ntap_autosupport  = 0
latency_io_reqd   = 10
auth_type         = password
username          = netapp-harvest
password          = h1d3fvm1
ssl_cert          = INSERT_PEM_FILE_NAME_HERE
ssl_key           = INSERT_KEY_FILE_NAME_HERE

##
## Monitored host examples - Use one section like the below for each monitored host
##

#====== 7DOT (node) or cDOT (cluster LIF) for performance info ================
#
[s1wclust01]
hostname       = 10.9.220.64
site           = Denver

[S1W8040CTL01]
hostname = 10.9.219.63
site = Denver

[S1W8040CTL02]
hostname = 10.9.219.65
site = Denver

#====== OnCommand Unified Manager (OCUM) for cDOT capacity info ===============
#
[S1WPVJSAN02]
hostname          = 10.9.239.129
site              = Denver
host_type         = OCUM
data_update_freq  = 900
normalized_xfer   = gb_per_sec
##
## Configuration file for NetApp Harvest
##
## This file is organized into multiple sections, each with a [] header
##
## There are two reserved section names:
##  [global]  - Global key/value pairs for installation
##  [default] - Any key/value pairs specified here will be the default
##              value for a poller should it not be listed in a poller section.
##
## Any other section names are for your own pollers:
##  [cluster-name]     - cDOT cluster (match name from cluster CLI prompt)
##  [7-mode-node-name] - 7-mode node name (match name from 7-mode CLI prompt)
##  [OCUM-hostname]    - OCUM server hostname (match hostname set to system)

## Quick Start Instructions:
## 1. Edit the [global] and [default] sections and replace values in all
##    capital letters to match your installation details
## 2. For each system to monitor add a section header and populate with
##    key/value parameters for it.
## 3. Start all pollers that are not running: /opt/netapp-harvest/netapp-manager start
##
## Note: Full instructions and list of all available key/value pairs is found in the
##       NetApp Harvest Administration Guide

##
#### Global section for installation wide settings
##
[global]
grafana_api_key   = INSERT_LONG_KEY_HERE
grafana_url       = INSERT_URL_OF_GRAFANA_WEB_INTERFACE_HERE

##
#### Default section to set defaults for any user created poller section
##
[default]
graphite_server   = INSERT_IP_OR_HOSTNAME_OF_GRAPHITE_SERVER_HERE
username          = INSERT_USERNAME_HERE
password          = INSERT_PASSWORD_HERE

## If using ssl_cert (and not password auth)
## uncomment and populate next three lines
# auth_type         = ssl_cert
# ssl_cert          = INSERT_PEM_FILE_NAME_HERE
# ssl_key           = INSERT_KEY_FILE_NAME_HERE

##
#### Poller sections; Add one section for each cDOT cluster, 7-mode node, or OCUM server
#### If any keys are different from those in default duplicate them in the poller section to override.
##

# [INSERT_CLUSTER_OR_CONTROLLER_NAME_HERE_EXACTLY_AS_SHOWN_FROM_CLI_PROMPT]
# hostname       = INSERT_IP_ADDRESS_OR_HOSTNAME_OF_CONTROLLER_OR_CLUSTER_LIF_HERE
# group          = INSERT_GROUP_IDENTIFIER_HERE

# [INSERT_OCUM_SERVER_NAME_HERE]
# hostname          = INSERT_IP_ADDRESS_OR_HOSTNAME_OF_OCUM_SERVER
# group             = INSERT_GROUP_IDENTIFIER_HERE
# host_type         = OCUM
# data_update_freq  = 900
# normalized_xfer   = gb_per_sec

 

rcasero

If anyone can help with uploading the log file, it would be greatly appreciated... I browse to the file, but it seems that it just won't post...

madden

Hi @rcasero

 

The conf file looks OK; you did "cat  netapp-harvest.conf*", which would pick up both the conf file and the conf.example file 🙂
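To see exactly which files that glob matched, and to dump only the active config, something like this works (a small illustrative sketch using standard shell commands):

# List every file the wildcard expands to
ls -l /opt/netapp-harvest/netapp-harvest.conf*
# Show only the active configuration file
cat /opt/netapp-harvest/netapp-harvest.conf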

 

I will contact you via private message to troubleshoot this further.

 

Cheers,
Chris Madden

Solution Architect - 3rd Platform - Systems Engineering NetApp EMEA (and author of Harvest)

Blog: It all begins with data

 

If this post resolved your issue, please help others by selecting ACCEPT AS SOLUTION or adding a KUDO or both!

 

 
