Active IQ Unified Manager Discussions

NetApp Harvest Graphite Issue

rcasero
6,985 Views

I have been trying to resolve this in numerous places. I'm having a continuous issue trying to view capacity in Grafana... I have tried almost everything I have seen from Chris and numerous other forums, but with no luck.

 

Below are my configs and the different outputs...

 

----------------------------------------------------------------------------------------

root@s1lsnvcp1:/opt/netapp-harvest# curl -k -v https://10.9.239.129

* Rebuilt URL to: https://10.9.239.129/

* Hostname was NOT found in DNS cache

*   Trying 10.9.239.129...

* Connected to 10.9.239.129 (10.9.239.129) port 443 (#0)

* successfully set certificate verify locations:

*   CAfile: none

  CApath: /etc/ssl/certs

* SSLv3, TLS handshake, Client hello (1):

* SSLv3, TLS handshake, Server hello (2):

* SSLv3, TLS handshake, CERT (11):

* SSLv3, TLS handshake, Server key exchange (12):

* SSLv3, TLS handshake, Server finished (14):

* SSLv3, TLS handshake, Client key exchange (16):

* SSLv3, TLS change cipher, Client hello (1):

* SSLv3, TLS handshake, Finished (20):

* SSLv3, TLS change cipher, Client hello (1):

* SSLv3, TLS handshake, Finished (20):

* SSL connection using ECDHE-RSA-AES128-SHA256

* Server certificate:

*  subject: CN=S1WPVJSAN02.US1.autonation.com

*  start date: 2016-01-28 15:48:48 GMT

*  expire date: 2021-01-28 15:48:48 GMT

*  issuer: CN=S1WPVJSAN02.US1.autonation.com

*  SSL certificate verify result: self signed certificate (18), continuing anyway.

> GET / HTTP/1.1

> User-Agent: curl/7.35.0

> Host: 10.9.239.129

> Accept: */*

< HTTP/1.1 301 Moved Permanently

< Cache-Control: no-cache, no-store, must-revalidate

< Pragma: no-cache

< Expires: 0

< Location: /um/?redirectUrl=/

< Date: Fri, 30 Sep 2016 14:46:46 GMT

< Connection: keep-alive

< Transfer-Encoding: chunked

* Connection #0 to host 10.9.239.129 left intact

root@s1lsnvcp1:/opt/netapp-harvest#

 

----------------------------------------------------------------------------------------

 

Poller status:

 

root@s1lsnvcp1:/opt/netapp-harvest/log# service netapp-harvest status
STATUS          POLLER               SITE
############### #################### ##################
[RUNNING]       S1W8040CTL01         Denver
[RUNNING]       S1W8040CTL02         Denver
[RUNNING]       S1WCLUST01           Denver
[RUNNING]       s1wclust01           Denver

 

Log Output:

 

root@s1lsnvcp1:/opt/netapp-harvest/log# cat s1wclust01_netapp-harvest.log
[2016-09-27 12:14:54] [NORMAL ] WORKER STARTED [Version: 1.2.2] [Conf: netapp-harvest.conf] [Poller: s1wclust01]
[2016-09-27 12:14:54] [WARNING] Started in foreground mode; messages to STDERR are redirected to the logfile and are not visible on the console.
[2016-09-27 12:14:54] [NORMAL ] [main] Poller will monitor a [OCUM] at [10.9.239.129:443]
[2016-09-27 12:14:54] [NORMAL ] [main] Poller will use [password] authentication with username [netapp-harvest] and password [**********]
[2016-09-27 12:14:55] [NORMAL ] [main] Collection of system info from [10.9.239.129] running [6.3] successful.
[2016-09-27 12:14:55] [NORMAL ] [main] Using best-fit collection template: [ocum-6.3.0.conf]
[2016-09-27 12:14:55] [NORMAL ] [main] Calculated graphite_root [netapp.capacity.Denver.s1wclust01] for host [s1wclust01]
[2016-09-27 12:14:55] [NORMAL ] [main] Using graphite_meta_metrics_root [netapp.poller.capacity.Denver.s1wclust01]
[2016-09-27 12:14:55] [NORMAL ] [main] Startup complete.  Polling for new data every [900] seconds.
[2016-09-27 12:15:58] [NORMAL ] WORKER STARTED [Version: 1.2.2] [Conf: netapp-harvest.conf] [Poller: s1wclust01]
[2016-09-27 12:15:58] [WARNING] Started in foreground mode; messages to STDERR are redirected to the logfile and are not visible on the console.
[2016-09-27 12:15:58] [NORMAL ] [main] Poller will monitor a [OCUM] at [10.9.239.129:443]
[2016-09-27 12:15:58] [NORMAL ] [main] Poller will use [password] authentication with username [netapp-harvest] and password [**********]
[2016-09-27 12:15:58] [NORMAL ] [main] Collection of system info from [10.9.239.129] running [6.3] successful.
[2016-09-27 12:15:58] [NORMAL ] [main] Using best-fit collection template: [ocum-6.3.0.conf]
[2016-09-27 12:15:58] [NORMAL ] [main] Calculated graphite_root [netapp.capacity.Denver.s1wclust01] for host [s1wclust01]
[2016-09-27 12:15:58] [NORMAL ] [main] Using graphite_meta_metrics_root [netapp.poller.capacity.Denver.s1wclust01]
[2016-09-27 12:15:58] [NORMAL ] [main] Startup complete.  Polling for new data every [900] seconds.
[2016-09-27 12:16:37] [NORMAL ] WORKER STARTED [Version: 1.2.2] [Conf: netapp-harvest.conf] [Poller: s1wclust01]
[2016-09-27 12:16:37] [NORMAL ] [main] Poller will monitor a [OCUM] at [10.9.239.129:443]
[2016-09-27 12:16:37] [NORMAL ] [main] Poller will use [password] authentication with username [netapp-harvest] and password [**********]
[2016-09-27 12:16:37] [NORMAL ] [main] Collection of system info from [10.9.239.129] running [6.3] successful.
[2016-09-27 12:16:37] [NORMAL ] [main] Using best-fit collection template: [ocum-6.3.0.conf]
[2016-09-27 12:16:37] [NORMAL ] [main] Calculated graphite_root [netapp.capacity.Denver.s1wclust01] for host [s1wclust01]
[2016-09-27 12:16:37] [NORMAL ] [main] Using graphite_meta_metrics_root [netapp.poller.capacity.Denver.s1wclust01]
[2016-09-27 12:16:37] [NORMAL ] [main] Startup complete.  Polling for new data every [900] seconds.
[2016-09-27 12:58:36] [NORMAL ] WORKER STARTED [Version: 1.2.2] [Conf: netapp-harvest.conf] [Poller: s1wclust01]
[2016-09-27 12:58:37] [NORMAL ] [main] Poller will monitor a [OCUM] at [10.9.239.129:443]
[2016-09-27 12:58:37] [NORMAL ] [main] Poller will use [password] authentication with username [netapp-harvest] and password [**********]
[2016-09-27 12:58:37] [NORMAL ] [main] Collection of system info from [10.9.239.129] running [6.3] successful.
[2016-09-27 12:58:37] [NORMAL ] [main] Using best-fit collection template: [ocum-6.3.0.conf]
[2016-09-27 12:58:37] [NORMAL ] [main] Calculated graphite_root [netapp.capacity.Denver.s1wclust01] for host [s1wclust01]
[2016-09-27 12:58:37] [NORMAL ] [main] Using graphite_meta_metrics_root [netapp.poller.capacity.Denver.s1wclust01]
[2016-09-27 12:58:37] [NORMAL ] [main] Startup complete.  Polling for new data every [900] seconds.

 

----------------------------------------------------------------------------------------

 

netapp-harvest.conf output:

 


##
## Monitored host examples - Use one section like the below for each monitored host
##

#====== 7DOT (node) or cDOT (cluster LIF) for performance info ================
#
[S1WCLUST01]
hostname       = 10.9.220.64
site           = Denver

[S1W8040CTL01]
hostname        = 10.9.219.63
site            = Denver


[S1W8040CTL02]
hostname        = 10.9.219.65
site            = Denver

#====== OnCommand Unified Manager (OCUM) for cDOT capacity info ===============
#
[s1wclust01]
hostname          = 10.9.239.129
site              = Denver
host_type         = OCUM
data_update_freq  = 900
normalized_xfer   = gb_per_sec

 

 

Thank you in advance..

1 ACCEPTED SOLUTION

rcasero
6,684 Views

View solution in original post

8 REPLIES 8

Jeff_Yao
6,901 Views

Maybe open a case?

rcasero
6,827 Views

Good point. I thought NetApp would not support this, as it is open source. But worth a try.

madden
6,846 Views

Hi @rcasero

 

Here are two examples from your config:

 

 

 

[S1WCLUST01]
hostname       = 10.9.220.64
site           = Denver

[s1wclust01]
hostname          = 10.9.239.129
site              = Denver
host_type         = OCUM
data_update_freq  = 900
normalized_xfer   = gb_per_sec

 

Inside the [ ] should be the cluster name (ONTAP) or hostname (OCUM), and it is case sensitive.  So my guess is the cluster name should be in lower case (check the CLI prompt of your cluster), and the hostname of the OCUM server is incorrect (here it matches the cluster name, but in lower case).  The OCUM poller submits metrics for each cluster known to OCUM that has a corresponding poller entry, but if the case differs it won't match and you get no capacity metrics.
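
For example, if your cluster's CLI prompt shows s1wclust01::> then the perf section header should use that exact lower-case name; a sketch based on your posted config:

[s1wclust01]
hostname       = 10.9.220.64
site           = Denver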

 

If you start the poller with the -v option I think you will see some WARNING messages that hint at this problem.
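
For example, assuming the default install path:

/opt/netapp-harvest/netapp-worker -poller s1wclust01 -v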

 

Because your perf metrics are flowing in OK as-is, do the following to update the cluster name to lower case and allow the OCUM matching to work:

1) Stop the poller "service netapp-harvest stop"

2) For each cluster that you need to update, rename the cluster-name directories on the Graphite server with something like (see the quick check after this list)
"mv /opt/graphite/storage/whisper/netapp/perf/Denver/S1WCLUST01 /opt/graphite/storage/whisper/netapp/perf/Denver/s1wclust01"

and

"mv /opt/graphite/storage/whisper/netapp/poller/perf/Denver/S1WCLUST01 /opt/graphite/storage/whisper/netapp/poller/perf/Denver/s1wclust01"

3) Edit the netapp-harvest.conf file and correct the names to lower case for the ONTAP systems

4) Start the poller "service netapp-harvest start"
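
To confirm the renames stuck, a quick check like this should come back empty (adjust the whisper root if yours is /var/lib/graphite/whisper rather than /opt/graphite/storage/whisper):

find /opt/graphite/storage/whisper/netapp -type d -name 'S1WCLUST01'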

 

 

 

Cheers,
Chris Madden

Solution Architect - 3rd Platform - Systems Engineering NetApp EMEA (and author of Harvest)

Blog: It all begins with data

 

If this post resolved your issue, please help others by selecting ACCEPT AS SOLUTION or adding a KUDO or both!

rcasero
6,834 Views

Hi Chris, thank you for your response.

 

I made the changes but it did not work; see below for the output with the -v option when starting the poller.

 

root@s1lsnvcp1:/var/lib/graphite/whisper/netapp/poller/perf/Denver# service netapp-harvest start
STATUS          POLLER               SITE
############### #################### ##################
[STARTED]       S1W8040CTL01         Denver
[STARTED]       S1W8040CTL02         Denver
[STARTED]       s1wclust01           Denver
root@s1lsnvcp1:/var/lib/graphite/whisper/netapp/poller/perf/Denver# service netapp-harvest status
STATUS          POLLER               SITE
############### #################### ##################
[RUNNING]       S1W8040CTL01         Denver
[RUNNING]       S1W8040CTL02         Denver
[NOT RUNNING]   s1wclust01           Denver
root@s1lsnvcp1:/var/lib/graphite/whisper/netapp/poller/perf/Denver# /opt/netapp-harvest/netapp-worker -poller s1wclust01 -v
[2016-10-11 13:14:01] [NORMAL ] WORKER STARTED [Version: 1.2.2] [Conf: netapp-harvest.conf] [Poller: s1wclust01]
[2016-10-11 13:14:01] [WARNING] Started in foreground mode; messages to STDERR are redirected to the logfile and are not visible on the console.
[2016-10-11 13:14:01] [DEBUG  ] [conf] Line [19] is Section [global]
[2016-10-11 13:14:01] [DEBUG  ] [conf] Line [20] in Section [global] has Key/Value pair [grafana_api_key]=[eyJrIjoiSG9GRG5MMTBlU1h4SzA5Ym1sZ09tWklPYlk0Q1ZCV0giLCJuIjoiTmV0QXBwLUhhcnZlc3QiLCJpZCI6MX0=]
[2016-10-11 13:14:01] [DEBUG  ] [conf] Line [21] in Section [global] has Key/Value pair [grafana_url]=[https://10.9.221.8:443]
[2016-10-11 13:14:01] [DEBUG  ] [conf] Line [22] in Section [global] has Key/Value pair [grafana_dl_tag]=[]
[2016-10-11 13:14:01] [DEBUG  ] [conf] Line [28] is Section [default]
[2016-10-11 13:14:01] [DEBUG  ] [conf] Line [30] in Section [default] has Key/Value pair [graphite_enabled]=[1]
[2016-10-11 13:14:01] [DEBUG  ] [conf] Line [31] in Section [default] has Key/Value pair [graphite_server]=[10.9.221.8]
[2016-10-11 13:14:01] [DEBUG  ] [conf] Line [32] in Section [default] has Key/Value pair [graphite_port]=[2003]
[2016-10-11 13:14:01] [DEBUG  ] [conf] Line [33] in Section [default] has Key/Value pair [graphite_proto]=[tcp]
[2016-10-11 13:14:01] [DEBUG  ] [conf] Line [34] in Section [default] has Key/Value pair [normalized_xfer]=[mb_per_sec]
[2016-10-11 13:14:01] [DEBUG  ] [conf] Line [35] in Section [default] has Key/Value pair [normalized_time]=[millisec]
[2016-10-11 13:14:01] [DEBUG  ] [conf] Line [36] in Section [default] has Key/Value pair [graphite_root]=[default]
[2016-10-11 13:14:01] [DEBUG  ] [conf] Line [37] in Section [default] has Key/Value pair [graphite_meta_metrics_root]=[default]
[2016-10-11 13:14:01] [DEBUG  ] [conf] Line [40] in Section [default] has Key/Value pair [host_type]=[FILER]
[2016-10-11 13:14:01] [DEBUG  ] [conf] Line [41] in Section [default] has Key/Value pair [host_port]=[443]
[2016-10-11 13:14:01] [DEBUG  ] [conf] Line [42] in Section [default] has Key/Value pair [host_enabled]=[1]
[2016-10-11 13:14:01] [DEBUG  ] [conf] Line [43] in Section [default] has Key/Value pair [template]=[default]
[2016-10-11 13:14:01] [DEBUG  ] [conf] Line [44] in Section [default] has Key/Value pair [data_update_freq]=[60]
[2016-10-11 13:14:01] [DEBUG  ] [conf] Line [45] in Section [default] has Key/Value pair [ntap_autosupport]=[0]
[2016-10-11 13:14:01] [DEBUG  ] [conf] Line [46] in Section [default] has Key/Value pair [latency_io_reqd]=[10]
[2016-10-11 13:14:01] [DEBUG  ] [conf] Line [47] in Section [default] has Key/Value pair [auth_type]=[password]
[2016-10-11 13:14:01] [DEBUG  ] [conf] Line [48] in Section [default] has Key/Value pair [username]=[netapp-harvest]
[2016-10-11 13:14:01] [DEBUG  ] [conf] Line [49] in Section [default] has Key/Value pair [password]=[**********]
[2016-10-11 13:14:01] [DEBUG  ] [conf] Line [50] in Section [default] has Key/Value pair [ssl_cert]=[INSERT_PEM_FILE_NAME_HERE]
[2016-10-11 13:14:01] [DEBUG  ] [conf] Line [51] in Section [default] has Key/Value pair [ssl_key]=[INSERT_KEY_FILE_NAME_HERE]
[2016-10-11 13:14:01] [DEBUG  ] [conf] Line [60] is Section [s1wclust01]
[2016-10-11 13:14:01] [DEBUG  ] [conf] Line [61] in Section [s1wclust01] has Key/Value pair [hostname]=[10.9.220.64]
[2016-10-11 13:14:01] [DEBUG  ] [conf] Line [62] in Section [s1wclust01] has Key/Value pair [site]=[Denver]
[2016-10-11 13:14:01] [DEBUG  ] [conf] Line [64] is Section [S1W8040CTL01]
[2016-10-11 13:14:01] [DEBUG  ] [conf] Line [65] in Section [S1W8040CTL01] has Key/Value pair [hostname]=[10.9.219.63]
[2016-10-11 13:14:01] [DEBUG  ] [conf] Line [66] in Section [S1W8040CTL01] has Key/Value pair [site]=[Denver]
[2016-10-11 13:14:01] [DEBUG  ] [conf] Line [69] is Section [S1W8040CTL02]
[2016-10-11 13:14:01] [DEBUG  ] [conf] Line [70] in Section [S1W8040CTL02] has Key/Value pair [hostname]=[10.9.219.65]
[2016-10-11 13:14:01] [DEBUG  ] [conf] Line [71] in Section [S1W8040CTL02] has Key/Value pair [site]=[Denver]
[2016-10-11 13:14:01] [DEBUG  ] [conf] Line [75] is Section [s1wclust01]
[2016-10-11 13:14:01] [DEBUG  ] [conf] Line [76] in Section [s1wclust01] has Key/Value pair [hostname]=[10.9.239.129]
[2016-10-11 13:14:01] [DEBUG  ] [conf] Line [77] in Section [s1wclust01] has Key/Value pair [site]=[Denver]
[2016-10-11 13:14:01] [DEBUG  ] [conf] Line [78] in Section [s1wclust01] has Key/Value pair [host_type]=[OCUM]
[2016-10-11 13:14:01] [DEBUG  ] [conf] Line [79] in Section [s1wclust01] has Key/Value pair [data_update_freq]=[900]
[2016-10-11 13:14:01] [DEBUG  ] [conf] Line [80] in Section [s1wclust01] has Key/Value pair [normalized_xfer]=[gb_per_sec]
[2016-10-11 13:14:01] [NORMAL ] [main] Poller will monitor a [OCUM] at [10.9.239.129:443]
[2016-10-11 13:14:01] [NORMAL ] [main] Poller will use [password] authentication with username [netapp-harvest] and password [**********]
[2016-10-11 13:14:01] [DEBUG  ] [connect] Using HTTP/1.0 for communication (either set earlier or only version supported by SDK).
[2016-10-11 13:14:01] [DEBUG  ] [sysinfo] Updating system-info cache
[2016-10-11 13:14:01] [DEBUG  ] [sysinfo] Discovered [s1wclust01] on OCUM server and found conf section with site [Denver]
[2016-10-11 13:14:01] [NORMAL ] [main] Collection of system info from [10.9.239.129] running [6.4P2] successful.
[2016-10-11 13:14:01] [DEBUG  ] [main] Found possible default template file for product [ocum]: ocum-6.3.0.conf (6, 3, 0)
[2016-10-11 13:14:01] [DEBUG  ] [main] Found possible default template file for product [ocum]: ocum-6.1.0.conf (6, 1, 0)
[2016-10-11 13:14:01] [DEBUG  ] [main] Found possible default template file for product [ocum]: ocum-6.2.0.conf (6, 2, 0)
[2016-10-11 13:14:01] [ERROR  ] [main] No best-fit collection template found (same generation and major release, minor same or less) found in [/opt/netapp-harvest/template/default].  Exiting;

 

 

It seems that, even though I rename the directories to lower case, it puts them back...

 

Please help...

madden
6,818 Views

Hi @rcasero

 

 

It looks like you now have two pollers (i.e. the name inside the [ ]) defined with the name s1wclust01, one at line 60 and another at line 75.  Poller names must be unique, so this won't work as expected.  I think you need to rename the OCUM poller entry (line 75) to the actual hostname of your OCUM server, which is presumably different from the cluster name.
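
As a sketch, assuming your OCUM server's hostname is S1WPVJSAN02 (a guess from the certificate CN in your earlier curl output; substitute the real hostname), the OCUM section would become:

[S1WPVJSAN02]
hostname          = 10.9.239.129
site              = Denver
host_type         = OCUM
data_update_freq  = 900
normalized_xfer   = gb_per_sec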

 

I also see that you upgraded your OCUM server in the meantime.  The Harvest version on the toolchest (v1.2.2) doesn't support this version yet natively, but there is an easy workaround:

http://community.netapp.com/t5/OnCommand-Storage-Management-Software-Discussions/Harvest-amp-OCUM6-4RC1-gt-No-Template-found/m-p/118397/highlight/true...

 

So my advice:

 

1) Stop harvest "service netapp-harvest stop"

2) Update your netapp-harvest.conf file so that the poller names are unique and accurate to the cluster name and the OCUM hostname

3) Copy an OCUM 6.4 template into place: "cp /opt/netapp-harvest/template/default/ocum-6.3.0.conf /opt/netapp-harvest/template/default/ocum-6.4.0.conf"

4) Start harvest "service netapp-harvest start" (then check the log as shown below)
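
Afterwards you can watch the OCUM poller's log for errors; the filename follows the <poller>_netapp-harvest.log pattern, so with the hostname above (again, an assumption) that would be:

tail -f /opt/netapp-harvest/log/S1WPVJSAN02_netapp-harvest.log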

 

 

Cheers,
Chris Madden

Solution Architect - 3rd Platform - Systems Engineering NetApp EMEA (and author of Harvest)

Blog: It all begins with data

 

If this post resolved your issue, please help others by selecting ACCEPT AS SOLUTION or adding a KUDO or both!

rcasero
6,806 Views

OK, it looks a little better. I still have not seen any data from the volumes, but I did notice that even after I renamed the directories to lower case as you mentioned, once I restarted netapp-harvest it re-added them in upper case. Here is the output.

 

root@s1lsnvcp1:/var/lib/graphite/whisper/netapp/poller/perf/Denver# ll
total 16
drwxr-xr-x  4 _graphite _graphite 4096 Oct 11 14:57 ./
drwxr-xr-x  3 _graphite _graphite 4096 Oct 10 10:38 ../
drwxr-xr-x 17 _graphite _graphite 4096 Oct 11 13:21 s1wclust01/
drwxr-xr-x 13 _graphite _graphite 4096 Oct 11 15:00 S1WCLUST01/
root@s1lsnvcp1:/var/lib/graphite/whisper/netapp/poller/perf/Denver# /opt/netapp-harvest/netapp-worker -poller S1WPVJSAN02 -v
[2016-10-11 15:01:52] [NORMAL ] WORKER STARTED [Version: 1.2.2] [Conf: netapp-harvest.conf] [Poller: S1WPVJSAN02]
[2016-10-11 15:01:52] [WARNING] Started in foreground mode; messages to STDERR are redirected to the logfile and are not visible on the console.
[2016-10-11 15:01:52] [DEBUG  ] [conf] Line [19] is Section [global]
[2016-10-11 15:01:52] [DEBUG  ] [conf] Line [20] in Section [global] has Key/Value pair [grafana_api_key]=[eyJrIjoiSG9GRG5MMTBlU1h4SzA5Ym1sZ09tWklPYlk0Q1ZCV0giLCJuIjoiTmV0QXBwLUhhcnZlc3QiLCJpZCI6MX0=]
[2016-10-11 15:01:52] [DEBUG  ] [conf] Line [21] in Section [global] has Key/Value pair [grafana_url]=[https://10.9.221.8:443]
[2016-10-11 15:01:52] [DEBUG  ] [conf] Line [22] in Section [global] has Key/Value pair [grafana_dl_tag]=[]
[2016-10-11 15:01:52] [DEBUG  ] [conf] Line [28] is Section [default]
[2016-10-11 15:01:52] [DEBUG  ] [conf] Line [30] in Section [default] has Key/Value pair [graphite_enabled]=[1]
[2016-10-11 15:01:52] [DEBUG  ] [conf] Line [31] in Section [default] has Key/Value pair [graphite_server]=[10.9.221.8]
[2016-10-11 15:01:52] [DEBUG  ] [conf] Line [32] in Section [default] has Key/Value pair [graphite_port]=[2003]
[2016-10-11 15:01:52] [DEBUG  ] [conf] Line [33] in Section [default] has Key/Value pair [graphite_proto]=[tcp]
[2016-10-11 15:01:52] [DEBUG  ] [conf] Line [34] in Section [default] has Key/Value pair [normalized_xfer]=[mb_per_sec]
[2016-10-11 15:01:52] [DEBUG  ] [conf] Line [35] in Section [default] has Key/Value pair [normalized_time]=[millisec]
[2016-10-11 15:01:52] [DEBUG  ] [conf] Line [36] in Section [default] has Key/Value pair [graphite_root]=[default]
[2016-10-11 15:01:52] [DEBUG  ] [conf] Line [37] in Section [default] has Key/Value pair [graphite_meta_metrics_root]=[default]
[2016-10-11 15:01:52] [DEBUG  ] [conf] Line [40] in Section [default] has Key/Value pair [host_type]=[FILER]
[2016-10-11 15:01:52] [DEBUG  ] [conf] Line [41] in Section [default] has Key/Value pair [host_port]=[443]
[2016-10-11 15:01:52] [DEBUG  ] [conf] Line [42] in Section [default] has Key/Value pair [host_enabled]=[1]
[2016-10-11 15:01:52] [DEBUG  ] [conf] Line [43] in Section [default] has Key/Value pair [template]=[default]
[2016-10-11 15:01:52] [DEBUG  ] [conf] Line [44] in Section [default] has Key/Value pair [data_update_freq]=[60]
[2016-10-11 15:01:52] [DEBUG  ] [conf] Line [45] in Section [default] has Key/Value pair [ntap_autosupport]=[0]
[2016-10-11 15:01:52] [DEBUG  ] [conf] Line [46] in Section [default] has Key/Value pair [latency_io_reqd]=[10]
[2016-10-11 15:01:52] [DEBUG  ] [conf] Line [47] in Section [default] has Key/Value pair [auth_type]=[password]
[2016-10-11 15:01:52] [DEBUG  ] [conf] Line [48] in Section [default] has Key/Value pair [username]=[netapp-harvest]
[2016-10-11 15:01:52] [DEBUG  ] [conf] Line [49] in Section [default] has Key/Value pair [password]=[**********]
[2016-10-11 15:01:52] [DEBUG  ] [conf] Line [50] in Section [default] has Key/Value pair [ssl_cert]=[INSERT_PEM_FILE_NAME_HERE]
[2016-10-11 15:01:52] [DEBUG  ] [conf] Line [51] in Section [default] has Key/Value pair [ssl_key]=[INSERT_KEY_FILE_NAME_HERE]
[2016-10-11 15:01:52] [DEBUG  ] [conf] Line [60] is Section [s1wclust01]
[2016-10-11 15:01:52] [DEBUG  ] [conf] Line [61] in Section [s1wclust01] has Key/Value pair [hostname]=[10.9.220.64]
[2016-10-11 15:01:52] [DEBUG  ] [conf] Line [62] in Section [s1wclust01] has Key/Value pair [site]=[Denver]
[2016-10-11 15:01:52] [DEBUG  ] [conf] Line [64] is Section [S1W8040CTL01]
[2016-10-11 15:01:52] [DEBUG  ] [conf] Line [65] in Section [S1W8040CTL01] has Key/Value pair [hostname]=[10.9.219.63]
[2016-10-11 15:01:52] [DEBUG  ] [conf] Line [66] in Section [S1W8040CTL01] has Key/Value pair [site]=[Denver]
[2016-10-11 15:01:52] [DEBUG  ] [conf] Line [69] is Section [S1W8040CTL02]
[2016-10-11 15:01:52] [DEBUG  ] [conf] Line [70] in Section [S1W8040CTL02] has Key/Value pair [hostname]=[10.9.219.65]
[2016-10-11 15:01:52] [DEBUG  ] [conf] Line [71] in Section [S1W8040CTL02] has Key/Value pair [site]=[Denver]
[2016-10-11 15:01:52] [DEBUG  ] [conf] Line [75] is Section [S1WPVJSAN02]
[2016-10-11 15:01:52] [DEBUG  ] [conf] Line [76] in Section [S1WPVJSAN02] has Key/Value pair [hostname]=[10.9.239.129]
[2016-10-11 15:01:52] [DEBUG  ] [conf] Line [77] in Section [S1WPVJSAN02] has Key/Value pair [site]=[Denver]
[2016-10-11 15:01:52] [DEBUG  ] [conf] Line [78] in Section [S1WPVJSAN02] has Key/Value pair [host_type]=[OCUM]
[2016-10-11 15:01:52] [DEBUG  ] [conf] Line [79] in Section [S1WPVJSAN02] has Key/Value pair [data_update_freq]=[900]
[2016-10-11 15:01:52] [DEBUG  ] [conf] Line [80] in Section [S1WPVJSAN02] has Key/Value pair [normalized_xfer]=[gb_per_sec]
[2016-10-11 15:01:52] [NORMAL ] [main] Poller will monitor a [OCUM] at [10.9.239.129:443]
[2016-10-11 15:01:52] [NORMAL ] [main] Poller will use [password] authentication with username [netapp-harvest] and password [**********]
[2016-10-11 15:01:53] [DEBUG  ] [connect] Using HTTP/1.0 for communication (either set earlier or only version supported by SDK).
[2016-10-11 15:01:53] [DEBUG  ] [sysinfo] Updating system-info cache
[2016-10-11 15:01:53] [DEBUG  ] [sysinfo] Discovered [s1wclust01] on OCUM server and found conf section with site [Denver]
[2016-10-11 15:01:53] [NORMAL ] [main] Collection of system info from [10.9.239.129] running [6.4P2] successful.
[2016-10-11 15:01:53] [DEBUG  ] [main] Found possible default template file for product [ocum]: ocum-6.3.0.conf (6, 3, 0)
[2016-10-11 15:01:53] [DEBUG  ] [main] Found possible default template file for product [ocum]: ocum-6.4.0.conf (6, 4, 0)
[2016-10-11 15:01:53] [DEBUG  ] [main] Found possible default template file for product [ocum]: ocum-6.1.0.conf (6, 1, 0)
[2016-10-11 15:01:53] [DEBUG  ] [main] Found possible default template file for product [ocum]: ocum-6.2.0.conf (6, 2, 0)
[2016-10-11 15:01:53] [NORMAL ] [main] Using best-fit collection template: [ocum-6.4.0.conf]
[2016-10-11 15:01:53] [NORMAL ] [main] Calculated graphite_root [netapp.capacity.Denver.s1wclust01] for host [s1wclust01]
[2016-10-11 15:01:53] [NORMAL ] [main] Using graphite_meta_metrics_root [netapp.poller.capacity.Denver.S1WPVJSAN02]
[2016-10-11 15:01:53] [DEBUG  ] [volume] data-list poller first poll at [2016-10-11 15:15:00]
[2016-10-11 15:01:53] [DEBUG  ] [qtree] data-list poller first poll at [2016-10-11 15:15:00]
[2016-10-11 15:01:53] [DEBUG  ] [lun] data-list poller first poll at [2016-10-11 15:15:00]
[2016-10-11 15:01:53] [DEBUG  ] [aggregate] data-list poller first poll at [2016-10-11 15:15:00]
[2016-10-11 15:01:53] [NORMAL ] [main] Startup complete.  Polling for new data every [900] seconds.
[2016-10-11 15:01:53] [DEBUG  ] Sleeping [787] seconds

madden
6,781 Views

Hi @rcasero

 

From your logs:

 

[2016-10-11 15:01:53] [NORMAL ] [main] Using best-fit collection template: [ocum-6.4.0.conf]
[2016-10-11 15:01:53] [NORMAL ] [main] Calculated graphite_root [netapp.capacity.Denver.s1wclust01] for host [s1wclust01]
[2016-10-11 15:01:53] [NORMAL ] [main] Using graphite_meta_metrics_root [netapp.poller.capacity.Denver.S1WPVJSAN02]
[2016-10-11 15:01:53] [DEBUG  ] [volume] data-list poller first poll at [2016-10-11 15:15:00]
[2016-10-11 15:01:53] [DEBUG  ] [qtree] data-list poller first poll at [2016-10-11 15:15:00]
[2016-10-11 15:01:53] [DEBUG  ] [lun] data-list poller first poll at [2016-10-11 15:15:00]
[2016-10-11 15:01:53] [DEBUG  ] [aggregate] data-list poller first poll at [2016-10-11 15:15:00]
[2016-10-11 15:01:53] [NORMAL ] [main] Startup complete.  Polling for new data every [900] seconds.
[2016-10-11 15:01:53] [DEBUG  ] Sleeping [787] seconds

So it was able to find the 6.4 collection template, and it calculated a metrics root for s1wclust01.  The first metrics would be submitted at 15:15, and I bet it worked now 🙂
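
If you want to double-check after 15:15, look for freshly written capacity whisper files under your whisper root (yours appears to be /var/lib/graphite/whisper based on your prompt):

find /var/lib/graphite/whisper/netapp/capacity -name '*.wsp' -mmin -20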

 

Cheers,
Chris Madden

Solution Architect - 3rd Platform - Systems Engineering NetApp EMEA (and author of Harvest)

Blog: It all begins with data

 

If this post resolved your issue, please help others by selecting ACCEPT AS SOLUTION or adding a KUDO or both!

 

 

 

rcasero
6,685 Views

Good morning Chris. I want to thank you for your support on this. I was just checking my capacity, and on some screens I am now pulling data under the Volume Dashboard from Harvest.

 

 

I will be poking around...

 

Thanks to everyone.

 
