Active IQ Unified Manager Discussions
It looks like everything is set up correctly, but I'm not seeing any data in Graphite (and thus nothing in Grafana).
Some of the log files:
[2015-12-28 15:10:14] [NORMAL ] WORKER STARTED [Version: 1.2.2] [Conf: netapp-harvest.conf] [Poller: IP ADDRESS OF OCUM SERVER]
[2015-12-28 15:10:14] [NORMAL ] [main] Poller will monitor a [OCUM] at [IP ADDRESS OF OCUM SERVER:443]
[2015-12-28 15:10:14] [NORMAL ] [main] Poller will use [password] authentication with username [netapp-harvest] and password [**********]
[2015-12-28 15:10:14] [WARNING] [connect] Setting HTTP/1.0 because reverse hostname resolution (IP -> hostname) fails. To enable HTTP/1.1 ensure reverse hostname resolution succeeds.
[2015-12-28 15:10:15] [NORMAL ] [main] Collection of system info from [IP ADDRESS OF OCUM SERVER] running [6.1R1] successful.
[2015-12-28 15:10:15] [NORMAL ] [main] Using best-fit collection template: [ocum-6.1.0.conf]
[2015-12-28 15:10:15] [NORMAL ] [main] Calculated graphite_root [netapp.capacity.CLUSTERNAME.CLUSTERNAME] for host [CLUSTERNAME]
[2015-12-28 15:10:15] [NORMAL ] [main] Using graphite_meta_metrics_root [netapp.poller.capacity.SGAU.IP ADDRESS OF OCUM SERVER]
[2015-12-28 15:10:15] [NORMAL ] [main] Startup complete. Polling for new data every [900] seconds.
[2015-12-28 15:10:15] [NORMAL ] WORKER STARTED [Version: 1.2.2] [Conf: netapp-harvest.conf] [Poller: CLUSTER NAME]
[2015-12-28 15:10:15] [NORMAL ] [main] Poller will monitor a [FILER] at [CLUSTER IP ADDRESS:443]
[2015-12-28 15:10:15] [NORMAL ] [main] Poller will use [password] authentication with username [netapp-harvest] and password [**********]
[2015-12-28 15:10:15] [NORMAL ] [main] Collection of system info from [CLUSTER IP ADDRESS] running [NetApp Release 8.2.3P6 Cluster-Mode] successful.
[2015-12-28 15:10:15] [NORMAL ] [main] Using best-fit collection template: [cdot-8.2.0.conf]
[2015-12-28 15:10:15] [NORMAL ] [main] Using graphite_root [netapp.perf.CLUSTER NAME.CLUSTER NAME]
[2015-12-28 15:10:15] [NORMAL ] [main] Using graphite_meta_metrics_root [netapp.poller.perf.CLUSTER NAME.CLUSTER NAME]
[2015-12-28 15:10:15] [NORMAL ] [smb2:vserver] Collection of object not enabled; skipping
[2015-12-28 15:10:15] [NORMAL ] [smb2:node] Collection of object not enabled; skipping
[2015-12-28 15:10:15] [NORMAL ] [main] Startup complete. Polling for new data every [60] seconds.
storage-schemas.conf:
# Schema definitions for Whisper files. Entries are scanned in order,
# and first match wins. This file is scanned for changes every 60 seconds.
#
# [name]
# pattern = regex
# retentions = timePerPoint:timeToStore, timePerPoint:timeToStore, ...
# Carbon's internal metrics. This entry should match what is specified in
# CARBON_METRIC_PREFIX and CARBON_METRIC_INTERVAL settings
#[carbon]
#pattern = ^carbon\.
#retentions = 60:90d
[netapp.capacity]
pattern = ^netapp\.capacity\.*
retentions = 15m:100d, 1d:5y
[netapp.poller.capacity]
pattern = ^netapp\.poller\.capacity\.*
retentions = 15m:100d, 1d:5y
[netapp.perf]
pattern = ^netapp\.perf\.*
retentions = 60s:35d, 5m:100d, 15m:395d, 1h:5y
[netapp.poller.perf]
pattern = ^netapp\.poller\.perf\.*
retentions = 60s:35d, 5m:100d, 15m:395d, 1h:5y
[netapp.perf7]
pattern = ^netapp\.perf7\.*
retentions = 60s:35d, 5m:100d, 15m:395d, 1h:5y
[netapp.poller.perf7]
pattern = ^netapp\.poller\.perf7\.*
retentions = 60s:35d, 5m:100d, 15m:395d, 1h:5y
#[default_1min_for_1day]
#pattern = .*
#retentions = 60s:1d
Thanks for any help with this.
Hi @fondue1
Indeed, your Harvest logs look OK (although you may want to set up reverse IP resolution for the OCUM server). In normal operation, after startup only a status message is logged every 4 hours; otherwise the log should be quiet. It can take 1 polling period for OCUM, and 2 polling periods for perf, before metrics are sent to Graphite. If Harvest is configured to send to Graphite using TCP (the default), any failures will be logged, so if you see none then something, presumably Graphite carbon, is listening on the other end and accepting data.
My guess is something on your Graphite server is to blame like a full disk or permissions on your data directory.
Did you check the Graphite carbon logs? Specifically, the creates.log should be useful.
Ubuntu default package: /var/log/carbon
RHEL (or any installed from github using defaults): /opt/graphite/storage/log/carbon-cache/carbon-cache-a
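As another quick sanity check, you could push a single test metric to carbon's plaintext listener yourself and see whether it shows up in graphite-web. A minimal sketch, assuming the default plaintext port 2003 and a placeholder hostname:

import socket, time

GRAPHITE_HOST = "graphite.example.com"  # placeholder: your Graphite server
GRAPHITE_PORT = 2003                    # carbon plaintext listener (default)

# Carbon's plaintext protocol is one line per metric:
#   "<metric.path> <value> <unix_timestamp>\n"
line = "netapp.test.harvest_check 1 {}\n".format(int(time.time()))

sock = socket.create_connection((GRAPHITE_HOST, GRAPHITE_PORT), timeout=5)
sock.sendall(line.encode("ascii"))
sock.close()

# If netapp.test.harvest_check appears in graphite-web, carbon is reachable
# and writing whisper files; if not, check disk space and the permissions
# on the whisper data directory.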
Please let us know if you solve it and what was at fault.
Cheers,
Chris Madden
Storage Architect, NetApp EMEA (and author of Harvest)
Blog: It all begins with data
P.S. Please select “Options” and then “Accept as Solution” if this response answered your question so that others will find it easily!
I figured it out. During configuration I mistook one of the conf files' port numbers to be asking for the graphite-web port (81) instead of the default 2003 (or 2001, I can't remember). I literally just fixed this. Thanks for your help though!
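For anyone else who hits this: the Graphite destination Harvest sends to is set in netapp-harvest.conf, and the port there should be carbon's plaintext listener (2003 by default), not the graphite-web port. A sketch with placeholder values (exact option names may differ by Harvest version):

netapp-harvest.conf (excerpt):
[global]
graphite_server = IP_OF_GRAPHITE_SERVER   # placeholder
graphite_port   = 2003                     # carbon plaintext listener, not graphite-web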