Hello Srikanth,
We are facing a similar problem. We have made the required changes in the netapp-harvest.conf file:
[OCUM NAME]
hostname = IP address of the OCUM
site = Site Location
host_type = OCUM
data_update_freq = 900
normalized_xfer = gb_per_sec
template = ocum-opm-hierarchy.conf
graphite_root = netapp-capacity.Clusters.{display_name}
graphite_meta_metrics_root = netapp-capacity-poller.{group}
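For anyone checking what metric prefix the settings above should produce: as I understand it, Harvest substitutes the {display_name} and {group} placeholders with the poller's discovered identity. A minimal sketch of that expansion (the function name and the "my-cluster" value are my own illustration, not Harvest's internals):

```python
def expand(template: str, values: dict) -> str:
    """Replace {placeholder} tokens in a graphite_root-style template."""
    for key, val in values.items():
        template = template.replace("{" + key + "}", val)
    return template

# Assuming the cluster's display_name resolves to "my-cluster":
root = expand("netapp-capacity.Clusters.{display_name}",
              {"display_name": "my-cluster"})
print(root)  # netapp-capacity.Clusters.my-cluster
```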
Additionally, the cluster names configured in Harvest match the cluster names and cluster identities in OCUM. Do we need to make any additional changes in order to gather capacity metrics?
I am running OCUM 7.2 in my environment and am looking for aggregate growth/utilization for all the nodes. Kindly let me know what additional changes need to be made in order to collect the capacity metrics.
Logs (from /opt/netapp-harvest-1.3/log):
[2017-09-25 16:45:24] [NORMAL ] WORKER STARTED [Version: 1.3] [Conf: netapp-harvest.conf] [Poller: xxxxxxxx-cls-mgt]
[2017-09-25 16:45:24] [NORMAL ] [main] Poller will monitor a [FILER] at [xx.xx.xx.xx:443]
[2017-09-25 16:45:24] [NORMAL ] [main] Poller will use [password] authentication with username [netapp-harvest] and password [**********]
[2017-09-25 16:45:25] [NORMAL ] [main] Collection of system info from [xx.xx.xx.xx] running [NetApp Release 9.0RC1D4] successful.
[2017-09-25 16:45:25] [NORMAL ] [main] Found best-fit monitoring template (same generation and major release, minor same or less): [cdot-9.0.0.conf]
[2017-09-25 16:45:25] [NORMAL ] [main] Added and/or merged monitoring template [/opt/netapp-harvest-1.3/template/default/cdot-9.0.0.conf]
[2017-09-25 16:45:25] [NORMAL ] [main] Metrics will be submitted with graphite_root [netapp.perf.noida.noiclapa01-cls-mgt]
[2017-09-25 16:45:25] [NORMAL ] [main] Using graphite_meta_metrics_root [netapp.poller.perf.noida.xxxxxxx-cls-mgt
[2017-09-25 16:45:25] [NORMAL ] [main] Startup complete. Polling for new data every [60] seconds.
[2017-09-25 18:27:05] [WARNING] [workload_detail_volume] data-list poller refresh overdue; skipped [1] poll(s) from [2017-09-25 18:28:00] to [2017-09-25 18:28:0
Thanks in advance,
Pranjal