Active IQ Unified Manager Discussions

NetApp Harvest Grafana showing only 7 days of metrics

krishgudd
5,995 Views

Hi,

 

Grafana is showing only 7 days of metrics even after configuring the retention to be more than 7 days. Not sure what is wrong with the configuration.

 

[netapp.capacity]
pattern = ^netapp\.capacity\.*
retentions = 15m:60d, 1d:1y
[netapp.poller.capacity]
pattern = ^netapp\.poller\.capacity\.*
retentions = 15m:60d, 1d:1y
[netapp.perf]
pattern = ^netapp\.perf\.*
retentions = 60s:30d, 5m:30d, 15m:60d, 1h:1y
[netapp.poller.perf]
pattern = ^netapp\.poller\.perf\.*
retentions = 60s:30d, 5m:30d, 15m:60d, 1h:1y
[netapp.perf7]
pattern = ^netapp\.perf7\.*
retentions = 60s:30d, 5m:30d, 15m:60d, 1h:1y
[netapp.poller.perf7]
pattern = ^netapp\.poller\.perf7\.*
retentions = 60s:30d, 5m:30d, 15m:60d, 1h:1y

[carbon]
pattern = ^carbon\.
retentions = 60:90d

#[default_1min_for_1day]
#pattern = .*
#retentions = 60s:1d

 

Thanks in Advance,

Krish

8 REPLIES

madden
5,930 Views

Hi @krishgudd,

 

The frequency and retention settings of a database file are set the first time a metric arrives.  The file you shared is correct for a much longer retention, but is it possible that you modified this file after Harvest was already sending metrics? The answer in this post explains in more detail how to change the retention if that is the case.

 

You can also check the retention of existing database files using the utility "whisper-info.py <yourmetricname.wsp>" (or "whisper-info" on some distributions).  The output of this command shows the frequency and the number of datapoints that are retained, which will tell you whether only 7 days are kept.  The other place you can check is the carbon create.log files (Ubuntu: /var/log/carbon; RHEL: /opt/graphite/storage/log/carbon-cache/carbon-cache-a) to find the retention being used for newly created metrics.
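As a hedged sketch of that second check (log paths and file names vary by distribution, so adjust for your install), you could tail the creates log to see which retention new metrics actually received:

```shell
# Hedged sketch: inspect carbon's creates log to see the archive layout
# applied to newly created whisper files. The RHEL path below is the one
# mentioned above; on Ubuntu look under /var/log/carbon instead.
LOG=/opt/graphite/storage/log/carbon-cache/carbon-cache-a/creates.log
if [ -f "$LOG" ]; then
  tail -n 20 "$LOG"
else
  echo "creates.log not found at $LOG; check /var/log/carbon on Ubuntu"
fi
```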

 

Hope this helps!

 

Cheers,
Chris Madden

Solution Architect - 3rd Platform - Systems Engineering NetApp EMEA (and author of Harvest)

Blog: It all begins with data

 

If this post resolved your issue, please help others by selecting ACCEPT AS SOLUTION or adding a KUDO or both!

 

krishgudd
5,868 Views

Hi Chris,

 

Thanks for your response. I have followed your instructions and need to wait a few days to confirm whether the applied settings are correct.

 

Apart from that, I didn't understand what format the retention is in. Is it hours, minutes, or days?

 

What does maxRetention=604800 mean?

 

 carbon-cache-a]# /usr/bin/whisper-info.py /opt/graphite/storage/whisper/netapp/perf/COS/csntap03k/svm/cosnas1/vol/v32/write_latency.wsp

maxRetention: 604800
xFilesFactor: 0.5
aggregationMethod: average
fileSize: 120988

Archive 0
retention: 604800
secondsPerPoint: 60
points: 10080
size: 120960
offset: 28

 

Thanks in Advance,

Krishgudd

madden
5,839 Views

Hi @krishgudd

 

Using the info you shared:

 

 

carbon-cache-a]# /usr/bin/whisper-info.py /opt/graphite/storage/whisper/netapp/perf/COS/csntap03k/svm/cosnas1/vol/v32/write_latency.wsp
maxRetention: 604800
xFilesFactor: 0.5
aggregationMethod: average
fileSize: 120988
Archive 0
retention: 604800
secondsPerPoint: 60
points: 10080
size: 120960
offset: 28

 

Archive 0 has a retention of 604800 seconds, which is 7 days.  Further, data points are expected every 60 seconds and 10080 of them are stored in total before they wrap around.
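To make the units concrete: everything whisper-info.py reports is in seconds, so a quick conversion shows why that archive holds exactly 7 days of 1-minute data:

```python
# Values taken from the whisper-info.py output above (all in seconds).
max_retention = 604800     # maxRetention
seconds_per_point = 60     # secondsPerPoint
points = 10080             # points

# 604800 seconds / 86400 seconds-per-day = 7 days of retention.
print(max_retention / 86400)           # 7.0

# The point count exactly fills the archive: 10080 points * 60s = 604800s.
print(points * seconds_per_point)      # 604800
```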

 

Your storage-schemas.conf file looks OK now, so it must have been different when you first set up Harvest.  You should check your create.log file (see my earlier response) to see the retention of newly created metrics.

 

To extend the retention of files that currently have only 7 days of retention, see the link in my earlier response, which explains how to use the whisper-resize command.
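As a hedged sketch of that resize (the script may be named whisper-resize.py or whisper-resize depending on distribution, and the retentions shown are just an example; stop carbon-cache first so no writes race the rewrite):

```shell
# Hedged sketch: extend an existing whisper file's archives in place.
# The path is the example file from earlier in this thread; substitute
# your own .wsp paths, and run this for each file needing a resize.
WSP=/opt/graphite/storage/whisper/netapp/perf/COS/csntap03k/svm/cosnas1/vol/v32/write_latency.wsp
if command -v whisper-resize.py >/dev/null 2>&1; then
  # New retentions must pass Graphite's validation (strictly longer spans
  # at strictly coarser precision for each successive archive).
  whisper-resize.py "$WSP" 1m:35d 15m:60d 1h:1y
else
  echo "whisper-resize.py not found in PATH; install the whisper tools first"
fi
```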

 

Cheers,
Chris Madden

Solution Architect - 3rd Platform - Systems Engineering NetApp EMEA (and author of Harvest)

Blog: It all begins with data

 

If this post resolved your issue, please help others by selecting ACCEPT AS SOLUTION or adding a KUDO or both!

krishgudd
5,775 Views

Hi Chris,

 

Even after following the mentioned steps, I can still see only 7 days of metrics.

 

Followed STEP A from your article.

 

1) Stopped all services.

2) Deleted the directories' content.

3) Started the services.

 

Current Output:

 

 fcp]# /usr/bin/whisper-info.py avg_latency.wsp
maxRetention: 604800
xFilesFactor: 0.5
aggregationMethod: average
fileSize: 120988

Archive 0
retention: 604800
secondsPerPoint: 60
points: 10080
size: 120960
offset: 28

 

Thanks in Advance,

Krishgudd

 

madden
5,766 Views

Hi @krishgudd

 

Nope, you still have 7 days of retention.  Can you paste the output of these commands here:

 

cat /opt/graphite/conf/storage-schemas.conf
cat /etc/carbon/storage-schemas.conf  

 

I'm thinking either there is an entry at the beginning of your file that is setting them to 7 days, or maybe you have multiple files and updated one that is not actually referenced by carbon-cache.

 

Cheers,
Chris Madden

Solution Architect - 3rd Platform - Systems Engineering NetApp EMEA (and author of Harvest)

Blog: It all begins with data

 

If this post resolved your issue, please help others by selecting ACCEPT AS SOLUTION or adding a KUDO or both!

krishgudd
5,751 Views

Hi Chris,

 

# cat /opt/graphite/conf/storage-schemas.conf
# Schema definitions for Whisper files. Entries are scanned in order,
# and first match wins. This file is scanned for changes every 60 seconds.
#
#  [name]
#  pattern = regex
#  retentions = timePerPoint:timeToStore, timePerPoint:timeToStore, ...

# Carbon's internal metrics. This entry should match what is specified in
# CARBON_METRIC_PREFIX and CARBON_METRIC_INTERVAL settings

[netapp.capacity]
pattern = ^netapp\.capacity\.*
retentions = 15m:60d, 1d:1y
[netapp.poller.capacity]
pattern = ^netapp\.poller\.capacity\.*
retentions = 15m:60d, 1d:1y
[netapp.perf]
pattern = ^netapp\.perf\.*
retentions = 60s:30d, 5m:30d, 15m:60d, 1h:1y
[netapp.poller.perf]
pattern = ^netapp\.poller\.perf\.*
retentions = 60s:30d, 5m:30d, 15m:60d, 1h:1y
[netapp.perf7]
pattern = ^netapp\.perf7\.*
retentions = 60s:30d, 5m:30d, 15m:60d, 1h:1y
[netapp.poller.perf7]
pattern = ^netapp\.poller\.perf7\.*
retentions = 60s:30d, 5m:30d, 15m:60d, 1h:1y

[carbon]
pattern = ^carbon\.
retentions = 60:90d

#[default_1min_for_1day]
#pattern = .*
#retentions = 60s:1d


cat /etc/carbon/storage-schemas.conf
cat: /etc/carbon/storage-schemas.conf: No such file or directory

madden
5,736 Views

Hi @krishgudd

 

I replaced my storage-schemas.conf with yours and also saw 7-day retention for newly created files.  Checking your rules in more detail, I see you have two archives with the same configured retention (60s:30d and 5m:30d both keep 30 days), which is not supported by Graphite and is likely to blame:

 

 

[netapp.perf]
pattern = ^netapp\.perf\.*
retentions = 60s:30d, 5m:30d, 15m:60d, 1h:1y

 

Here is a file that has similar retentions (to give smaller filesizes) but has supported retentions:

 

 

[netapp_perf] 
pattern = ^netapp(\.poller)?\.perf7?\. 
retentions = 1m:35d,15m:60d,1h:1y 
 
[netapp_capacity] 
pattern = ^netapp(\.poller)?\.capacity\. 
retentions = 15m:60d,1d:5y 

[carbon]
pattern = ^carbon\.
retentions = 1m:90d

[default_1min_for_1day]
pattern = .*
retentions = 1m:1d
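To illustrate the rule involved (a simplified sketch, not Graphite's actual validation code, which performs additional checks such as precision divisibility), the original retention string fails because two archives cover the same 30-day span:

```python
# Simplified check of Graphite's archive-list rule: each successive archive
# must have strictly coarser precision AND strictly longer retention.
UNITS = {"s": 1, "m": 60, "h": 3600, "d": 86400, "y": 31536000}

def parse(spec):
    """Parse e.g. '60s:30d' into (seconds_per_point, retention_seconds)."""
    def to_seconds(tok):
        if tok[-1].isdigit():
            return int(tok)                      # bare number means seconds
        return int(tok[:-1]) * UNITS[tok[-1]]
    step, ret = (to_seconds(p) for p in spec.split(":"))
    return step, ret

def validate(retentions):
    archives = [parse(s.strip()) for s in retentions.split(",")]
    for (s1, r1), (s2, r2) in zip(archives, archives[1:]):
        if s2 <= s1:
            return False   # precision must strictly decrease
        if r2 <= r1:
            return False   # retention must strictly increase
    return True

print(validate("60s:30d, 5m:30d, 15m:60d, 1h:1y"))  # False: two 30d archives
print(validate("1m:35d, 15m:60d, 1h:1y"))           # True: corrected rules
```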

 

Cheers,
Chris Madden

Solution Architect - 3rd Platform - Systems Engineering NetApp EMEA (and author of Harvest)

Blog: It all begins with data

 

If this post resolved your issue, please help others by selecting ACCEPT AS SOLUTION or adding a KUDO or both!

 

krishgudd
5,725 Views

Now it seems to be perfect.

 

cifs]# /usr/bin/whisper-info.py cifs_write_ops.wsp
maxRetention: 31536000
xFilesFactor: 0.5
aggregationMethod: average
fileSize: 779092

Archive 0
retention: 3024000
secondsPerPoint: 60
points: 50400
size: 604800
offset: 52

Archive 1
retention: 5184000
secondsPerPoint: 900
points: 5760
size: 69120
offset: 604852

Archive 2
retention: 31536000
secondsPerPoint: 3600
points: 8760
size: 105120
offset: 673972
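The three archives above can be cross-checked against the corrected schema: for each one, points multiplied by secondsPerPoint equals the retention in seconds, matching 1m:35d, 15m:60d, and 1h:1y:

```python
# Verify the whisper-info.py output above: points * secondsPerPoint
# should equal retention (in seconds) for every archive.
archives = [
    (60,   50400, 3024000),    # Archive 0: 1 minute for 35 days
    (900,  5760,  5184000),    # Archive 1: 15 minutes for 60 days
    (3600, 8760,  31536000),   # Archive 2: 1 hour for 1 year
]
for step, points, retention in archives:
    assert step * points == retention
    print(f"{step}s x {points} points = {retention}s = {retention / 86400:.0f} days")
```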

 

 

So carbon/Graphite does not allow dynamic resizing of existing files apart from your PLAN B procedure.
