Active IQ Unified Manager Discussions

netapp-harvest not respecting desired retention policies

Adam-Gross
3,959 Views

My /etc/carbon/storage-schemas.conf file looks like this:

 

----------/

# Schema definitions for Whisper files. Entries are scanned in order,
# and first match wins. This file is scanned for changes every 60 seconds.
#
# [name]
# pattern = regex
# retentions = timePerPoint:timeToStore, timePerPoint:timeToStore, ...

# Carbon's internal metrics. This entry should match what is specified in
# CARBON_METRIC_PREFIX and CARBON_METRIC_INTERVAL settings
[netapp.capacity]
pattern = ^netapp\.capacity\.*
retentions = 15m:100d, 1d:5y

[netapp.poller.capacity]
pattern = ^netapp\.poller\.capacity\.*
retentions = 15m:100d, 1d:5y

[netapp.perf]
pattern = ^netapp\.perf\.*
retentions = 60s:35d, 5m:100d, 15m:395d, 1h:5y

[netapp.poller.perf]
pattern = ^netapp\.poller\.perf\.*
retentions = 60s:35d, 5m:100d, 15m:395d, 1h:5y

[netapp.perf7]
pattern = ^netapp\.perf7\.*
retentions = 60s:35d, 5m:100d, 15m:395d, 1h:5y

[netapp.poller.perf7]
pattern = ^netapp\.poller\.perf7\.*
retentions = 60s:35d, 5m:100d, 15m:395d, 1h:5y

# Default retention policies
#
#[carbon]
#pattern = ^carbon\.
#retentions = 60:90d
#
#[default_1min_for_1day]
#pattern = .*
#retentions = 60s:1d

/----------

 

Despite this, for some reason, the policy in force is [default_1min_for_1day]. I've restarted services and even gone so far as to reboot the server. Data still rolls off every 24 hours. Any help would be greatly appreciated. Thanks in advance!
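For context, carbon picks a schema by scanning the entries in file order and taking the first regex match, so any metric not caught by an earlier pattern falls through to whatever entry matches .*. A minimal sketch of that lookup (entry names and patterns copied from the config above, trimmed to a few entries; the metric names are made up for illustration):

```python
import re

# First-match-wins lookup over storage-schemas.conf entries, in file
# order. Carbon only consults this when it first creates a .wsp file.
SCHEMAS = [
    ("netapp.capacity",        r"^netapp\.capacity\.*",         "15m:100d, 1d:5y"),
    ("netapp.poller.capacity", r"^netapp\.poller\.capacity\.*", "15m:100d, 1d:5y"),
    ("netapp.perf",            r"^netapp\.perf\.*",             "60s:35d, 5m:100d, 15m:395d, 1h:5y"),
    ("default_1min_for_1day",  r".*",                           "60s:1d"),  # catch-all
]

def schema_for(metric):
    for name, pattern, retentions in SCHEMAS:
        if re.search(pattern, metric):
            return name, retentions

print(schema_for("netapp.perf.cluster1.vol1.read_ops")[0])  # netapp.perf
print(schema_for("carbon.agents.host1.cpuUsage")[0])        # default_1min_for_1day
```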

1 ACCEPTED SOLUTION

madden
3,927 Views

Hi @Adam-Gross,

 

Two ideas:

 

1) The storage-schemas.conf file is only consulted the first time a metric is received, when it triggers creation of a .wsp file with the specified schema.  Updating storage-schemas.conf means metrics created in the future use the new settings, but files that already exist keep their original settings.  See this post for some techniques to migrate existing files to a different retention.  You can also run "whisper-info.py filename.wsp", or "whisper-info filename.wsp", to see the retention archives of an existing file.
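To see why existing files keep their old settings: the retention archives live in the .wsp file's own header, not in the config. A self-contained sketch of the standard Whisper on-disk layout (big-endian 16-byte metadata, then 12 bytes per archive header), writing a tiny header and reading its archive list back the way whisper-info reports it:

```python
import struct, tempfile

# Whisper header layout: metadata = (aggregationType, maxRetention,
# xFilesFactor, archiveCount), then one (offset, secondsPerPoint,
# points) triple per archive. Data points follow the headers.
def write_header(path, archives):          # archives: [(step, points), ...]
    max_ret = max(s * p for s, p in archives)
    offset = 16 + 12 * len(archives)       # data starts after all headers
    with open(path, "wb") as f:
        f.write(struct.pack("!2LfL", 1, max_ret, 0.5, len(archives)))
        for step, points in archives:
            f.write(struct.pack("!3L", offset, step, points))
            offset += 12 * points          # 4-byte ts + 8-byte value each

def read_archives(path):                   # roughly what whisper-info shows
    with open(path, "rb") as f:
        _, _, _, count = struct.unpack("!2LfL", f.read(16))
        return [struct.unpack("!3L", f.read(12))[1:] for _ in range(count)]

path = tempfile.mktemp(suffix=".wsp")
write_header(path, [(60, 50400), (300, 28800)])   # 60s:35d, 5m:100d
print(read_archives(path))                        # [(60, 50400), (300, 28800)]
```

Because those triples are baked in at create time, no amount of restarting carbon will change a file that already exists; it has to be resized or recreated.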

 

2) You have carbon installed more than once, and the storage-schemas.conf file in /etc/carbon isn't the one being read.  If you installed from source the default location is /opt/graphite/conf/storage-schemas.conf.  Checking the carbon logfile "creates.log" will also show which storage-schemas entry matched for a new metric create.

 

 

Cheers,
Chris Madden

Storage Architect, NetApp EMEA (and author of Harvest)

Blog: It all begins with data

 

If this post resolved your issue, please help others by selecting ACCEPT AS SOLUTION or adding a KUDO


2 REPLIES 2


Adam-Gross
3,916 Views

Shutting everything down, removing the whisper folders, and firing everything back up recreated files of the expected size. Thank you very much!
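For anyone wanting to sanity-check "the expected size": a Whisper file's size is fixed at create time and can be predicted from the retention spec alone (16-byte metadata, 12 bytes per archive header, 12 bytes per stored point). A rough calculator, using retention strings from the config above:

```python
UNITS = {"s": 1, "m": 60, "h": 3600, "d": 86400, "y": 365 * 86400}

def _seconds(s):                 # "35d" -> 3024000; bare ints are seconds
    return int(s[:-1]) * UNITS[s[-1]] if s[-1] in UNITS else int(s)

def whisper_size(retentions):    # e.g. "60s:35d, 5m:100d"
    total = 16                   # metadata header
    for spec in retentions.split(","):
        per_point, to_store = spec.strip().split(":")
        points = _seconds(to_store) // _seconds(per_point)
        total += 12 + 12 * points   # archive header + (ts, float64) points
    return total

print(whisper_size("60s:1d"))                              # 17308 (~17 KB)
print(whisper_size("60s:35d, 5m:100d, 15m:395d, 1h:5y"))   # 1931104 (~1.9 MB)
```

So with the netapp.perf policy actually in force, each metric file should land around 1.9 MB instead of the ~17 KB produced by the one-day default.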

 

-Adam
