Active IQ Unified Manager Discussions

Graphite Only Shows 35 Days of Data

Michael_Orr
2,754 Views

Grafana only shows 35 days of data, and I have replicated this in the native Graphite GUI. My first thought was that this was a problem in the storage-schemas.conf file, but I have visually verified that it is correct, and I have run whisper-info.py against the files to verify that they are correct.


storage-schemas.conf:

[netapp.capacity]
pattern = ^netapp\.capacity\.*
retentions = 15m:100d, 1d:5y

[netapp.poller.capacity]
pattern = ^netapp\.poller\.capacity\.*
retentions = 15m:100d, 1d:5y

[netapp.perf]
pattern = ^netapp\.perf\.*
retentions = 60s:35d, 5m:100d, 15m:395d, 1h:5y

[netapp.poller.perf]
pattern = ^netapp\.poller\.perf\.*
retentions = 60s:35d, 5m:100d, 15m:395d, 1h:5y

[netapp.perf7]
pattern = ^netapp\.perf7\.*
retentions = 60s:35d, 5m:100d, 15m:395d, 1h:5y

[netapp.poller.perf7]
pattern = ^netapp\.poller\.perf7\.*
retentions = 60s:35d, 5m:100d, 15m:395d, 1h:5y
#
# This MUST be last.
#
[default_1min_for_7days]
pattern = .*
retentions = 60s:7d
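
For reference, here is a minimal sketch, assuming the graphite "whisper" Python package is importable on the Graphite host, that converts the netapp.perf retention strings into (secondsPerPoint, points) pairs so they can be compared directly against the whisper-info.py output below:

# verify_retentions.py -- hypothetical helper, not part of Graphite or Harvest
import whisper

retentions = "60s:35d, 5m:100d, 15m:395d, 1h:5y"

for definition in retentions.split(","):
    seconds_per_point, points = whisper.parseRetentionDef(definition.strip())
    print("secondsPerPoint=%-5d points=%-6d retention=%ds"
          % (seconds_per_point, points, seconds_per_point * points))

# Expected output for the netapp.perf rule:
#   secondsPerPoint=60    points=50400  retention=3024000s   (35 days)
#   secondsPerPoint=300   points=28800  retention=8640000s   (100 days)
#   secondsPerPoint=900   points=37920  retention=34128000s  (395 days)
#   secondsPerPoint=3600  points=43800  retention=157680000s (5 years)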


whisper-info.py:

whisper-info.py /opt/graphite/storage/whisper/netapp/perf/QTS/p-nacl01/svm/bb-prod-vs2/vol/lin_bb_prod_content/total_data.wsp
maxRetention: 157680000
xFilesFactor: 0.5
aggregationMethod: average
fileSize: 1931104

Archive 0
retention: 3024000
secondsPerPoint: 60
points: 50400
size: 604800
offset: 64

Archive 1
retention: 8640000
secondsPerPoint: 300
points: 28800
size: 345600
offset: 604864

Archive 2
retention: 34128000
secondsPerPoint: 900
points: 37920
size: 455040
offset: 950464

Archive 3
retention: 157680000
secondsPerPoint: 3600
points: 43800
size: 525600
offset: 1405504
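
To rule out a display problem, the older archives can also be read straight from the .wsp file, bypassing Grafana and the Graphite web UI. A rough sketch, again assuming the "whisper" Python package is available on the Graphite host (the path is the same file shown above):

import time
import whisper

WSP = ("/opt/graphite/storage/whisper/netapp/perf/QTS/p-nacl01/"
       "svm/bb-prod-vs2/vol/lin_bb_prod_content/total_data.wsp")

now = int(time.time())

# Asking for 60 days of history forces whisper to read from the 5-minute
# archive, since the 60-second archive only covers 35 days.
(start, end, step), values = whisper.fetch(WSP, now - 60 * 86400, now)

non_null = [v for v in values if v is not None]
print("step=%ds points=%d non-null=%d" % (step, len(values), len(non_null)))

# If non-null is zero (or nearly so), the rollup into the longer archives
# is not being written, which points at aggregation rather than the schema.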


I can also see in creates.log that the files appear to have been created correctly:


14/08/2018 12:36:34 :: creating database file /opt/graphite/storage/whisper/netapp/perf/QTS/p-nacl01/svm/quinn-np-vs2/vol/win_fv01213_dssnsql03_snapinfo/other_ops.wsp (archive=[(60, 50400), (300, 28800), (900, 37920), (3600, 43800)] xff=0.5 agg=average)


I am at a loss.

1 ACCEPTED SOLUTION

Michael_Orr
2,601 Views

I found the problem and am replying to my own post for archival search purposes. Because the sheer number of items polled from our Production cluster was causing polling misses, I had changed "data_update_freq" in /opt/netapp-harvest/netapp-harvest.conf to 180 seconds. I assume that because this was not an even multiple of 5 minutes, the longer-term retention archives in the Whisper database files for this cluster were not being generated properly (with an xFilesFactor of 0.5, too few of the 60-second slots in each rollup bucket would have been populated for the aggregate to be written). I let the cluster default back to 1-minute polling intervals and am eating the missed polls. The Grafana graphs now work correctly, though.
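
For the archives, here is a rough, standalone sketch of the arithmetic behind that assumption (plain Python, not Harvest or Graphite code), using the archive steps and xFilesFactor from the whisper-info.py output above:

# Simulate one hour of polling at 180s and count how many of the 60s
# slots in each 5-minute rollup bucket actually receive a data point.
base_step = 60        # secondsPerPoint of archive 0
rollup_step = 300     # secondsPerPoint of archive 1
poll_interval = 180   # the data_update_freq that was configured
xff = 0.5             # xFilesFactor reported by whisper-info.py

slots_per_bucket = rollup_step // base_step   # 5

samples = set(range(0, 3600, poll_interval))
for bucket_start in range(0, 3600, rollup_step):
    filled = sum(1 for t in samples
                 if bucket_start <= t < bucket_start + rollup_step)
    ratio = filled / float(slots_per_bucket)
    print("bucket %4d-%4d: %d/%d slots filled -> %.2f %s xFilesFactor"
          % (bucket_start, bucket_start + rollup_step, filled,
             slots_per_bucket, ratio, "<" if ratio < xff else ">="))

# Every bucket comes out at 1/5 or 2/5 (0.20 or 0.40), below the 0.5
# xFilesFactor, so the 5-minute archive (and the 15-minute and 1-hour
# archives derived from it) is never written -- leaving only the 35 days
# held by the 60-second archive.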


