Hi All,
I'm having a strange recurring issue where absurdly high metric values are being written into my Harvest/Graphite instance. I don't see any unusual messages in the logs for the timeframes when it occurs.
Has anyone seen anything similar? I have two clusters and it is happening on both. I have two screencaps below: the first is a 12-hour view, and the second is a 60-day view. The outliers almost appear to get higher and higher over time, but I'm not sure whether that's real or just some sort of rollup artifact in Graphite.
I would love to be able to use this data (really, the only reason I created the QoS policies was for this purpose), but it's almost impossible to parse with these outliers.
![12hr svm qos policy group](https://community.netapp.com/t5/image/serverpage/image-id/7756i987A4EE8AD99DEBC/image-size/original?v=1.0&px=-1)
![60 day history](https://community.netapp.com/t5/image/serverpage/image-id/7757i38A829C9D493863E/image-size/original?v=1.0&px=-1)
edit: it looks like I can somewhat work around this issue by using the `removeAboveValue(100000)` or `removeAbovePercentile(99.6)` functions. However, that doesn't change the fact that this erroneous data is getting written into Graphite in the first place.
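For reference, the workaround targets look something like this (the metric path below is hypothetical; substitute whatever series your Harvest config actually writes):

```
# Null out any datapoint above a fixed ceiling
target=removeAboveValue(netapp.perf.cluster1.svm.*.qos_ops, 100000)

# Or null out everything above the 99.6th percentile instead
target=removeAbovePercentile(netapp.perf.cluster1.svm.*.qos_ops, 99.6)
```

Both functions just replace the offending datapoints with nulls at render time, so the bad values are still stored on disk; they only hide the outliers from the graph.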
![workaround](https://community.netapp.com/t5/image/serverpage/image-id/7758i07F0648083C616D6/image-size/original?v=1.0&px=-1)