According to our initial plans, the Alpha package of Harvest 2.0 should have been rolled out a long time ago, but we had to delay it several times. The good news: all tests now pass, we have fixed the final glitches, and I'm waiting for my PM's approval to send it out.
Why the delay? We changed the architecture several times, mostly because we want an architecture flexible enough that we won't need to change it again later, but also because some collectors and exporters needed special handling (e.g. Prometheus, which requires a completely different workflow compared to the other DBs). At the same time we want high performance, so you won't need expensive hardware to run Harvest (we have successfully tested Harvest 2.0 on a single-CPU Raspberry Pi with 500 MB of RAM).
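To illustrate the workflow difference: Graphite accepts metrics that the collector pushes to it over a socket, while Prometheus pulls metrics from an HTTP endpoint that the collector must serve. A minimal sketch of the two wire formats (the function names are illustrative, not Harvest's actual API):

```python
def graphite_line(path: str, value: float, ts: int) -> str:
    # Graphite plaintext protocol: the collector pushes one
    # "path value timestamp" line per metric over TCP to Carbon.
    return f"{path} {value} {ts}"

def prometheus_line(name: str, labels: dict, value: float) -> str:
    # Prometheus exposition format: the collector only *serves* this
    # text over HTTP; the Prometheus server pulls it on its own schedule.
    label_str = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
    return f"{name}{{{label_str}}} {value}"
```

So a push-style exporter just opens a socket and writes, while the Prometheus exporter has to run an HTTP server and cache the latest values until they are scraped, which is why the two need different plumbing inside the same program.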
Here are a few more details if you are interested:
Exporters: we support three DBs: Prometheus, InfluxDB and Graphite. The new ones let us do more with Harvest than before, since they accept not only numeric data but also labels. Some screenshots of the new dashboards (all Prometheus-based):
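The label support is the key difference: Graphite only takes a dotted path plus a value, so any metadata has to be baked into the path itself, whereas InfluxDB's line protocol (like Prometheus's exposition format) carries key/value tags alongside the numeric fields. A sketch of the InfluxDB side (the function name is illustrative):

```python
def influx_line(measurement: str, tags: dict, fields: dict, ts_ns: int) -> str:
    # InfluxDB line protocol: "measurement,tag=v field=v timestamp".
    # Tags are indexed key/value labels; fields carry the numeric data.
    tag_str = "".join(f",{k}={v}" for k, v in sorted(tags.items()))
    field_str = ",".join(f"{k}={v}" for k, v in sorted(fields.items()))
    return f"{measurement}{tag_str} {field_str} {ts_ns}"
```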
Backends: there is an option to use different backends for internal data storage. Python arrays are the default, but you can use NumPy instead (this gives significant performance benefits if you are monitoring large clusters, but is not worth the overhead otherwise).
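A sketch of how such a backend switch might look (the factory function is hypothetical, not Harvest's actual code): both backends expose the same indexed, fixed-size float storage, so the rest of the pipeline doesn't need to know which one is behind it.

```python
import array

def make_backend(size: int, use_numpy: bool = False):
    # Hypothetical backend factory: both return values support
    # len(), indexing and item assignment, so callers are agnostic.
    if use_numpy:
        import numpy as np          # only imported when requested
        return np.zeros(size, dtype="float64")
    return array.array("d", [0.0] * size)
```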
This internal data storage (the "Matrix") is what actually makes Harvest highly flexible, since it provides a uniform API to the different components of Harvest. This means you can write a plugin or exporter for Harvest without worrying much about how the data is collected.
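To make the idea concrete, here is a hypothetical, stripped-down version of such a Matrix: instances are rows, counters are columns, and collectors, plugins and exporters all go through the same get/set API (this is a sketch, not Harvest's real class).

```python
class Matrix:
    """Minimal instances-by-counters store with name-based addressing."""

    def __init__(self):
        self.counters = {}   # counter name -> column index
        self.instances = {}  # instance name -> row index
        self.data = []       # one plain Python list per instance

    def add_counter(self, name):
        self.counters[name] = len(self.counters)
        for row in self.data:        # grow existing rows by one column
            row.append(0.0)

    def add_instance(self, name):
        self.instances[name] = len(self.data)
        self.data.append([0.0] * len(self.counters))

    def set(self, instance, counter, value):
        self.data[self.instances[instance]][self.counters[counter]] = value

    def get(self, instance, counter):
        return self.data[self.instances[instance]][self.counters[counter]]
```

A collector fills such a Matrix, a plugin can post-process it, and an exporter only needs to iterate over `instances` and `counters` to emit metrics, which is why none of them need to know where the data came from.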
Here is a small illustration to give you the idea. There is a collector called Psutil (based on the Python library of the same name) that collects metrics about running processes on the local system. I use it to monitor Harvest itself, but it can also collect things like network traffic and open sockets. I had the idea to visualize all the connections on my laptop. All I had to do was write a plugin with 20 lines of code that does IP lookups and geohashing. And voilà, this is what I get:
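A rough sketch of the idea behind that ~20-line plugin (this is not Harvest's actual plugin API). In the real plugin, psutil supplies the open connections and a GeoIP lookup supplies coordinates; here the lookup is faked with a tiny made-up table so the example is self-contained.

```python
# Hypothetical GeoIP table: remote IP -> (lat, lon). Values are made up;
# 203.0.113.7 is a reserved documentation address.
GEO_TABLE = {"203.0.113.7": (42.605, -5.603)}

def geohash(lat, lon, precision=8):
    """Encode a lat/lon pair as a geohash (standard bit-interleaving)."""
    base32 = "0123456789bcdefghjkmnpqrstuvwxyz"
    lat_range, lon_range = [-90.0, 90.0], [-180.0, 180.0]
    even, bit_count, ch, result = True, 0, 0, []
    while len(result) < precision:
        rng = lon_range if even else lat_range   # alternate lon/lat bits
        val = lon if even else lat
        mid = (rng[0] + rng[1]) / 2
        if val > mid:
            ch = (ch << 1) | 1
            rng[0] = mid
        else:
            ch <<= 1
            rng[1] = mid
        even = not even
        bit_count += 1
        if bit_count == 5:                       # 5 bits per base32 char
            result.append(base32[ch])
            bit_count, ch = 0, 0
    return "".join(result)

def connection_labels(remote_ips):
    # For each known remote IP, produce a geohash label that an
    # exporter could attach to the connection metric.
    return {ip: geohash(*GEO_TABLE[ip], 5) for ip in remote_ips if ip in GEO_TABLE}
```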
User Interface: we are working on an experimental tool for configuring and monitoring Harvest. It can pull the list of all available APIs, objects and counters from your cluster and lets you choose which ones to collect. If you are interested, here is a preview of the prototype:
This won't be ready for the Alpha release (otherwise we would have to delay it once again).
If you have ideas for what more we could do with Harvest, let us know! By the way, you can still join the list of Alpha testers if you want. Just drop me a message with your email and company name.
@vachagan_gratian Thanks for the insight and also for sharing the video about the user interface.
For me it is definitely worth the wait, as it solves a lot of the pain points I currently have.
We have a lot of small scripts that collect single counters and write them into our Graphite instance; the user interface would make them redundant and much easier to manage. I never really got used to the extension manager...
Will you also provide some Grafana dashboards with this Harvest release? If yes, for which DB do you plan this?
I assume covering all DBs is probably too much work. In my opinion, one reason Harvest 1 was so successful was that Chris Madden created a ton of very detailed dashboards, which made it easy to get quick results. After so many years we still use them and have barely had to change anything.