Data migration of grafana-graphite into a new NABOX installation
Because an IT audit had several findings about my manually installed Ubuntu-Harvest-Graphite-Grafana setup, I have installed a new instance with NABOX. Now I am trying to migrate the historical Graphite data to the NABOX. The "built-in" mechanism fails because the source was installed on Ubuntu, so there is no "root" user available, while NABOX uses the root account to connect to the old installation. If I try to migrate the data in the admin console (menu "migrate data") and enter "<non-root-User>@<source IP>:/source/source_path" there, pointing to the "whisper" path, I get a "no such file or directory" error, although the path is correct.
Does anyone of you have experience with manually copying/migrating the historical data from the old, manually installed source to the newly installed NABOX?
Copying the data with SCP works fine, but I don't know which data I have to copy.
I cleared the "graphite data" in the NABOX menu and then copied the content of the whisper directory of the old installation into the cleared whisper directory of the NABOX installation. Looking at the old Grafana and the NABOX, I can see historical data in both installations, but the old one has more counters with historical data. I wonder whether there is another directory to copy, or whether there are different fields in the database, so that the NABOX can't use all the data provided by the old installation?
I've also tried using the graphite.db from the old installation in the NABOX, but that didn't help.
The directory path you need to copy is /opt/graphite/storage/whisper/netapp/. I think you got that right, and this should give you the exact same data history in your new Nabox. Do you maybe have screenshots of the two Grafana instances?
If I remember correctly, graphite.db contains your Graphite configuration, no need to copy it.
ok, then I will try another copy :-)
I've cleared the graphite data once more, because in the old installation I used uppercase system names and in the NABOX lowercase ones, so I had separate counters for the live systems and the historical data.
The path to the whisper directory in the old Grafana is "/var/lib/graphite/whisper", with one subdirectory called netapp and another called carbon. In the last job I copied both directories; do I need the carbon directory?
Is it better to add all the new systems in the NABOX before I run the next copy job, or doesn't it matter whether the directory already exists or not?
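Regarding the uppercase/lowercase mismatch: instead of clearing the history, one could also rename the first-level system directories in the whisper tree to lowercase, so the old counters line up with the new lowercase names. A hedged sketch, not a tested procedure; the path in the comment is just an example:

```shell
# Sketch: rename first-level directories under a whisper tree to lowercase,
# so historical counters match the new lowercase system names. A directory
# is skipped if the lowercase name already exists (live data present).
lowercase_whisper_dirs() {
    base="$1"   # e.g. /opt/graphite/storage/whisper/netapp
    for d in "$base"/*/; do
        [ -d "$d" ] || continue
        name=$(basename "$d")
        lower=$(printf '%s' "$name" | tr '[:upper:]' '[:lower:]')
        if [ "$name" != "$lower" ] && [ ! -e "$base/$lower" ]; then
            mv "$base/$name" "$base/$lower"
        fi
    done
}
```

Note this only touches the first directory level (the system names); metric files below are left as-is.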
I think the carbon folder contains performance metrics of the carbon daemons, you probably don't need those, but it also shouldn't hurt. Having Harvest pollers configured shouldn't affect how Grafana reads your historical data. GL!
After the copy job finished, I had a look at the counters, and I can see the historical data of the NetApps (not all dashboards are tested yet) ... but for about 15 of 39 filers I have no current data?! For some systems the current data is collected, but not for all. If I compare the logs under .../netapp-harvest/logs of one system with and one system without current data, I see no relevant differences.
If I run "netapp-manager -status", all pollers are running; if I run "service harvest status", I see a line like this for each of my storage systems:
|-984 /usr/bin/perl /opt/netapp-harvest/netapp-worker -poller xx-xxxSANxx01 -conf netapp-harvest.conf -confdir /opt/netapp-harvest -logdir /opt/netapp-harvest/log -daemon
but only for 9 of the 39 systems (the last ones defined in the netapp-harvest.conf, including the NetApp OCUM) do I see the following line:
Jan 31 12:56:14 xx-xxxxxxxnew harvest: [STARTED] xx-xxxSANxx01 WHxxx
In all logfiles I can see that an SSL cert is used for the communication with the storage systems (manually configured in the netapp-harvest.conf, the same cert as in the old installation), but this is not the problem, because several systems do have current data.
Currently I'm copying the data from the old to the new system once more ... :-)
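To narrow down which pollers never logged a [STARTED] line, one could compare the sections defined in netapp-harvest.conf with the syslog. A rough sketch; the assumption that poller names appear as [section] headers in the conf and that non-poller sections are named "global" or "default" is mine, so adjust as needed:

```shell
# Sketch: list pollers defined in netapp-harvest.conf that have no
# "[STARTED] <poller>" line in the given log file.
find_unstarted_pollers() {
    conf="$1"    # e.g. /opt/netapp-harvest/netapp-harvest.conf
    log="$2"     # e.g. /var/log/syslog
    grep -o '^\[[^]]*\]' "$conf" | tr -d '[]' \
        | grep -vE '^(global|default)$' \
        | while read -r poller; do
            # -F: match the poller name literally, not as a regex
            grep -qF "[STARTED] $poller " "$log" || echo "$poller"
        done
}
```

Running this against /var/log/syslog after a "service harvest restart" should print exactly the pollers whose workers never came up.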
My plan: shut down the NABOX, create a snapshot of the VM, start the NABOX again, delete all the counters via the option in the admin menu on the CLI, and then have a look after the weekend whether there is current data for all the systems (at this point all systems have current data with the same Harvest config file as before); after that I'll copy the historical data once more.