
harvest issue

I recently installed Harvest based on the ToolChest documentation; however, importing the dashboards fails.

I get the following error, which seems like a permission issue.

 

Grafana is configured on port 443 with a self-signed certificate. I'm not sure which permission is causing the issue.

 

I appreciate your support, thanks.

 

 /opt/netapp-harvest/netapp-manager -import
[OK     ] Will import dashboards to [https://localhost:443]
[OK     ] Dashboard directory is [/opt/netapp-harvest/grafana]
[ERROR  ] Failed to import dashboard [db_netapp-dashboard-cluster-group.json] due to error: 403 Forbidden
[ERROR  ] -Response was :{"message":"Permission denied"}
[ERROR  ] Exiting due to fatal error.

Re: harvest issue

Hi @babukish11

 

I don't think this is SSL-related but rather an invalid API key.  I would generate another API key and paste that new value (including any '=' symbols at the end) into the netapp-harvest.conf file.  Then try the import again.

 

Cheers,
Chris Madden

Solution Architect - 3rd Platform - Systems Engineering NetApp EMEA (and author of Harvest)

Blog: It all begins with data

 

If this post resolved your issue, please help others by selecting ACCEPT AS SOLUTION or adding a KUDO or both!

 

Re: harvest issue

Thanks for the reply.

Here is the key I got from Grafana; even if I delete it and create a new one, it just creates the same key.

However, I tried using the first one and it does not complain about the key (I get 403 Permission denied); then I used the second one (put it in quotes) and it does not like the key (401 Invalid API key).

 

I even tried changing to HTTP; still no luck.

 

#grafana_api_key = eyJrIjoiVzNwcFFLdWJXUzM0YUpUaUFKZWh2c01qN29MNU9rNXAiLCJuIjoibmV0YXBwLWhhcnZlc3QiLCJpZCI6MX0=
grafana_api_key = 'eyJrIjoiUjA0ZGdta29rZzdzNTlFMDFIQzlvQUVBcWhra3lQNEsiLCJuIjoibmV0YXBwLWhhcnZlc3QiLCJpZCI6MX0='

 

 

 /opt/netapp-harvest/netapp-manager -import
[OK     ] Will import dashboards to [http://localhost:3000]
[OK     ] Dashboard directory is [/opt/netapp-harvest/grafana]
[ERROR  ] Failed to import dashboard [db_netapp-dashboard-cluster-group.json] due to error: 403 Forbidden
[ERROR  ] -Response was :{"message":"Permission denied"}
[ERROR  ] Exiting due to fatal error.
-----
 /opt/netapp-harvest/netapp-manager -import
[OK     ] Will import dashboards to [http://localhost:3000]
[OK     ] Dashboard directory is [/opt/netapp-harvest/grafana]
[ERROR  ] Failed to import dashboard [db_netapp-dashboard-cluster-group.json] due to error: 401 Unauthorized
[ERROR  ] -Response was :{"message":"Invalid API key"}
[ERROR  ] Exiting due to fatal error.

 

 

thanks

Re: harvest issue

Hi @babukish11

 

 

The netapp-harvest.conf file format is always key = value.  The key and value are taken exactly as they are, so no quotes, unless you want your password to have a quote in it!  The commented out entry looks like a 'normal' entry that could work.
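To make the quoting point concrete, here is a sketch of what the entry should (and should not) look like in netapp-harvest.conf; the key value below is made up for illustration, so use the one Grafana generated for you:

```
# WRONG: the quotes become part of the key, so Grafana rejects it
#grafana_api_key = 'eyJrIjoiEXAMPLEONLYIn0='

# RIGHT: bare value, exactly as Grafana shows it, including any trailing '='
grafana_api_key = eyJrIjoiEXAMPLEONLYIn0=
```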

 

When you created the API key did you set the role to 'editor' or greater in the Grafana API Key add GUI?
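Incidentally, a Grafana API key of this vintage is just base64-encoded JSON describing the key, which is one way to sanity-check what you pasted. Decoding the key posted above shows its name and id, but note the role is not encoded in the key itself, so it has to be checked in the Grafana GUI:

```shell
# Decode a legacy Grafana API key: it is base64-encoded JSON with the
# secret ("k"), the key name ("n"), and the key id ("id").
# The role is NOT stored in the key, only server-side.
KEY='eyJrIjoiVzNwcFFLdWJXUzM0YUpUaUFKZWh2c01qN29MNU9rNXAiLCJuIjoibmV0YXBwLWhhcnZlc3QiLCJpZCI6MX0='
echo "$KEY" | base64 -d && echo
```

If the decoded output is not valid JSON, the pasted value was mangled (quoted, truncated, or missing its '=' padding).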

 

Cheers,
Chris Madden

Solution Architect - 3rd Platform - Systems Engineering NetApp EMEA (and author of Harvest)

Blog: It all begins with data

 

If this post resolved your issue, please help others by selecting ACCEPT AS SOLUTION or adding a KUDO or both!

 

 

Re: harvest issue

Thank you, that worked


Re: harvest issue

Hello, I've been running Harvest for 1.5 years now with a lot of joy, but my disk is full, so I cannot see any dashboards and after login I get the error "the page isn't redirecting properly".

If I PuTTY to the server and do some basic checks, I see this:

 

root@NL010VNxxxx:~# df -hT
Filesystem                       Type      Size  Used Avail Use% Mounted on
/dev/mapper/NL010VN0530--vg-root ext4       26G   25G     0 100% /
none                             tmpfs     4.0K     0  4.0K   0% /sys/fs/cgroup
udev                             devtmpfs  2.0G  4.0K  2.0G   1% /dev
tmpfs                            tmpfs     396M  680K  395M   1% /run
none                             tmpfs     5.0M     0  5.0M   0% /run/lock
none                             tmpfs     2.0G     0  2.0G   0% /run/shm
none                             tmpfs     100M     0  100M   0% /run/user
/dev/sda1                        ext2      236M   38M  186M  17% /boot
overflow                         tmpfs     1.0M     0  1.0M   0% /tmp


root@NL010VNxxxx:~# lsblk
NAME                              MAJ:MIN RM   SIZE RO TYPE MOUNTPOINT
sda                                 8:0    0    60G  0 disk
├─sda1                              8:1    0   243M  0 part /boot
├─sda2                              8:2    0     1K  0 part
└─sda5                              8:5    0  29.8G  0 part
  ├─NL010VNxxxx--vg-root (dm-0)   252:0    0  25.8G  0 lvm  /
  └─NL010VNxxxx--vg-swap_1 (dm-1) 252:1    0     4G  0 lvm  [SWAP]
sr0                                11:0    1  1024M  0 rom

 

My disk sda is 60 GB and only half used, it seems.

 

As I'm a newbie on Linux, I'm not familiar with sda's etc.

 

My direct question here is: what is the syntax for extending sda5, or even the VG, to use the full 60 GB disk?

 

I'm running on a virtual machine within VMware ESXi 6.

 

Thanks.

Re: harvest issue

Hi @squirrel

 

 

Your issue is the / (root) filesystem is full:

 

/dev/mapper/NL010VN0530--vg-root ext4       26G   25G     0 100% /
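To confirm which directories are actually consuming that space, a generic du sketch like this helps (adjust the depth as needed; in a Harvest setup the bulk is usually under the Graphite whisper directory):

```shell
# Summarize disk usage two levels deep on the root filesystem, largest first.
# -x keeps du from crossing into other mounted filesystems (tmpfs, /boot, ...).
du -x --max-depth=2 / 2>/dev/null | sort -rn | head -20
```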

 

 

Assuming the space is going to Graphite data, you could try deleting old/stale files first.  I have this snippet going into the next Harvest user guide, which might be helpful:

 

================================

6.4 Purging inactive metrics from Graphite [optional]


Graphite does not have an API to purge inactive metrics. As a consequence, if instances on the cluster
(LUNs, volumes, LIFs, etc.) are deleted, the associated metric files are not removed, leading to stale metrics that clutter the
UI forever. As a housekeeping practice, many Graphite administrators configure a cron job that purges
inactive metric files and removes parent directories if they become empty.

 

The following steps can be used to set up the purging of inactive metric files:
1. Log in to the Graphite host and, if not logged in as the root superuser, become root using sudo:
[user@host ~]$ sudo -i

 

2. Add a crontab entry. The following syntax will purge metric files with more than 120 days of inactivity, and any
directories left empty, every Sunday at 00:30:
[root@host ~]# crontab -e

If using an installation from source:

30 0 * * 7 find /opt/graphite/storage/whisper -type f -mtime +120 -name \*.wsp -delete; find /opt/graphite/storage/whisper -depth -type d -empty -delete

or if using Ubuntu package:

30 0 * * 7 find /var/lib/graphite/whisper -type f -mtime +120 -name \*.wsp -delete; find /var/lib/graphite/whisper -depth -type d -empty -delete

================================

 

 

For a one-time action, just become root (sudo -i) and then run the relevant "find ..." command above.
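If you're unsure what those commands will touch, a dry run on a scratch directory shows the behavior before you point them at the real whisper tree (the paths below are purely illustrative):

```shell
# Build a scratch tree: one stale metric file and one fresh one.
dir=$(mktemp -d)
mkdir -p "$dir/cluster1/vol_deleted"
touch -d '200 days ago' "$dir/cluster1/vol_deleted/read_ops.wsp"   # stale
touch "$dir/cluster1/active.wsp"                                    # fresh

# Same predicates as the crontab entry, but with -print instead of -delete,
# so you can review the hit list first:
find "$dir" -type f -mtime +120 -name '*.wsp' -print

# Once the list looks right, swap -print for -delete and add the
# empty-directory pass:
find "$dir" -type f -mtime +120 -name '*.wsp' -delete
find "$dir" -depth -type d -empty -delete

rm -rf "$dir"
```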

 

For help extending filesystems, maybe someone else can help, or you could try stackoverflow.com, which has a broader user base.

 

Cheers,
Chris Madden

Solution Architect - 3rd Platform - Systems Engineering NetApp EMEA (and author of Harvest)

Blog: It all begins with data

 

If this post resolved your issue, please help others by selecting ACCEPT AS SOLUTION or adding a KUDO or both!