How to install Graphite and Grafana

by Extraordinary Contributor on 2015-09-08 06:39 AM

 

Introduction: This guide covers basic installation steps for the open source software Graphite and Grafana. This software is commonly used with OnCommand Performance Manager (OPM) and/or NetApp Harvest, available on the ToolChest.  Although you can find installation instructions for Graphite and Grafana elsewhere on the internet, this guide provides a tested recipe for RHEL 6 & 7 and Ubuntu to get you up and running fast.

 

Step 1:

Learn about the possible solution components Graphite, Grafana, OnCommand Performance Manager, and NetApp Harvest in Chapter 1.

 

Step 2:

Prepare for the installation by determining the hardware requirements, installing the base Linux operating system, and opening firewalls in Chapter 2.

 

Step 3:

Install Graphite and Grafana on Ubuntu using the steps in Chapter 3, or RHEL using the steps in Chapter 4.

 

Step 4:

Verify the installation was successful by executing some verification tests using the steps in Chapter 5.

 

Step 5:

If any issues were encountered, see Chapter 6 for some troubleshooting steps.

 

Comments

Sorry, back again.

 

On filers running ONTAP 8.2, the option "options tls.enable on" worked.

On filers running ONTAP 8.1 you cannot enable TLS. I tried to find out how, but could not.

NetApp Harvest is therefore giving this error:

[2016-02-22 16:07:57] [WARNING] [sysinfo] Update of system-info cache DOT Version failed with reason: in Zapi::invoke failed to connect SSL

 

So, how can I enable TLS on ONTAP 8.1?

 

Regards, Maarten de Boer

 

Extraordinary Contributor

Hi @maartendeboer

 

I'm not sure of your exact error.  If SSL is running (443 is listening) then the lack of TLS support could be causing the failure to connect.  Unfortunately the SSL library errors are not very helpful and I can't do much more in Harvest than just pass on the error the libraries return.  If this is TLS related, and you can't upgrade to a release that has TLS (it was added somewhere in the 8.1 release family), then you can instead use the older, less secure SSL.  To do that you must also load an older NetApp SDK that allows it.  Try SDK 5.3.  Get it from the normal software download page; at the very bottom there is a drop-down box where you can choose the software and release.  After downloading, unpack the Perl lib *.pm files and copy them into the netapp-harvest/lib directory on your poller host.  So basically repeat the instructions from the Harvest admin guide but with the older SDK, and then restart your poller.
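A rough sketch of that swap, assuming the default /opt/netapp-harvest install path and the lib/perl/NetApp layout inside the SDK zip (adjust the names to the files you actually download):

unzip netapp-manageability-sdk-5.3.zip
cp netapp-manageability-sdk-5.3/lib/perl/NetApp/*.pm /opt/netapp-harvest/lib/
service netapp-harvest restart     # restart the poller(s) so the older SDK libraries are loaded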


Cheers,

Chris

Hello Chris,

 

I've tried lib/perl/NetApp/* in netapp-manageability-sdk-5.2.zip and netapp-manageability-sdk-5.3.zip

Still no luck. See:

[2016-03-11 15:18:10] [WARNING] [sysinfo] Update of system-info cache DOT Version failed with reason: in Zapi::invoke failed to connect SSL

 

Do I need to change something in netapp-harvest.conf ?

 

Regards, Maarten de Boer

 

 

 

Extraordinary Contributor

Hi @maartendeboer

 

The error “in Zapi::invoke failed to connect SSL“  comes from the SDK:

 

      Net::SSLeay::connect($ssl) or return $self->fail_response(13001,

           "in Zapi::invoke failed to connect SSL $!")

 

From that SDK code there should be a reason in "$!", but none is provided, making it hard to know why it aborted.

 

I would check/do the following:

1) SSL should be set up using 'secureadmin setup ssl' if it's not already done, and enabled using 'secureadmin enable ssl'.  You could even try regenerating.

2) options httpd.admin.ssl.enable must be enabled using 'options httpd.admin.ssl.enable on'.

 

If still no joy then maybe you can write some small script that uses Net::SSLeay with some additional verbose logging?  Or maybe take a packet trace while it runs to see if some other error is captured in that traffic?  You could also use the openssl CLI to see if there are any differences in the SSL negotiation between working and non-working systems.
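For example, a quick check with the openssl CLI from the poller host (the filer name is a placeholder):

openssl s_client -connect <filer>:443 -tls1     # does a TLS 1.0 handshake succeed?
openssl s_client -connect <filer>:443 -ssl3     # does only legacy SSLv3 work? (requires an openssl build that still offers -ssl3)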

 

Cheers,

Chris

 

 

Hello Chris,

 

I've checked the SSL part and see:

 

> secureadmin setup ssl
SSL Setup has already been done before. Do you want to proceed? [no]

 

> secureadmin status
ssh2 - active
ssh1 - inactive
ssl - active

 

httpd.admin.ssl.enable on

 

This looks OK at the filer level.

 

So, do you have a clue?

 

Regards, Maarten de Boer

 

Extraordinary Contributor

Hi @maartendeboer

 

Sorry, I am out of ideas other than the ones I already mentioned.

 

Cheers,
Chris

Hello Chris & the rest,

 

I've received advice from another third-line Storage Engineer. He advised me to run:

Ontap> secureadmin setup ssl

 

And this worked. 

 

So with SDK 5.2, the SSL connections to filers running ONTAP 8.1 do work after a new "secureadmin setup ssl".

 

Regards, Maarten de Boer

 

Frequent Contributor

Is it possible that the certificate expired and reconfiguring SSL regenerated a new certificate?

Grafana/Graphite is good...

I've been using it for a year.

But it makes me angry that I can't delete old volumes or LUNs from Graphite/Grafana.

Where can I delete these volumes or LUNs? I have many old SnapDrive volumes to drop but no solution.

And it's becoming difficult to administer.

Thanks...

 

Extraordinary Contributor

Hi @ferrant

 

Graphite doesn't have an API to remove [stale] metrics but you can remove them from the filesystem directly.  Here is an example shell command that will remove metric files that have not been updated in the last 120 days, and the parent directory if it's empty.

 

If installed from source:

 

find /opt/graphite/storage/whisper -type f -mtime +120 -name \*.wsp -delete; find /opt/graphite/storage/whisper -depth -type d -empty -delete

 

or if using Ubuntu package:

find /var/lib/graphite/whisper -type f -mtime +120 -name \*.wsp -delete; find /var/lib/graphite/whisper -depth -type d -empty -delete

 

Change the age threshold to something shorter if you want, then add it to cron to run periodically and the stale data will get pruned away automatically.
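For example, a possible crontab entry (via crontab -e) for the source-install path that prunes every Sunday at 03:00 — adjust the path and age to your setup:

0 3 * * 0 find /opt/graphite/storage/whisper -type f -mtime +120 -name \*.wsp -delete; find /opt/graphite/storage/whisper -depth -type d -empty -delete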

 

You can also use Graphite blacklists to drop incoming metrics before they get created.  See here for the docs.  So you could add a blacklist for any known resources you never want to track.  I could add an exclude feature too if that would be helpful, so essentially a blacklist feature at the Harvest level.
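A rough sketch of the carbon blacklist approach (verify against the Graphite docs for your version; the regex is purely illustrative):

# in carbon.conf ([cache] section), enable the feature:
#   USE_WHITELIST = True
# then list regexes of metrics to drop, one per line, in blacklist.conf in the same conf directory, e.g.:
#   ^netapp\.perf\..*\.old_snapdrive_.*
# finally restart carbon-cache so the change takes effect (source install shown):
/opt/graphite/bin/carbon-cache.py stop
/opt/graphite/bin/carbon-cache.py start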

 

Hope this helps!

 

Cheers,
Chris

Thanks.

This is a shame. Nevertheless, I'll settle for it.
I'm going to temporarily give a particular name to each LUN / VOL so the script can delete them.

 

Think about it: an option in Graphite to delete a LUN/VOL when I delete it in my NetApp storage.

 

Regards.

 

Benj.

 

Hi there

I have Grafana+Graphite+Havest running on several servers now.

I would like to have one view via one Grafana server, and I have managed to add the other servers as Data Sources, connecting to the remote Graphite servers.

One issue which is a bit annoying is that I have to go into Data Sources and select "Default" on the source I would like to see.

I can then see the data from that source via the Dashboards...  If I would like to see another Data Source, I have to go back and make it the default source... 

Is there some way this can be avoided?  I guess something has to be changed in the dashboards?

 

I am able to create my own dashboard and graph, where I can select my panel data source...   But I cannot seem to change the NetApp dashboards to point to a specific Data Source?

 

Great solution anyhow, and sadly better than anything NetApp have been able to create ;-)

 

/Beardmann

Extraordinary Contributor

Hi @Beardmann

 

Each panel on a dashboard has a data source, which can be set to use the default, to a user-specified data source, or to multiple, in which case you set the data source for each metrics query.  So you can always create a new dashboard (maybe using 'save as' to clone) and then modify the data sources to the ones you want.

 

There is also the ability to do templated dashboards in Grafana 3.0b6 and newer: https://github.com/grafana/grafana/issues/816.  So with this new feature you could add a new template variable for data sources, and then in each panel set the data source to be from the templated choice.  Scripting something to update the JSON files in the Harvest ./grafana directory, and then importing those updated files, would be the way to go.  Right now I don't plan to do this work, but if you do and share it I can include it in Harvest, or if lots of others ask for the same I could prioritize and add this myself.

 

Hope this helps!

Chris Madden

 

 

Hi again, and thanks for the reply...

 

I can indeed change each panel's data source, but then the Group -> Cluster etc. selections at the top of the Dashboard do not "match" the panel, and seem to be pulled from the default data source.

I can see that in Graph -> Metrics, variables like "$Group, $Cluster, $Node" are used, and then alias(Read), alias(Write), alias(Other), which I guess are defined somewhere.   (This panel is the default node latency.)  So one has to "drill" down and replace all the variables in order to produce one dashboard with multiple data sources?...   Seems like a full day's work to me :-)

Is there another way to maybe "mix" all the Groups, Clusters, etc. together from multiple data sources?

 

/Heino

Extraordinary Contributor

Hi @Beardmann

 

To implement entire dashboards that use a data source you can pick from a dropdown, you would go to the edit menu, then templating, and add a new variable to allow picking of the data source.  Then edit the other variables to use that new templated data source variable.  Then go to each panel and set the data source to be your templated data source.  To do this by hand would indeed be a LOT of effort.  But under the covers it's just some small JSON changes in each dashboard, so if you can figure out what they are you could manipulate the JSON directly via a script.  It would be interesting for me to do but I have other things higher priority right now, like shipping an updated Harvest with Grafana 3.0 optimized views!

 

If you want to make dashboards with some panels with one datasource and other panels from another I think you would need to create/customize these manually according to your needs. 

 

Cheers,
Chris

Just chiming in to say thanks for this solution. My boss wanted me to use Splunk to get all the stats I wanted, but we don't have the Splunk NetApp App right now (it requires a Linux search head and we're using all Windows currently). I talked him into letting me try this solution, which is tricky in my environment because the Infosec people just recently started allowing RedHat in the environment (and no other Linux distros). I'm not a Linux admin, which makes them nervous. Those Infosec guys...always so paranoid, haha!

 

Also wanted to summarize my setup experience so far, for anyone curious or bored.

 

I had to get this all up and running with our VM firewall blocking everything, because the VM has to pass a fairly rigorous security scan before Infosec will allow it to communicate on the network. I therefore followed the RHEL offline instructions.

 

I have never touched RedHat before and only dabbled with Linux 10+ years ago. I do have a few locked-down Linux VMs that I use for various things, but I can't really touch the OS on those VMs. Let's just say I'm a newb that only knows commands like ls, ll, cd, mv, cp. I'd never heard of WHL wheels or RPMs, and had only briefly heard of Python (thanks to Minecraft) and Perl. I really love the tab-to-autocomplete feature. That's a life-saver for newbs like me.

 

I used:

RHEL 7.2

Graphite 0.9.13-pre1

Grafana 2.6.0-1 (decided against using 3.0.x for now)

Netapp Harvest 1.2.2

Latest version of all 10 python packages

Latest version of all the perl modules from CPAN (though I couldn't get Excel Writer or LWP-protocol-https to install from local repo)

 

Netapp FAS8060, 2 nodes in the cluster, running CDoT 8.3.x

 

I really wish I could copy and paste from my local browser into the VM console window. It would have saved LOADS of time. All the typing felt more Linuxy though.

 

 

The RedHat install went smoothly. I chose a few additional things like KDE and monitoring tools. Chose a standard security policy, curious how it will hold up to the security scan that Infosec is going to do.

 

I made an ISO file a few times to get files from my workstation to the VM. This worked pretty well. I was going to create a cert in our CA store, but couldn't figure out a way to get files off of the VM in its current isolated state. My VM isn't completely locked down by the firewall, as I am able to get DNS to resolve my NTP server, and I am able to get Harvest data through the Netapp API. The data from OCUM isn't making it through, though I can ping that server and it's on the same VLAN.

 

 

In the Quickstart Guide:

Section 4, the meaty section, went well without any issues. Clear instructions that didn't leave anything out. Well written, Mr. Madden! The only thing that slowed me down was in 4.3.3, step 3, restarting the httpd service - that was giving me errors. I rebooted the VM and the service started without error and has run fine ever since.

Section 5.1 gave me issues because I didn't realize that those weren't apostrophes around the `date +%s` variable. I don't even know what that fake tick/apostrophe thing is (a backtick, apparently), but it's on the tilde key and I've never used it before, haha. And again, there was no copy & paste available to me, so I was typing all these commands.

 

In the Harvest 1.2.2 Guide:

Section 2.3 - I put versions 6.04 and 6.06 of the LWP https tar.gz file in my local repo, but the install fails every time saying "Error downloading packages. No more mirrors to try." It shows that it's using the LocalRepo, and the files for both Mozilla-CA and LWP-protocol-https are in my LocalRepo, which is /media and is a mount for /dev/cdrom which is my ISO with all the files, but it still fails. This repo worked great for the other tar.gz files, just not these two (attempting to install LWP-https triggers it to install Mozilla-CA, I guess). Also, the Excel Writer has the same problem. I moved on with the setup, hoping these wouldn't hold me back too much.

Sections 3 - 8 were all good.

Section 8.2 - Importing the Grafana dashboards. This was trying to use that LWP https protocol and failing. Since I couldn't get the https protocol to install, I just switched Grafana to http on port 80, edited a couple of config files, restarted Grafana, and did the import successfully. Then I switched it all back to https. Guess I don't need that https protocol for LWP after all.

 

 

Once it started collecting live data, I have a few little things happening in Grafana.

 

The "not authorized" orange errors I'm going to assume have to do with our firewall blocking stuff. I also get a red "dashboard init failed - template variables could not be initialized: Unauthorized" error, which I believe is also a firewall issue since it's capacity data trying to come from OCUM. Will be glad to get the firewall rules set up, but am happy that I can get data from Harvest already.

 

I did manage to figure out how to fix some semi-broken graphs. The Cluster dashboard had a red (!) on 4 graphs, showing a "multiple series" error. Setting it to show 1 TopResource makes the error go away. I noticed other graphs elsewhere that had multiple series in them without error, and found that I just needed to add averageSeries() to the metric. Maybe something to do with Grafana 2.6 versus 2.5 or older versions.

 

Harvest-MultipleSeriesError.jpg


Editing Disk Utilization:

Harvest-DiskUtilFix.jpg


Fixed disk utilization:

Harvest-DiskUtilFixed.jpg

 

On the main Dashboards screen, it doesn't list the Network Port and Volume dashboards, probably because it can only display 10 dashboards, I'm guessing. I starred these 2 so they at least show up on that Dashboards screen.

 

Sorry my summary is so long-winded. This always happens when I post on the internet.

 

Thanks Chris Madden for all your work on this!

 

Extraordinary Contributor

Hi @sssnake2332

 

Glad you got it all working!  The panels that don't load are indeed related to a change made in Grafana 2.5 and newer.  See here for how to fix.  I am also finishing up a new Harvest version which includes dashboard enhancements for features added in Grafana 3.0 and more performance counters so be on the lookout for it!

 

Cheers,

Chris

Hi,

 

I am very new to the NetApp environment but have good experience working on Linux servers. I found it very interesting to set up an open source based real-time monitoring and graphing tool like Graphite and Grafana.

 

I came across this great post very recently and decided to test it out in our environment. I am using RHEL 7.1.

 

Here is my problem:

 

I followed all the steps as illustrated in the quick guide but for some reason I am stuck with Graphite, while Grafana is configured as expected. When I start the Apache web server and try to access the Graphite web login, it gives me "Error 403 - You don't have permission to access / on this server."

 

# ls -ld /opt/graphite
drwxr-xr-x 8 root root 4096 May 17 11:56 /opt/graphite

 

# ls -lrt /opt/graphite
total 28
drwxr-xr-- 4 root root 4096 May 16 02:43 lib
drwxr-xr-- 2 root root 4096 May 16 02:43 bin
drwxr-xr-- 2 root root 4096 May 16 02:43 examples
drwxr-xr-x 4 root root 4096 May 17 12:07 webapp
drwxr-xr-x 2 root root 4096 May 17 12:22 conf
drwxr-xr-- 6 apache apache 4096 May 17 12:27 storage
#

 

In error log file I am getting following:

 

[Tue May 17 12:27:44.776043 2016] [:error] [pid 7699] mod_wsgi (pid=7699): Target WSGI script '/opt/graphite/conf/graphite.wsgi' cannot be loaded as Python module.
[Tue May 17 12:27:44.776098 2016] [:error] [pid 7699] mod_wsgi (pid=7699): Exception occurred processing WSGI script '/opt/graphite/conf/graphite.wsgi'.
[Tue May 17 12:27:44.776121 2016] [:error] [pid 7699] Traceback (most recent call last):
[Tue May 17 12:27:44.776143 2016] [:error] [pid 7699] File "/opt/graphite/conf/graphite.wsgi", line 4, in <module>
[Tue May 17 12:27:44.776196 2016] [:error] [pid 7699] from graphite.wsgi import application
[Tue May 17 12:27:44.776224 2016] [:error] [pid 7699] ImportError: No module named graphite.wsgi
[Tue May 17 12:27:44.782577 2016] [:error] [pid 7700] mod_wsgi (pid=7700): Target WSGI script '/opt/graphite/conf/graphite.wsgi' cannot be loaded as Python module.
[Tue May 17 12:27:44.782694 2016] [:error] [pid 7700] mod_wsgi (pid=7700): Exception occurred processing WSGI script '/opt/graphite/conf/graphite.wsgi'.
[Tue May 17 12:27:44.782785 2016] [:error] [pid 7700] Traceback (most recent call last):
[Tue May 17 12:27:44.782888 2016] [:error] [pid 7700] File "/opt/graphite/conf/graphite.wsgi", line 4, in <module>
[Tue May 17 12:27:44.782991 2016] [:error] [pid 7700] from graphite.wsgi import application
[Tue May 17 12:27:44.783058 2016] [:error] [pid 7700] ImportError: No module named graphite.wsgi
[Tue May 17 12:27:44.783090 2016] [:error] [pid 7697] mod_wsgi (pid=7697): Target WSGI script '/opt/graphite/conf/graphite.wsgi' cannot be loaded as Python module.
[Tue May 17 12:27:44.783120 2016] [:error] [pid 7697] mod_wsgi (pid=7697): Exception occurred processing WSGI script '/opt/graphite/conf/graphite.wsgi'.
[Tue May 17 12:27:44.783142 2016] [:error] [pid 7697] Traceback (most recent call last):
[Tue May 17 12:27:44.783167 2016] [:error] [pid 7697] File "/opt/graphite/conf/graphite.wsgi", line 4, in <module>
[Tue May 17 12:27:44.783223 2016] [:error] [pid 7697] from graphite.wsgi import application
[Tue May 17 12:27:44.783250 2016] [:error] [pid 7697] ImportError: No module named graphite.wsgi
[Tue May 17 12:27:44.788407 2016] [:error] [pid 7698] mod_wsgi (pid=7698): Target WSGI script '/opt/graphite/conf/graphite.wsgi' cannot be loaded as Python module.
[Tue May 17 12:27:44.788438 2016] [:error] [pid 7698] mod_wsgi (pid=7698): Exception occurred processing WSGI script '/opt/graphite/conf/graphite.wsgi'.
[Tue May 17 12:27:44.788459 2016] [:error] [pid 7698] Traceback (most recent call last):
[Tue May 17 12:27:44.788483 2016] [:error] [pid 7698] File "/opt/graphite/conf/graphite.wsgi", line 4, in <module>
[Tue May 17 12:27:44.788536 2016] [:error] [pid 7698] from graphite.wsgi import application
[Tue May 17 12:27:44.788564 2016] [:error] [pid 7698] ImportError: No module named graphite.wsgi
[Tue May 17 12:27:44.789582 2016] [:error] [pid 7701] mod_wsgi (pid=7701): Target WSGI script '/opt/graphite/conf/graphite.wsgi' cannot be loaded as Python module.
[Tue May 17 12:27:44.789662 2016] [:error] [pid 7701] mod_wsgi (pid=7701): Exception occurred processing WSGI script '/opt/graphite/conf/graphite.wsgi'.
[Tue May 17 12:27:44.789724 2016] [:error] [pid 7701] Traceback (most recent call last):
[Tue May 17 12:27:44.789787 2016] [:error] [pid 7701] File "/opt/graphite/conf/graphite.wsgi", line 4, in <module>
[Tue May 17 12:27:44.789920 2016] [:error] [pid 7701] from graphite.wsgi import application
[Tue May 17 12:27:44.789988 2016] [:error] [pid 7701] ImportError: No module named graphite.wsgi

 

 

Here is my virtual host conf file contents:

 

# cat /etc/httpd/conf.d/graphite-vhost.conf
# This needs to be in your server's config somewhere, probably
# the main httpd.conf
# NameVirtualHost *:80

# This line also needs to be in your server's config.
# LoadModule wsgi_module modules/mod_wsgi.so

# You need to manually edit this file to fit your needs.
# This configuration assumes the default installation prefix
# of /opt/graphite/, if you installed graphite somewhere else
# you will need to change all the occurances of /opt/graphite/
# in this file to your chosen install location.

<IfModule !wsgi_module.c>
LoadModule wsgi_module modules/mod_wsgi.so
</IfModule>

# XXX You need to set this up!
# Read http://code.google.com/p/modwsgi/wiki/ConfigurationDirectives#WSGISocketPrefix
WSGISocketPrefix run/wsgi

<VirtualHost *:81>
Header set Access-Control-Allow-Origin "*"
Header set Access-Control-Allow-Methods "GET, OPTIONS"
Header set Access-Control-Allow-Headers "origin, authorization, accept"
#
ServerName graphite
DocumentRoot "/opt/graphite/webapp"
ErrorLog /opt/graphite/storage/log/webapp/error.log
CustomLog /opt/graphite/storage/log/webapp/access.log common

# I've found that an equal number of processes & threads tends
# to show the best performance for Graphite (ymmv).
WSGIDaemonProcess graphite processes=5 threads=5 display-name='%{GROUP}' inactivity-timeout=120
WSGIProcessGroup graphite
WSGIApplicationGroup %{GLOBAL}
WSGIImportScript /opt/graphite/conf/graphite.wsgi process-group=graphite application-group=%{GLOBAL}

# XXX You will need to create this file! There is a graphite.wsgi.example
# file in this directory that you can safely use, just copy it to graphite.wgsi
WSGIScriptAlias / /opt/graphite/conf/graphite.wsgi

Alias /content/ /opt/graphite/webapp/content/
<Location "/content/">
SetHandler None
</Location>

# XXX In order for the django admin site media to work you
# must change @DJANGO_ROOT@ to be the path to your django
# installation, which is probably something like:
# /usr/lib/python2.6/site-packages/django
Alias /media/ "/usr/lib/python2.7/site-packages/django/contrib/admin/media/"
<Location "/media/">
SetHandler None
</Location>

# The graphite.wsgi file has to be accessible by apache. It won't
# be visible to clients because of the DocumentRoot though.

<Directory /opt/graphite/>
Options All
AllowOverride All
Require all granted
</Directory>

</VirtualHost>

#

 

 

I did a lot of searching and checking but I am back to square one. Not able to locate where the issue is.  I will be highly thankful if you can help, based on the experience you have with these tools.

 

Thanks in advance!

 

Deepak

I'm attempting to add a new SAN to our harvest setup.  I have two existing SAN systems up and working in harvest.

 

Currently I do not see how to add the new SAN fully to Graphite.  It appears under Metrics/netapp/capacity but not anywhere else like Metrics/netapp/perf or Metrics/netapp/poller.  Any tips welcome.  Thank you

Extraordinary Contributor

Hi @DeepakKumar

 

Weird, something must be different in your install.  Did you by chance choose a different version of Graphite?  What version of Python are you running (python -V)?  Sometimes the /opt/graphite/storage/log/webapp/error.log file has more verbose info.  Did you check it?  Maybe try and reinitialize the user db:

 

 

django-admin.py syncdb --pythonpath /opt/graphite/webapp --settings graphite.settings 

Hope this helps!

 

Cheers,

Chris

Extraordinary Contributor

Hi @rsr_72

 

Did you start the poller?  Check /opt/netapp-harvest/netapp-manager -status.  Do you see the new system in the list and is it running?  You can start all stopped pollers using /opt/netapp-harvest/netapp-manager -start.  If it's running then next place is to check the logs in /opt/netapp-harvest/log/<pollername>.log to see if there are any errors.  Last thing to check is if you have enough disk space on your graphite server (df -h).
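For reference, those checks in one place (assuming the default /opt/netapp-harvest install path; replace <pollername> with the name from your netapp-harvest.conf):

/opt/netapp-harvest/netapp-manager -status              # is the new system listed and running?
/opt/netapp-harvest/netapp-manager -start               # start any stopped pollers
tail -n 50 /opt/netapp-harvest/log/<pollername>.log     # look for errors
df -h                                                   # enough free space on the graphite server?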

 

Hope this helps!

 

Cheers,

Chris

Thanks for your reply, this seemed to be the fix.  The stats now show.

init.d]# /opt/graphite/bin/carbon-cache.py start

 

I have done something wrong; carbon-cache is not a service, only the manual start works.

 

 init.d]# service carbon-cache status
carbon-cache: unrecognized service

 

Any quick tip on turning that into a service?

 

 

Thanks Chris for your quick reply!

 

Here are the answers:

 

Did you by chance choose a different version of Graphite?  

No, I just followed the document line by line.

 

What version of Python are you running (python -V)?  

Python 2.7.5

 

Sometimes the /opt/graphite/storage/log/webapp/error.log file has more verbose info.  Did you check it?  

Here are the key messages I see:

 

[Wed May 18 02:32:49.729845 2016] [:error] [pid 4000] (13)Permission denied: mod_wsgi (pid=4000, process='graphite', application=''): Call to fopen() failed for '/opt/graphite/conf/graphite.wsgi'.
[Wed May 18 02:32:49.730828 2016] [:error] [pid 3997] (13)Permission denied: mod_wsgi (pid=3997, process='graphite', application=''): Call to fopen() failed for '/opt/graphite/conf/graphite.wsgi'.
[Wed May 18 02:34:03.256513 2016] [core:crit] [pid 4003] (13)Permission denied: [client 10.182.12.40:52913] AH00529: /opt/graphite/.htaccess pcfg_openfile: unable to check htaccess file, ensure it is readabl e and that '/opt/graphite/' is executable
[Wed May 18 02:35:17.046121 2016] [core:crit] [pid 4004] (13)Permission denied: [client 10.182.12.40:52939] AH00529: /opt/graphite/.htaccess pcfg_openfile: unable to check htaccess file, ensure it is readabl e and that '/opt/graphite/' is executable

 

04 2016] [mime_magic:error] [pid 4001] [client 10.182.12.40:60214] AH01512: mod_mime_magic: can't read `/opt/graphite/conf/graphite.wsgi'
[Wed May 18 02:51:38.089719 2016] [mime_magic:error] [pid 4001] [client 10.182.12.40:60214] AH01512: mod_mime_magic: can't read `/opt/graphite/conf/graphite.wsgi'
[Wed May 18 02:51:38.090040 2016] [:error] [pid 3996] (13)Permission denied: [remote 10.182.12.40:120] mod_wsgi (pid=3996, process='graphite', application=''): Call to fopen() failed for '/opt/graphite/conf/ graphite.wsgi'.
[Wed May 18 02:53:34.455444 2016] [mime_magic:error] [pid 4005] [client 10.182.12.40:58601] AH01512: mod_mime_magic: can't read `/opt/graphite/conf/graphite.wsgi'
[Wed May 18 02:53:34.455631 2016] [mime_magic:error] [pid 4005] [client 10.182.12.40:58601] AH01512: mod_mime_magic: can't read `/opt/graphite/conf/graphite.wsgi'
[Wed May 18 02:53:34.455959 2016] [:error] [pid 3997] (13)Permission denied: [remote 10.182.12.40:112] mod_wsgi (pid=3997, process='graphite', application=''): Call to fopen() failed for '/opt/graphite/conf/ graphite.wsgi'.
[Wed May 18 02:54:48.843567 2016] [mime_magic:error] [pid 4044] [client 10.182.12.40:58618] AH01512: mod_mime_magic: can't read `/opt/graphite/conf/graphite.wsgi'

 

After setting read permission it now shows the messages below:

 

57 2016] [:error] [pid 15952] [remote 10.182.12.40:156] mod_wsgi (pid=15952): Target WSGI script '/opt/graphite/conf/graphite.wsgi' cannot be loaded as Python module.
[Wed May 18 04:11:03.202605 2016] [:error] [pid 15952] [remote 10.182.12.40:156] mod_wsgi (pid=15952): Exception occurred processing WSGI script '/opt/graphite/conf/graphite.wsgi'.
[Wed May 18 04:11:03.202640 2016] [:error] [pid 15952] [remote 10.182.12.40:156] Traceback (most recent call last):
[Wed May 18 04:11:03.202685 2016] [:error] [pid 15952] [remote 10.182.12.40:156] File "/opt/graphite/conf/graphite.wsgi", line 5, in <module>
[Wed May 18 04:11:03.202769 2016] [:error] [pid 15952] [remote 10.182.12.40:156] import django
[Wed May 18 04:11:03.202800 2016] [:error] [pid 15952] [remote 10.182.12.40:156] ImportError: No module named django
[Wed May 18 04:11:06.500786 2016] [:error] [pid 15952] [remote 10.182.12.40:160] mod_wsgi (pid=15952): Target WSGI script '/opt/graphite/conf/graphite.wsgi' cannot be loaded as Python module.
[Wed May 18 04:11:06.500823 2016] [:error] [pid 15952] [remote 10.182.12.40:160] mod_wsgi (pid=15952): Exception occurred processing WSGI script '/opt/graphite/conf/graphite.wsgi'.
[Wed May 18 04:11:06.500849 2016] [:error] [pid 15952] [remote 10.182.12.40:160] Traceback (most recent call last):
[Wed May 18 04:11:06.500877 2016] [:error] [pid 15952] [remote 10.182.12.40:160] File "/opt/graphite/conf/graphite.wsgi", line 5, in <module>
[Wed May 18 04:11:06.500915 2016] [:error] [pid 15952] [remote 10.182.12.40:160] import django

 

 

 

Maybe try and reinitialize the user db:

 

django-admin.py syncdb --pythonpath /opt/graphite/webapp --settings graphite.settings 

==> Tried this step already but no luck. It runs without any issue.

 

# django-admin.py syncdb --pythonpath /opt/graphite/webapp --settings graphite.settings
Creating tables ...
Creating table account_profile
Creating table account_variable
Creating table account_view
Creating table account_window
Creating table account_mygraph
Creating table dashboard_dashboard_owners
Creating table dashboard_dashboard
Creating table events_event
Creating table url_shortener_link
Creating table auth_permission
Creating table auth_group_permissions
Creating table auth_group
Creating table auth_user_user_permissions
Creating table auth_user_groups
Creating table auth_user
Creating table django_session
Creating table django_admin_log
Creating table django_content_type
Creating table tagging_tag
Creating table tagging_taggeditem

You just installed Django's auth system, which means you don't have any superusers defined.
Would you like to create one now? (yes/no): yes
Username (leave blank to use 'root'): admin
E-mail address: deepak_kumar@company.com
Password:
Password (again):
Superuser created successfully.
Installing custom SQL ...
Installing indexes ...
Installed 0 object(s) from 0 fixture(s)
root@lctcve0058:/opt/graphite/webapp/graphite[root@lctcve0058 graphite]#

 

 

@madden

 

Looking forward to your new Harvest stuff. I'll go update to Grafana 3.02 now. My VM still hasn't been christened by Infosec to talk to the network, haha, so I have to create and mount a new ISO to get files to the VM.

 

I'm in the process of setting up something of a NOC (Network Operations Center) with a couple projectors and 100" screens. Grafana is going to get at least a quadrant or a third of one screen, in slideshow mode. I could not accomplish this with OC PM & UM, I'm sure, so I'm very grateful.

Extraordinary Contributor

Hi @DeepakKumar

 

After running the django-admin command did you reset permissions again so apache can read?

# chown -R apache:apache /opt/graphite/storage 

 

And also bounce apache twice  (on first start up after modifying users a sqlite db will be created):

# service httpd restart;sleep 15; service httpd restart 

 

If this still doesn't work I would probably just re-install from scratch, because I honestly think some step got missed, or some command was pasted incorrectly, and troubleshooting will cost you more time than a fresh install.

 

Cheers,
Chris

I ran a Nessus scan on this VM and wanted to share a couple things that I did to harden security up to Nessus' standards.

 

  • SSH weak ciphers allowed - Added Ciphers aes128-ctr,aes192-ctr,aes256-ctr to /etc/ssh/sshd_config, which omits the vulnerable ciphers
  • SSH weak MACs allowed - Added MACs hmac-sha1,hmac-ripemd160 to /etc/ssh/sshd_config, which omits the vulnerable MACs
  • SSL cert - Self-signed isn't good enough, need to generate one from your CA if you have one. Be sure to make it 2048 bits or higher, and include the hostname in either the common name or the alt subject name. I used this page for a guide to exporting the CRT and KEY from a PFX. Use the decrypted key in your grafana.ini, not the encrypted key.
  • HTTP trace or track allowed - Added TraceEnable off to /etc/httpd/conf/httpd.conf to remediate the vulnerability
  • (optional) IPv6 - Nessus didn't flag this as a vulnerability, but our Infosec guys want it disabled

These changes fixed the 4 or 5 vulnerabilities that I could address. There are still 6 criticals, 8 highs, and 19 mediums, but they are all resolved by RedHat updates that I am still unable to download because Infosec is trying to decide whether they should open the 3 URLs for the public RHEL updates, or build up a local repo with a Satellite server or whatever it's called (RHEL's version of WSUS, which I haven't looked into yet).
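For anyone following along, a hedged sketch of those edits (RHEL 7 default paths; back the files up first and restart the affected services afterwards):

echo 'Ciphers aes128-ctr,aes192-ctr,aes256-ctr' >> /etc/ssh/sshd_config
echo 'MACs hmac-sha1,hmac-ripemd160' >> /etc/ssh/sshd_config
echo 'TraceEnable off' >> /etc/httpd/conf/httpd.conf
systemctl restart sshd httpd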

 

 

 

Also wanted to share this command, if it hasn't been shared somewhere already, for resetting retention values in the Whisper files:

 

find ./ -type f -name '*.wsp' -exec whisper-resize.py --nobackup {} 60s:30d 5m:180d 1h:3y \;

 

That command will fix all .wsp files in the folders beneath where you run it. It might take a while since there are probably lots of files, especially in the netapp perf folder.

I have a problem that hopefully someone can help with.

 

The netapp-harvest service isn't starting on boot automatically. I start it manually and it works fine.

I did the chkconfig command and netapp-harvest shows up. I don't know what I'm doing, but I changed the run levels 0 and 1 to "on", but that didn't do anything. Level 6 is still "off".

 

In the log file /var/log/messages, I found this:

 

Harvest-service-not-starting.jpg

 

The /etc/rc.d/init.d/netapp-harvest file:

Harvest-init.d.jpg

 

The log files under /opt/netapp-harvest/log  don't show any info prior to me starting the service manually.

 

I should probably change the run levels back for 0 and 1 to "no".


Extraordinary Contributor

Hi @sssnake2332

 

 

Can you add the line below as the first line in the /etc/init.d/netapp-harvest file and see if it autostarts on reboot?

#!/bin/bash

Another user reported this was required on RH but I don't have a system where I can verify it.

 

Thanks!

Chris

@madden

 

Thanks Chris, that fixed it for me. It autostarts now, yay!

 

 

 

Also, just wanted to post a few things I had to do to address some detected security vulnerabilities from a Nessus scan.

 

  • SSH weak ciphers allowed - Added Ciphers aes128-ctr,aes192-ctr,aes256-ctr to /etc/ssh/sshd_config, which omits the vulnerable ciphers
  • SSH weak MACs allowed - Added MACs hmac-sha1,hmac-ripemd160 to /etc/ssh/sshd_config, which omits the vulnerable MACs
  • SSL cert - Self-signed isn't good enough, need to generate one from your CA if you have one. Be sure to make it 2048 bits or higher, and include the hostname in either the common name or the alt subject name. I used a guide that I tried to hyperlink here that shows exporting the CRT and KEY from a PFX, but this post keeps auto-deleting and I suspect it is due to the hyperlink, so I am leaving it out. Use the decrypted key in your grafana.ini, not the encrypted key.
  • HTTP trace or track allowed - Added TraceEnable off to /etc/httpd/conf/httpd.conf to remediate the vulnerability
  • (optional) IPv6 - Nessus didn't flag this as a vulnerability, but our Infosec guys want it disabled

These changes fixed the 4 or 5 vulnerabilities that I could address. There are still 6 criticals, 8 highs, and 19 mediums, but they are all resolved by RedHat updates that I am still unable to download because Infosec is trying to decide whether they should open the 3 URLs for the public RHEL updates, or build up a local repo with a Satellite server or whatever it's called (RHEL's version of WSUS, which I haven't looked into yet).

 

 

 

Also wanted to share this command, if it hasn't been shared somewhere already, for resetting retention values in the Whisper files:

 

find ./ -type f -name '*.wsp' -exec whisper-resize.py --nobackup {} 60s:30d 5m:180d 1h:3y \;

That command will fix all .wsp files in the folders beneath where you run it. It might take a while since there are probably lots of files, especially in the netapp perf folder.

Hi Chris,

 

 

I finally managed to find the real cause of all these permission issues in my case :)  It turned out that our Linux golden image had a default umask of 0077, which caused a lot of changes in file permissions.

 

 

Last night I started all over from scratch and set the umask to 0022, and as expected it worked for me; all the steps worked like a charm and there were no more issues.
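For anyone hitting the same thing, a quick way to check and adjust it for the install session (make it permanent via /etc/profile.d or the image build if needed):

umask          # shows the current value, e.g. 0077 on our golden image
umask 0022     # use this in the install shell so created files stay readable by apache and other services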

 

 

I will try to explore the Harvest tool as my next step and integrate it with some of my cDOT and 7-Mode filers; please share if you have any good references to start with.

 

 

Thanks a lot for all your time and suggestions, I appreciate it!

 

 

 

Regards,

Deepak

 

 

Hi All,

 

 

I have a POC system up and running with Graphite and Grafana, with one cDOT cluster and a few lab filers from our environment added. I need some help from the experts here on the following:

 

  1. While testing I keyed in different names for the filers in "/opt/netapp-harvest/netapp-harvest.conf", and each time I started the tests the Grafana node lists got populated with all those names, so now there are lots of duplicates and unwanted names in the list. Is there any quick way to perform a cleanup so that I only see the latest names as per the harvest.conf file?
  2. Can someone please share how you configured netapp-harvest.conf to use OnCommand Unified Manager (OCUM) to pull the data for 7-Mode and cDOT filers?

      

Here is the sample output:

 

#====== Polled host setup defaults ============================================
host_type = FILER
host_port = 443
host_enabled = 1
template = default
data_update_freq = 60
ntap_autosupport = 0
latency_io_reqd = 10
auth_type = password
username = netapp-harvest
password = XXXXXXX
ssl_cert = INSERT_PEM_FILE_NAME_HERE
ssl_key = INSERT_KEY_FILE_NAME_HERE


##
## Monitored host examples - Use one section like the below for each monitored host
##

#====== 7DOT (node) or cDOT (cluster LIF) for performance info ================
#
[lab001a]
hostname = <IP Address>
site = US
[lab001b]
hostname = <IP Address>
site = US

##
[CDOTLab]
hostname = cdotlab001
site = US

#====== OnCommand Unified Manager (OCUM) for cDOT capacity info ===============
#
[OCUMHost]
hostname = <IP Address>
site = US
host_type = OCUM
data_update_freq = 900
normalized_xfer = gb_per_sec

 

 

Thanks in advance!

 

Deepak

 

Extraordinary Contributor

Hi @DeepakKumar

 

To remove unwanted entries from Grafana you simply need to remove the underlying metric files from the Graphite server.  So log in to the Graphite server, cd into the data directory (typically /var/lib/graphite/whisper or /opt/graphite/storage/whisper), and you will see netapp and netapp7.  If you cd into netapp you will see the sites for your clusters; cd into the site name and you see the clusters.  The same goes for 7-Mode, but that hierarchy begins with netapp7.  Simply delete the files/directories to clean them up.
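For example, a hedged one-liner to remove a stale cluster named oldcluster everywhere under the data directory (the name is a placeholder; use /var/lib/graphite/whisper instead for the Ubuntu package install):

cd /opt/graphite/storage/whisper
find . -depth -type d -name oldcluster -exec rm -rf {} +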

 

For an example and more discussion on the OCUM integration, which is for OCUM 6.x only (cDOT only), check this other post.

 

Cheers,
Chris Madden

Storage Architect, NetApp EMEA (and author of Harvest)

Blog: It all begins with data

 

If this post resolved your issue, please help others by selecting ACCEPT AS SOLUTION or adding a KUDO or both!

 

Hi Chris,

 

Thanks a ton!

 

I managed to do the cleanup as you guided and it's all good now :)

 

I just checked my OCUM and it is not yet version 6, so I will check on this further.

 

 

Regards,

Deepak

 

 

 

Hi Chris,

 

do you have a timeline for when a new version of Harvest and the dashboards will be released?

I have this document for the Graphite/Grafana installation - Graphite_Grafana_Quick_Start_v1.4.pdf.

Is this the current one or does a newer version exist?

 

Best Regards,

Klaus

Extraordinary Contributor

Hi @klmi

 

The Quick Start 1.4 guide and Harvest 1.2.2 are the latest.  I am working on the Harvest update and can say it's nearly done.  I expect it will be posted to the ToolChest within a month.

 

Cheers,
Chris Madden

Hi all,

 

I patched the server and seem to have lost carbon-cache.  Any ideas or tips?  Thank you!

 

]# /opt/graphite/bin/carbon-cache.py start
Traceback (most recent call last):
  File "/usr/lib64/python2.6/site-packages/twisted/python/usage.py", line 373, in <lambda>
    fn = lambda name, value, m=method: m(value)
  File "/usr/lib64/python2.6/site-packages/twisted/application/app.py", line 566, in opt_reactor
    installReactor(shortName)
  File "/usr/lib64/python2.6/site-packages/twisted/application/reactors.py", line 79, in installReactor
    for installer in getReactorTypes():
  File "/usr/lib64/python2.6/site-packages/twisted/plugin.py", line 200, in getPlugins
    allDropins = getCache(package)
--- <exception caught here> ---
  File "/usr/lib64/python2.6/site-packages/twisted/plugin.py", line 165, in getCache
    provider = pluginModule.load()
  File "/usr/lib64/python2.6/site-packages/twisted/python/modules.py", line 380, in load
    return self.pathEntry.pythonPath.moduleLoader(self.name)
  File "/usr/lib64/python2.6/site-packages/twisted/python/reflect.py", line 456, in namedAny
    topLevelPackage = _importAndCheckStack(trialname)
  File "/usr/lib64/python2.6/site-packages/twisted/plugins/twisted_lore.py", line 4, in <module>
    from twisted.lore.scripts.lore import IProcessor
  File "/usr/lib64/python2.6/site-packages/twisted/lore/scripts/lore.py", line 8, in <module>
    from twisted.lore import process, indexer, numberer, htmlbook
  File "/usr/lib64/python2.6/site-packages/twisted/lore/process.py", line 7, in <module>
    import tree #todo: get rid of this later
  File "/usr/lib64/python2.6/site-packages/twisted/lore/tree.py", line 15, in <module>
    from twisted.web import domhelpers
  File "/usr/lib64/python2.6/site-packages/twisted/web/__init__.py", line 14, in <module>
    from twisted.python.deprecate import deprecatedModuleAttribute
exceptions.ImportError: cannot import name deprecatedModuleAttribute
Traceback (most recent call last):
  File "/usr/lib64/python2.6/site-packages/twisted/application/app.py", line 669, in parseOptions
    usage.Options.parseOptions(self, options)
  File "/usr/lib64/python2.6/site-packages/twisted/python/usage.py", line 226, in parseOptions
    for (cmd, short, parser, doc) in self.subCommands:
  File "/usr/lib64/python2.6/site-packages/twisted/application/app.py", line 679, in subCommands
    for plug in plugins:
  File "/usr/lib64/python2.6/site-packages/twisted/plugin.py", line 200, in getPlugins
    allDropins = getCache(package)
--- <exception caught here> ---
  File "/usr/lib64/python2.6/site-packages/twisted/plugin.py", line 165, in getCache
    provider = pluginModule.load()
  File "/usr/lib64/python2.6/site-packages/twisted/python/modules.py", line 380, in load
    return self.pathEntry.pythonPath.moduleLoader(self.name)
  File "/usr/lib64/python2.6/site-packages/twisted/python/reflect.py", line 456, in namedAny
    topLevelPackage = _importAndCheckStack(trialname)
  File "/usr/lib64/python2.6/site-packages/twisted/plugins/twisted_lore.py", line 4, in <module>
    from twisted.lore.scripts.lore import IProcessor
  File "/usr/lib64/python2.6/site-packages/twisted/lore/scripts/lore.py", line 8, in <module>
    from twisted.lore import process, indexer, numberer, htmlbook
  File "/usr/lib64/python2.6/site-packages/twisted/lore/process.py", line 7, in <module>
    import tree #todo: get rid of this later
  File "/usr/lib64/python2.6/site-packages/twisted/lore/tree.py", line 15, in <module>
    from twisted.web import domhelpers
  File "/usr/lib64/python2.6/site-packages/twisted/web/__init__.py", line 14, in <module>
    from twisted.python.deprecate import deprecatedModuleAttribute
exceptions.ImportError: cannot import name deprecatedModuleAttribute

 

Capture.JPG

 

 

Frequent Contributor

How did you install Graphite?

It has been a while...  Followed the build document as closely as I could

 

Steps directly from the Graphite section of: "NetApp Quick Start: Installing Graphite and Grafana", Christopher Madden, NetApp, 7 September 2015

 

 

661  cd /opt/netapp-harvest/
  662  ls
  663  cd netapp-manager
  664  pwd
  665  ps -ef |grep netapp
  666  cd /opt
  667  cd netapp-harvest/
  668  ls
  669  cat netapp-harvest.conf
  670  vi netapp-harvest.conf
  671  service netapp-harvest status
  672  grep restart history
  673  history
  674  service netapp-harvest restart
  675  cd /etc/grafana
  676  ls
  677  cd /opt/netapp-harvest/
  678  ls
  679  cd grafana
  680  ls
  681  shutdown -r now
  682  df -h
  683  service netapp-harvest status
  684  pwd
  685  cd /opt/netapp-harvest/
  686  ls
  687  vi netapp-harvest.conf
  688  cd /etc/carbon
  689  cd /opt/graphite
  690  ls
  691  cd conf
  692  ls
  693  pwd
  694  vi storage-schemas.conf
  695  cd /opt/netapp-harvest/

 

Thank you

Our install is working, not sure why it went down for a while.

 

Capture.JPG

Thank you, it works. But has anybody used Prometheus instead of Graphite? Are there ready-made rules?

Extraordinary Contributor

Hi @avinchakov

 

There are numerous time series databases (TSDBs) out there.  If Prometheus can accept Graphite metrics (I think it can, from this) then you can certainly get Harvest-sourced metrics into it.  Grafana also supports Prometheus so you can use it to view your metrics.  The biggest downside will be that you get no default dashboards, since these assume/rely on Graphite as your TSDB.

 

Cheers,
Chris Madden

I would like to exclude the Graphite server. I saw that someone asked about using something other than Graphite, and that someone has already done it.

Does any support for ONTAP cDOT version 9 exist for Harvest yet?

 

netapp-harvest 1.2.2 does not include a template for cDOT v9 and won't start.

 

 

The 1.3X2 beta supports ONTAP 9.
Frequent Contributor
1.3 isn't available yet for download but should be shortly.

Tip for using Graphite/Grafana with 7-Mode.

 

Some people have had a lot of issues adding 7-Mode controllers. We've discovered a simple fix; it's only a TLS/SSL issue.

 

You have to regenerate the SSL keys with the SSL setup CLI process ('secureadmin setup ssl'), and choose to generate a 2048-bit certificate. The default one does not work.
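Roughly, the 7-Mode console sequence looks like this (hedged; prompts vary by release — accept the setup questions but pick a 2048-bit key length when asked):

filer> secureadmin setup ssl
filer> secureadmin enable ssl
filer> options tls.enable on     # only on releases that support TLS, per the earlier comments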

 

 
 
 

We've run into the ONTAP 9.0 issue too...... 

 

I had a look in the logs (located in /opt/netapp-harvest/log/  in our install on RHEL) and found the following entries 

 

[2016-10-03 10:40:46] [NORMAL ] [main] Collection of system info from [<cluster-name>] running [NetApp Release 9.0] successful.
[2016-10-03 10:40:46] [ERROR  ] [main] No best-fit collection template found (same generation and major release, minor same or less) found in [/opt/netapp-harvest/template/default].  Exiting;

Having a look in /opt/netapp-harvest/template/default I found a number of version-specific conf files, but NOT one for v9.0 - so I copied cdot-8.3.0.conf to cdot-9.0.0.conf, which gives us the common counters.
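The workaround described above as commands (unofficial — ONTAP 9 isn't supported by Harvest 1.2.2, so expect the warnings shown below):

cd /opt/netapp-harvest/template/default
cp cdot-8.3.0.conf cdot-9.0.0.conf
service netapp-harvest restart     # restart the pollers so the copied template is picked up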

 

The logs *do* have a lot of warnings like:

[2016-10-24 15:16:06] [WARNING] No counter metadata found for: [nic_common][tx_bytes_per_sec]; check if valid counter for this DOT release
[2016-10-24 15:16:06] [WARNING] No counter metadata found for: [nic_common][rx_bytes_per_sec]; check if valid counter for this DOT release

but those seem to be the only ones.

 

If they're the only things that aren't coming through, we can live with that until the next version of Harvest is released :)

 

The rest of the counters seem to be where we're expecting them.

 

ONTAP 9 is not supported yet. See the comments above.

Hi Chris,

is Harvest/Grafana able to ingest historical perfstat or stats archive type data and output the charts for the individual disk, network, LIF, etc.? The dashboards seem so "feature rich" it seems as though you'd get so much more than latx or cmpg. Sorry if you've been asked this question before.

Thanks.

Bello
