Active IQ Unified Manager Discussions

Customer Performance Database is huge

emanuel
3,769 Views

Is there a way to purge or trim DFM databases?

It is taking over three hours to back up, and it is 77 GB in size.

7 REPLIES

emanuel
3,770 Views

During an upgrade today, a warning appeared saying the upgrade can take hours with a larger database.

Curiosity is ... what defines a larger database? Our database is 4.7 GB; our performance directory totals 77 GB.
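A quick way to see how that space splits, as a minimal sketch assuming a default Linux layout (the perfdata path below is an assumption; adjust to your install):

# compare the core database directory with the PA performance data
# (the perfdata path is assumed; adjust to your layout)
du -sh /dfm/data /dfm/perfdata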

smoot
3,769 Views

I think what you're asking for is to reduce the size of the Performance Advisor data. I do not know how to do that.

I wouldn't worry about the database file itself (the 4.7 GB one). Almost certainly most of the three hours is spent on the PA files. But if you want to try it, use "dfm database reload". This will also take a substantial amount of time, because it dumps the entire database contents into a text file and then reloads it. I don't know how fast the DFM server is, but I'd plan on a few hours rather than a few minutes. You could restore a DFM backup on a test system to try it out.
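A minimal sketch of that dry run, assuming the dfm backup subcommands behave the same on your release (the backup name is a placeholder):

# take a fresh backup of production first
dfm backup create
# restore it on the test system, then time the reload there
dfm backup restore <backup-name>
dfm database reload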

-- Pete

emanuel
3,769 Views

That's the command; I remember hearing about it from Juniper's OM/PM work.

Here are our upgrade stats:

16 GB RAM

Dual Xeons; Red Hat.

26 minutes to upgrade from 3.7.x to 3.8.1

Also, here is something curious that happened on the file system of the DFM host: the monitordb.dbR and monitordb.logR files were removed. What are these?

before:

qct-dfm-sdc{506}$ ls -l /dfm/data

total 8883848

-rw------- 1 root bin 5002133504 May 20 15:19 monitordb.db

-rw------- 1 root bin 4046815232 May 20 15:23 monitordb.dbR

-rw------- 1 root bin   38862848 May 20 15:18 monitordb.log

-rw-r--r-- 1 root bin     327680 May 20 15:23 monitordb.logR

qct-dfm-sdc{507}$ du -sh *

4.7G    monitordb.db

3.8G    monitordb.dbR

38M     monitordb.log

328K    monitordb.logR

qct-dfm-sdc{508}$

after:

qct-dfm-sdc{552}$ pwd

/dfm/data

qct-dfm-sdc{553}$ ls -l

total 4830092

-rw------- 1 root bin 4926939136 May 20 15:53 monitordb.db

-rw------- 1 root bin   14221312 May 20 15:53 monitordb.log

qct-dfm-sdc{554}$

sanjyoth
3,769 Views

The reload happens in 3 steps, according to the Sybase documentation:

1) Create a new database with the same settings as the old database

2) Reload it

3) Replace the old database.

When the new database is created in step 1, the dbspace file names have an R appended to prevent file-name conflicts in case the dbspace for the new database is created in the same directory as the dbspace for the original database. That's why you are seeing the monitordb.dbR and monitordb.logR files. These are renamed to monitordb.db and monitordb.log, respectively, in step 3.
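If you want to watch step 2 in flight, a small sketch assuming the default /dfm/data location from the listings above:

# the new dbspace grows while the old one stays in place;
# watch the R-suffixed files until step 3 swaps them in
watch -n 60 'ls -lh /dfm/data/monitordb.dbR /dfm/data/monitordb.logR'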

Regards

Sanjyoth

emanuel
3,769 Views

Okay ... works for me.

As for putting the DB on a diet ... we will hold off, since the DFM software update only took 26 minutes.

Now ... would the command help reduce the footprint of the collected performance data? At some point it is going to overrun the storage partition it lives in ... what is the best way to mitigate this issue? Example: if I have two years of perf data, can I shave off the oldest year?

amiller_1
3,770 Views

This is very nicely handled in Operations Manager 4.0 / Performance Advisor / NetApp Management Console 3.0 (it may be in earlier versions, but I'm not sure).

Log in to the NetApp Management Console --> Performance Advisor --> "Set Up" button --> Hosts --> click on the host --> choose the "Data Collection" tab. That shows you each counter, its retention period, the current used space, the projected space, etc. You can then adjust the retention periods for any of the counters (or just stop gathering certain counters) ... quite nicely done overall.

reide
3,770 Views

Ask your customer if they truly need to keep 2 years' worth of performance data online within Performance Advisor. If not, consider exporting the historical performance data to CSV flat files and keeping only 3-6 months' worth of performance data on the PA server. You can also configure DFM to export performance data automatically on a schedule. This would allow you to keep performance data for years without clogging up your perfdata folder. It would also make your DFM database backups much smaller and faster.

http://communities.netapp.com/docs/DOC-1218
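For the manual half of that, a hedged sketch assuming the dfm perf data retrieve subcommand on your release (the object and counter names are placeholders; check the command's help for exact flags, and see the document above for the scheduled export):

# dump one counter's history to a flat file for offline retention
# (object and counter names are placeholders; flags vary by DFM release)
dfm perf data retrieve -o filer1 -C read_ops > filer1_read_ops.csv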
