Active IQ Unified Manager Discussions
Hello all
I have a couple of customers that are concerned with growing data in DFM, is there a way to limit the retention of data in DFM?
"I only want to keep one year's worth."
Turn on your job purge.
Reduce your eventsPurgeInterval.
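For example (a sketch only; the unit syntax for the value is an assumption, so check the option documentation first), the current value could be checked and lowered like this:
dfm option list eventsPurgeInterval
dfm option set eventsPurgeInterval=26weeks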
Regards
I found eventsPurgeInterval under dfm option list and it's set to 25.71 weeks.
I also found the dfm purgehistory command, but it looks very surgical and I am not sure I want to expose it to a customer; is this the job purge you were talking about?
Hi Emanuel,
The main use case for purgehistory is removing spikes from the history graphs so that your trending stays accurate.
I was talking about the option below.
[root@lnx186-118 ~]# dfbm option list
Option Value
---------------------------- ------------------------------
backupDirMonInterval 8 hours
backupScriptRunAs
discoverNdmp Enabled
ndmpMonInterval 30 minutes
purgeJobsOlderThan off
snapvaultLagErrorThreshold 2 days, 0:00
snapvaultLagWarningThreshold 1 day, 12:00
[root@lnx186-118 ~]#
This will purge all dfbm, dfdrm and dfpm jobs older than the number of days specified.
It will reduce your db size only if Backup Manager, Disaster Recovery Manager, or Protection Manager is used.
Otherwise it will be of no help.
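As a rough example (the option appears to take a number of days, but the exact value format is an assumption, so verify it before setting), turning on a 90-day job purge might look like:
dfbm option set purgeJobsOlderThan=90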
Regards
adai
Okay, so it looks like there is no universal retention setting to control the overall size of a DFM database (not a size in GB per se, but an age-based size).
I have been advising my customers to "smartly" provision sufficient storage for DFM. In one case, the local volume filled up and DFM stopped collecting new information.
There is no way to control the overall size of the db, but there is a way to monitor the space, using the events below.
dfm eventtype list | grep -i "dfm.free.space"
management-station:enough-free-space Normal dfm.free.space
management-station:filesystem-filesize-limit-reached Error dfm.free.space
management-station:not-enough-free-space Error dfm.free.space
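If you want to be warned before the DFM volume fills, you could attach an alarm to one of these events. The flags and the email address below are assumptions from memory rather than verified syntax, so check the alarm command help before using them:
dfm alarm create -h management-station:not-enough-free-space -E storage-admins@example.com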
Regards
adai
Hello Adai, all
I have a couple of clients that are going to start getting aggressive with trimming data. Although the monitor.db / log files cannot be changed in size, they are looking at the performance directory and want to purge any files / directories over one to two years old. I have been advising against it, but I understand their concerns, as the perf directory seems to be growing at a rate of 10 GB a month and is pushing against a hard capacity limit on the server.
Is there a way to control the size of the perf directory?
Yes, we have talked about moving to an iSCSI LUN, but even that has a limit, and even with compression and dedup possibilities, all the magic in the world cannot grow a 173 GB partition any larger. Their current DFM install is around 120 GB right now, and in six months I expect the 173 GB disk to be full. We have reduced DFM backups to once a day and evaluated each monitored controller to see if we really need perf stats collected ... yet the install base grows and grows (slowly).
Will new versions of DFM (OnCommand, etc.) introduce hard caps on database growth?
thank you, emanuel
I have a couple of clients that are going to start getting aggressive with trimming data. Although the monitor.db / log files cannot be changed in size, they are looking at the performance directory and want to purge any files / directories over one to two years old. I have been advising against it, but I understand their concerns, as the perf directory seems to be growing at a rate of 10 GB a month and is pushing against a hard capacity limit on the server.
The abnormal growth in perf data may be due to deleted objects not being purged.
You might be a victim of bug 439756. Open an NGS case and clean up.
Is there a way to control the size of the perf directory?
No. But you can control how long you want to retain the perf data for each counter group and each individual filer, using the CLI or the NMC.
dfm perf data modify [ -f ] -G <counter-group-name> [ -o <host-name-or-id> ]
[ -s <sample-rate> ] [ -r <retention-period> ]
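A concrete invocation based on the syntax above might look like the following; the counter group name, host name, and retention value are only placeholders, and the accepted retention format is an assumption:
dfm perf data modify -G volume -o filer01 -r 1y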
Yes, we have talked about moving to an iSCSI LUN, but even that has a limit, and even with compression and dedup possibilities, all the magic in the world cannot grow a 173 GB partition any larger. Their current DFM install is around 120 GB right now, and in six months I expect the 173 GB disk to be full. We have reduced DFM backups to once a day and evaluated each monitored controller to see if we really need perf stats collected ... yet the install base grows and grows (slowly).
Will new versions of DFM (OnCommand, etc.) introduce hard caps on database growth?
No. But if you would like to trim your db, open a case against the bug below to see if you can clean up some history tables.
Bug 447658
Regards
adai
Adai - good to hear from you.
I will look into these burts you mentioned; also, we are running 4.0D2 with plans to go to 4.0.1, which should cover the burts (hopefully).
If I limit a counter group, is it global to all objects in DFM?
If I limit controllers, can I apply it to groups of controllers (run the command against a DFM group)?
Emanuel
I will look into these burts you mentioned; also, we are running 4.0D2 with plans to go to 4.0.1, which should cover the burts (hopefully).
No. These burts are not fixed in 4.0.1.
If I limit a counter group, is it global to all objects in DFM?
As I said earlier, no; as you can see in the CLI, the setting is per host.
If I limit controllers, can I apply it to groups of controllers (run the command against a DFM group)?
Yes, attached is a screenshot example from the NMC.
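If you would rather script it, one rough CLI alternative is to loop the per-host command over a list of controllers; the hosts.txt file, the counter group name, and the retention value below are all hypothetical:
# hosts.txt holds one controller name per line (hypothetical file)
for h in $(cat hosts.txt); do
    dfm perf data modify -G volume -o "$h" -r 1y
done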
Regards
adai
Hi emanuel,
You might want to check out the following (internal) KB:
https://kb.netapp.com/support/index?page=content&id=1011879
It describes how to identify and delete stale Performance Advisor data files.
It's Sisyphean work, but maybe worth it. Even within my demo environment (11 controllers) I was able to free up several GB.
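As a first pass at spotting (not deleting) candidates, a find like the one below lists perf files untouched for roughly two years; the perf data path is an assumption and varies per install, so take a DFM backup and follow the KB before removing anything:
# list perf data files not modified in ~2 years (path is install-specific)
find /opt/NTAPdfm/perfdata -type f -mtime +730 -ls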
regards, Niels