Active IQ Unified Manager Discussions

Performance Advisor DB location

leroy

IHAC that would like to split the locations of the DFM core database and the performance database.  From a high level, both are currently located on a NAS share and the database backup is taking over 4 hours to complete.  In order to reduce the backup size, they would like to host the core database (<5GB) locally and keep the performance database (>120GB) on the NAS share.

Is it possible to split the locations and still have a consistent backup that can restore DFM?

6 REPLIES

adaikkap

Yes, you can have different locations for the performance data and the database. You can use the following CLI to move the db and perf data.

[adaikkap@ /]$ dfm datastore setup help

NAME

    setup -- configure DataFabric Manager server data on a different location

SYNOPSIS

    dfm datastore setup [ -n ] [ -f ] { dfm-data-dir | [ -d dbDir ]

[ -l dbLogDir ] [ -p perfArchiveDir ]

[ -s scriptDir ] [ -r reportsArchiveDir ]

[ -P pluginsDir ] }

DESCRIPTION

    -n specifies that the data present at target location will be used

        without copying original data.

    -f specifies that the data should be deleted from target location if it is not empty.

    dfm-data-dir specifies DataFabric Manager server target root directory for data.

    -d specifies the new location for database data file.

    -l specifies the new location for database transaction log file.

    -p specifies the new location for perf data files.

    -s specifies the new location for script output data.

    -r specifies the new location for report archival data.

    -P specifies the new location for Storage System configuration plugins.

    Example: dfm datastore setup /opt/dfmdata/

             dfm datastore setup -d /opt/dfm/data/ -p /opt/dfm/perf/ -s /opt/dfm/script/.

[adaikkap@ /]$
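
For the split you are describing (core db local, perf data on a NAS mount), a minimal sketch using only the -d and -p switches from the help above would look like this; the paths are placeholders, and you should confirm the exact procedure (including whether the DFM services must be stopped first) against your own version:

    # stop the DFM services before relocating data (assumption: required on your version)
    dfm service stop
    # keep the db local, point the perf archive at the NAS mount (placeholder paths)
    dfm datastore setup -d /opt/dfmdata/db/ -p /mnt/nas_perf/
    dfm service start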

A typical db backup contains 3 things:

  1. database dir
  2. perf data dir
  3. script plugin dir

For a snapshot-based backup, all 3 of the above should be on the same LUN; otherwise only a .ndb (archive) backup can be taken and not a .sndb (snapshot-based) backup.
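
For example (a hedged sketch; verify with the dfm backup help on your version), you can tell which kind of backup you ended up with from the file extension:

    dfm backup create
    # an archive-based backup produces a .ndb file; a snapshot-based backup
    # (all three dirs on the same LUN) produces a .sndb file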

Regards

adai

leroy

If the databases are in unique locations, can the backup be restored without post-processing?

As an example, is this configuration supported:

1) DFM Core Database stored locally on a physical server

2) Performance Advisor database stored on NAS

The backup will be performed by disabling the backup of performance data through DFM (to reduce the backup time).  The performance data will be backed up via snapshot before the scheduled backup of the core database.

Our current implementation of DFM has a total database size of ~120GB, which takes over 4 hours to back up.  Hosting on a NetApp LUN is not possible, therefore we are looking for any other workarounds.

adaikkap

Hi Roy,

     Short answer: there is no way to reduce the DFM backup time unless you go to a snapshot-based backup using a NetApp LUN and SnapDrive for Unix/Windows, depending upon the DFM server OS flavor.
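
If you did go the snapshot-based route, the idea is to place all of the DFM data on a single SnapDrive-managed LUN mount point, for example with the datastore setup command shown earlier (the mount point below is just a placeholder):

    dfm datastore setup /mnt/dfm_lun/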

If the databases are in unique locations, can the backup be restored without post-processing?

Yes, but it would overwrite the perf data and db with what's available in the backup.
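
As a hedged illustration (check the exact syntax with the dfm backup help on your version), a restore simply replays whatever the backup file contains over the current data, for both the db and the perf archive; the file name below is a placeholder:

    dfm backup restore dfmbackup.ndb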

Regards

adai

adaikkap

Short answer: there is no alternative. Either move to a snapshot-based backup using a NetApp LUN, or spin off a new DFM server and disable the perf data collection on that server. Even then, there is no guarantee this will bring the backup time down to less than 5 minutes like a snapshot-based backup, because verification and validation of the db during a backup takes a long time as well.

adaikkap

BTW, what version of DFM are you using? Is it on a VM or a physical server?

Regards

adai

niels

Hi leroy,

5GB and 120GB respectively sounds pretty big. How many NetApp controllers are you monitoring, and are you leveraging any additional features like Provisioning or Protection Manager? How long has your DFM server been in production?

Depending on the above, it may very well be that the DB and the perf data directory contain stale data that bloats your backup.

As a quick check, run the following commands and look for "Yes" in the Deleted column:

dfm host list -a

dfm volume list -a

dfm qtree list -a

dfm lun list -a

If you see many deleted items, let's say 25-30% of the overall object count within the respective category, your DB and perf data should definitely be cleaned up.
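
A quick, hedged shortcut for a rough count per category (assumes a POSIX shell and that the -a listings print "Yes" in the Deleted column, as described above; lines that happen to contain "yes" elsewhere would inflate the count slightly):

    for obj in host volume qtree lun; do
        printf "%s deleted: " "$obj"
        dfm $obj list -a | grep -ci "yes"
    done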

regards, Niels
