Active IQ Unified Manager Discussions

OnCommand Backup Manager and Protection Manager

sheelnidhig

Hello Guys,

I was wondering: if we move a SnapVault relationship managed by Backup Manager to Protection Manager,

and we don't remove the schedule for the secondary volume from Backup Manager, which one will trigger the backup job?

I am not sure whether there is something wrong in the setup or this is normal behavior.

--> We have imported a couple of backup relationships (SnapVault) into Protection Manager and added a protection policy to a dataset, which schedules the SnapVault update every day and maintains the snapshot retention on the secondary volume.

At the same time, we forgot to remove the DFM Backup Manager schedule from the same secondary volume.

But I have noticed that no snapshots are created by Backup Manager; we only have the snapshots created by Protection Manager.

Of course this is very good, but I was wondering whether both (BM and PM) should have triggered the backup job and created a snapshot on the secondary with their own naming format.

(The schedule times on BM and PM are different.)

6 REPLIES

kryan

Hello,

You did not mention the version of DFM/UM you are using; however, all recent versions of the software leave the existing SnapVault schedule on the controllers untouched when importing a relationship into a dataset.

Therefore, if there is/was an existing schedule on the controllers driving the updates (regardless of whether DFM/UM was managing them), those schedules should be manually removed to avoid firing off extra/unplanned backups.
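
For example, something along these lines on the secondary (a sketch only; "filer", "secvol" and "sv_hourly" are placeholders, so list what is actually configured on your controllers first):

filer> snapvault snap sched secvol

(lists the SnapVault snapshot schedules configured for the secondary volume)

filer> snapvault snap unsched secvol sv_hourly

(removes that scheduled snapshot creation, so only one tool is driving the backups)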

I am not sure why you did not see those BizConn/Backup Manager-based backups continuing to occur, but I would have expected them alongside the Protection Manager-based backups.

Kevin

sheelnidhig

It's OnCommand v5.1.

The jobs are not scheduled on the controller.

Is OnCommand intelligent enough that, if a relationship is imported into PM, it decides which one should manage the schedule for the relationship?

kryan

No, UM 5.1 does not remove the ONTAP schedules upon import, so the schedules should be intact unless someone removed them.

Kevin

adaikkap

Hi Sheel,

Irrespective of the OCUM version, PM does not delete schedules on ONTAP for either SnapMirror or SnapVault relationships that are imported into Protection Manager.

If you look at the last screen of the import wizard, it has a message about removing any existing schedules.

But what surprises me is the BM schedule not triggering a job. I do know that we have a check in DFM not to run an update job in BM if PM is managing the relationship.

But I thought that applied only to update jobs and not scheduled jobs. Anyway, let me do a test and confirm the behaviour.

All in all, it's always best to remove the schedule to avoid redundant updates.

Regards

adai

sheelnidhig

Thanks Adai,

I need to check the last window of the import wizard again!

And yes, it would be great if you could check at your end as well, because it surprises me that the BM schedule is not executed.

Sheel

adaikkap

Hi Sheel,

I did a complete test and found out why there was no schedule conflict. First let me explain my setup and test, and then throw more light on the same.

  1. Created a SnapVault relationship between two filers using Backup Manager.
  2. Allowed the BM schedules to run a few times.
  3. Imported the same relationship into a dataset.
  4. Now the dataset schedule runs on the relationship (a quick way to verify which side is driving updates is sketched right after this list).
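
A minimal way to verify which side is driving the updates after the import (the dataset name and paths are from my test setup, so substitute your own):

[root@vmlnx221-118 log]# dfpm dataset list -R importedSv

(confirms the relationship is now a member of the dataset; the full output appears further below in this post)

mpo-vsim16> snapvault status -l /vol/svSmDest/qtOne

(shows the state of the destination qtree and when the last transfer ran)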

I also learned in this process how Backup Manager works.


The schedules for BM-managed relationships are maintained by dfm, including the archive snapshot creation.

Only the retention settings are written to the SnapVault destination volume.

mpo-vsim16> snapvault snap sched svSmDest

create svSmDest  0@-@0 preserve=default

create svSmDest dfm_sv_hourly 8@-@0 preserve=default,warn=0

mpo-vsim16>

If you look at the schedules on the volume, there are no schedule times or days; only the retention settings are written.
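
To read those entries: the general form is snapvault snap sched <sec_vol> <snap_name> <count>[@<day_list>][@<hour_list>], so "dfm_sv_hourly 8@-@0" says "retain eight dfm_sv_hourly.N snapshots", and (as I understand it) the "-" in the day-list position is what prevents ONTAP from ever creating one on its own.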

Typically, on the secondary, a -x transfer schedule is created, which pulls the changes from the source, transfers them, and creates an archive snapshot on the secondary.

Below is the snippet from the snapvault command reference manual.

The third configuration step is to establish the SnapVault snapshot schedules on the primaries and the secondary with the snapvault snap sched command. A snapshot schedule in a volume creates and manages a series of snapshots with the same root name but a different extension such as sv.0, sv.1, sv.2, etc. (For snapshots on SnapLock secondary volumes, the extensions are representations of the date and time the snapshot was created rather than .0, .1, etc.). The primaries and secondary must have snapshot schedules with matching snapshot root names. On the secondary, the -x option to the snapvault snap sched command should be set to indicate that the secondary should transfer data from the primaries before creating the secondary snapshot. If -x is set, when the scheduled time arrives for the secondary to create its new sv.0 (or sv.yyyymmdd_hhmmss_zzz for SnapLock volumes) snapshot, the secondary updates each qtree in the volume from the sv.0 snapshot on the respective primary. Thus, the primaries and secondaries need snapshot schedules with the same base snapshot names. However, snapshot creation time and the number of snapshots preserved on the primary and secondary may be different.
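
As a concrete illustration of the normal, non-BM setup described there (hostnames, volume names and times below are made up for the example, not taken from my lab):

pri-filer> snapvault snap sched srcvol sv_hourly 6@mon-fri@7-19

sec-filer> snapvault snap sched -x dstvol sv_hourly 8@mon-fri@7-19

Here the -x entry on the secondary is what pulls the data from the primary at the scheduled hours and then creates the sv_hourly.N archive snapshot; that transfer-and-snapshot part is exactly what BM/PM takes over for its managed relationships.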


But in the case of BM-managed relationships, the schedules are managed by BM and the snapshot creation is done by BM after a successful transfer; only the snapshot retention is delegated to ONTAP, via "create" schedule entries without any schedule times in them.

Below are the details of the relationship after it was imported into the dataset.

[root@vmlnx221-118 log]# dfpm dataset list importedSv

Id         Name                        Protection Policy           Provisioning Policy Application Policy          Storage Service

---------- --------------------------- --------------------------- ------------------- --------------------------- ---------------

       999 importedSv                  Back up

[root@vmlnx221-118 log]# dfpm dataset list -R importedSv

Id         Name            Protection Policy     Relationship Id State        Status  Hours Source                   Destination

---------- -------------   ------------------    --------------- -------       ------- ------- ----- ------- ----------------------------

       999 importedSv       Back up               996                          snapvaulted  idle    1.2   mpo-vsim11:/svSrcNtn/qtOne   mpo-vsim16:/svSmDest/qtOne

[root@vmlnx221-118 log]#

mpo-vsim16> snap list  svSmDest

Volume svSmDest

working...

  %/used       %/total  date          name

----------  ----------  ------------  --------

22% (22%)    0% ( 0%)  Jul 05 11:07  2013-07-05_2200+0530_hourly_importedSv_mpo-vsim16_svSmDest_.-.qtOne

42% (31%)    0% ( 0%)  Jul 05 11:07  mpo-vsim16(4043456708)_svSmDest-base.1 (busy,snapvault)

58% (39%)    0% ( 0%)  Jul 05 10:07  2013-07-05_2100+0530_hourly_importedSv_mpo-vsim16_svSmDest_.-.qtOne

67% (39%)    0% ( 0%)  Jul 05 09:32  2013-07-05_2032+0530_weekly_importedSv_mpo-vsim16_svSmDest_.-.qtOne

73% (39%)    1% ( 0%)  Jul 05 09:29  2013-07-05_2029+0530_hourly_importedSv_mpo-vsim16_svSmDest_.-.qtOne

77% (40%)    1% ( 0%)  Jul 05 09:27  2013-07-05_2027+0530_daily_importedSv_mpo-vsim16_svSmDest_.-.qtOne

80% (39%)    1% ( 0%)  Jul 04 12:02  dfm_sv_hourly.0

82% (39%)    1% ( 0%)  Jul 04 11:02  dfm_sv_hourly.1

84% (39%)    1% ( 0%)  Jul 04 10:02  dfm_sv_hourly.2

85% (39%)    1% ( 0%)  Jul 04 09:02  dfm_sv_hourly.3

87% (39%)    1% ( 0%)  Jul 04 08:02  dfm_sv_hourly.4

88% (39%)    1% ( 0%)  Jul 04 07:02  dfm_sv_hourly.5

89% (39%)    2% ( 0%)  Jul 04 06:02  dfm_sv_hourly.6

89% (38%)    2% ( 0%)  Jul 04 05:02  dfm_sv_hourly.7

If you look at the log below, every BM schedule was skipped after the relationship was imported into the dataset. The only downside is that this is written to a log file and not to any console.

So any relationship managed by PM is skipped by BM.

[root@vmlnx log]# cat dfbm.log

Jul 04 15:00:14 [dfbm: WARN]: [8706:0x7fc93f6ea740]: ndmputil_svs_set_snap_sched: old reply (type 0x20500307) unfreed; freeing it.

Jul 04 15:00:14 [dfbm:DEBUG]: [8706:0x7fc93f6ea740]: ndmputil_free_reply: freeing 0x4ae77a0 expected 0x4ae77d0 (type=0x20500306)

Jul 05 00:00:40 [dfbm: INFO]: [26014:0x7ffdfd053740]: Skipping mpo-vsim11:/svSrcNtn/qtOne as it is managed by Data Manager.

Jul 05 01:00:44 [dfbm: INFO]: [2965:0x7fc6ed348740]: Skipping mpo-vsim11:/svSrcNtn/qtOne as it is managed by Data Manager.

Jul 05 02:00:36 [dfbm: INFO]: [11801:0x7f665dc31740]: Skipping mpo-vsim11:/svSrcNtn/qtOne as it is managed by Data Manager.

Jul 05 03:00:46 [dfbm: INFO]: [20699:0x7f6f03bda740]: Skipping mpo-vsim11:/svSrcNtn/qtOne as it is managed by Data Manager.

Jul 05 04:00:41 [dfbm: INFO]: [1581:0x7f6f97ef1740]: Skipping mpo-vsim11:/svSrcNtn/qtOne as it is managed by Data Manager.

Jul 05 05:00:39 [dfbm: INFO]: [10655:0x7f703587f740]: Skipping mpo-vsim11:/svSrcNtn/qtOne as it is managed by Data Manager.

Jul 05 06:00:40 [dfbm: INFO]: [19512:0x7fc1b89db740]: Skipping mpo-vsim11:/svSrcNtn/qtOne as it is managed by Data Manager.

Jul 05 07:00:39 [dfbm: INFO]: [28382:0x7f477042c740]: Skipping mpo-vsim11:/svSrcNtn/qtOne as it is managed by Data Manager.

Jul 05 08:00:38 [dfbm: INFO]: [5349:0x7f3a7f107740]: Skipping mpo-vsim11:/svSrcNtn/qtOne as it is managed by Data Manager.

Jul 05 09:00:41 [dfbm: INFO]: [14098:0x7fa13ccce740]: Skipping mpo-vsim11:/svSrcNtn/qtOne as it is managed by Data Manager.

Jul 05 10:00:40 [dfbm: INFO]: [22937:0x7f7cf1c20740]: Skipping mpo-vsim11:/svSrcNtn/qtOne as it is managed by Data Manager.

Jul 05 11:00:40 [dfbm: INFO]: [31880:0x7fedfe471740]: Skipping mpo-vsim11:/svSrcNtn/qtOne as it is managed by Data Manager.

Jul 05 12:00:44 [dfbm: INFO]: [8782:0x7f635e0d6740]: Skipping mpo-vsim11:/svSrcNtn/qtOne as it is managed by Data Manager.

Jul 05 13:00:44 [dfbm: INFO]: [17489:0x7f994ca5f740]: Skipping mpo-vsim11:/svSrcNtn/qtOne as it is managed by Data Manager.

Jul 05 14:00:39 [dfbm: INFO]: [26340:0x7f4061ee1740]: Skipping mpo-vsim11:/svSrcNtn/qtOne as it is managed by Data Manager.

Jul 05 15:00:39 [dfbm: INFO]: [3261:0x7f5b893f8740]: Skipping mpo-vsim11:/svSrcNtn/qtOne as it is managed by Data Manager.

Jul 05 16:00:41 [dfbm: INFO]: [12085:0x7f47fcca7740]: Skipping mpo-vsim11:/svSrcNtn/qtOne as it is managed by Data Manager.

Jul 05 17:00:36 [dfbm: INFO]: [20952:0x7f6875da0740]: Skipping mpo-vsim11:/svSrcNtn/qtOne as it is managed by Data Manager.

Jul 05 18:00:33 [dfbm: INFO]: [29775:0x7f0a0ebbc740]: Skipping mpo-vsim11:/svSrcNtn/qtOne as it is managed by Data Manager.

Jul 05 19:00:37 [dfbm: INFO]: [6680:0x7fc419a53740]: Skipping mpo-vsim11:/svSrcNtn/qtOne as it is managed by Data Manager.

Jul 05 20:00:45 [dfbm: INFO]: [15694:0x7fb94ae04740]: Skipping mpo-vsim11:/svSrcNtn/qtOne as it is managed by Data Manager.

Jul 05 21:00:37 [dfbm: INFO]: [24730:0x7f6f7e567740]: Skipping mpo-vsim11:/svSrcNtn/qtOne as it is managed by Data Manager.

Jul 05 22:00:37 [dfbm: INFO]: [1310:0x7f8c7566b740]: Skipping mpo-vsim11:/svSrcNtn/qtOne as it is managed by Data Manager.

[root@vmlnx log]#
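
If you want a quick way to spot the same thing in your own setup, something like this against dfbm.log in the DFM log directory should do (the exact path depends on where DFM is installed):

[root@vmlnx log]# grep "Skipping" dfbm.log

(prints every scheduled BM run that was skipped because the relationship is PM managed)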

All in all, it's not necessary to remove the schedules from BM, but doing so keeps the setup clean.

Also, it's not recommended to remove the "snapvault snap sched" create entries from the secondary controllers, as they take care of retiring the old snapshots.

But since the retention is based on a count and applies only to snapshots with the same root name, it will not have any impact unless you set the retention to 0.
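
To make that concrete with the entry from my secondary:

create svSmDest dfm_sv_hourly 8@-@0 preserve=default,warn=0

ONTAP only rotates the eight dfm_sv_hourly.N snapshots against this entry; the date-stamped snapshots that PM creates have a different root name, so the entry leaves them alone. Only changing that count (for example to 0) would affect the remaining dfm_sv_hourly.N copies.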

Regards

adai

