Active IQ Unified Manager Discussions

OC 5.0 ProtectionManager Backup and DR Backup Policies

mark_schuren
4,563 Views

Hi ProtectionManager community.

I've got a question/problem that a customer made me aware of with OnCommand 5.0 ProtectionManager.

I have a dataset that contains a primary volume used for file services (volume was manually assigned to primary node, and protection and secondary provisioning policy applied).

I tried with Protection Policy of type "Backup" (SnapVault) and also "DR Backup" (QSM).

I define a daily schedule at 10:00 pm for the primary node (primary snapshot), and also a schedule at 10:05 pm for the primary-to-secondary connection (using snapvault or qsm respectively).

What I'd expect PM to do is create the primary snapshot (backup version at 10:00), and then transfer this latest snapshot (the daily backup created on the primary) to the secondary node at 10:05, so that the secondary contains the 10:00 pm version afterwards. Unfortunately it does not. It creates a new SnapVault or QSM baseline snapshot at 10:05 pm (named dfpm_base..., which does not even follow the naming conventions) and transfers this second snapshot over to the secondary.

Questions:

1. Why?

2. How to change that?

This behaviour happens as long as the dataset is a "normal" dataset created manually using NMC 3.1. It does not happen for "Application" datasets, i.e. datasets which were created by SnapManager or SnapCreator.

My customer would like the "normal" dataset to behave just like an "application" dataset (the transfer schedule "fetches" the last snapshot created by the primary node's schedule).

Any ideas?

Mark

10 REPLIES

mark_schuren
4,498 Views

Backup history example from a SnapCreator dataset: (screenshot)

Backup history example from a "normal" dataset: (screenshot)

I notice the different approaches.

But can I change this?

adaikkap
4,498 Views

Hi Mark,

You have caught the difference exactly. In Protection Manager there are two kinds of dataset:

Application datasets

Normal datasets

An application dataset does the equivalent of snapvault update -s <snapshotname> src_qtree dst_qtree, because of which you get the same snapshot on the primary transferred to the secondary.

A normal dataset, on the other hand, does the equivalent of snapvault update src_qtree dst_qtree, because of which you get different snapshots on the primary and the secondary.
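To make the distinction concrete, here is roughly what the two cases look like at the Data ONTAP 7-Mode CLI level. This is a hedged sketch: snapvault update is run on the secondary, and the filer, volume, qtree, and snapshot names below are placeholders, not anything taken from this thread.

```
# Normal dataset: equivalent of an unnamed update. SnapVault takes a fresh
# base snapshot on the primary (the dfpm_base... snapshot observed above)
# and transfers it, so primary and secondary end up with different snapshots.
snapvault update secondary:/vol/backup_vol/fileservices_qtree

# Application dataset: equivalent of a named-snapshot update. The snapshot
# already taken on the primary (e.g. the 10:00 pm daily) is transferred,
# so the secondary ends up with the same backup version as the primary.
snapvault update -s daily_2200 secondary:/vol/backup_vol/fileservices_qtree
```

The -s option is what makes the difference: it names an existing primary snapshot to transfer instead of letting SnapVault create a new one.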

In your example, SnapCreator is an application dataset, as it was created by SC using the API. An application dataset cannot be created using any of PM's interfaces except the API.

Your TestLab is a normal dataset, which you would have created using the dataset create/add wizard.

All app integrations like SME, SMO, SMSQL, etc. create an application dataset with PM in order to transfer the app-consistent snapshots taken by the respective application.

Regards

adai

mark_schuren
4,498 Views

Hi Adai,

So the answer is that it's simply impossible to "catch the latest primary snap" with a normal (non-application) dataset?

Why? This should be an option in a future version, from my point of view.

Just my 2 cents,

Regards,

Mark

adaikkap
4,497 Views

Hi Mark,

In the current product (up to OnCommand 5.0) it's not possible to transfer a named snapshot using a normal (non-application) dataset. With respect to your suggestion for a future version, I shall pass the information on to the product managers, though they may well have seen this by now.

But a question for you: why do you want a named-snapshot transfer for a non-application backup? I understand the need if it's an application. Can you elaborate on your requirement and the need to do it for the "TestLab" dataset?

Regards

adai

mark_schuren
4,497 Views

Hi Adai,

the need behind it is customer convenience and a certain amount of "feature-consistency".

My specific customer wants PM to behave that way.

E.g. create primary daily snapshots at a distinct point in time (say 10:00 pm). And "catch" this exact "latest daily snap" at a later time (say 01:00 am) with QSM or SV. Just like with application datasets.

At the moment the only solution seems to be SnapCreator (so that PM knows it is an application dataset), but this of course adds admin overhead (first build the SC config, then create the dataset using SC, then manage the dataset using PM, then create the primary schedule in SC...). They'd love to simply use PM for their "non-application datasets", and are asking for the functionality that's already there with "app datasets".

It would be very convenient if we had a "switch" that lets the admin decide how a dataset is handled, e.g. on the general properties tab, or at least a CLI switch ("dfpm dataset set ..."). I think the development/QA effort to implement that can't be very big, right? The functionality is already there; it's just not user-controllable.

Regards,

Mark

adaikkap
4,497 Views

Do they face any data loss or corruption due to this behavior of non-application datasets? What is the necessity, if a non-application dataset serves all the purposes?

Regards

adai

mark_schuren
4,497 Views

They tell me they want the "10:00 pm" backup of ALL their datasets on ALL their storage systems (primary AND secondary), not a 10:00 pm snap on the primary and a "whenever the QSM job starts" version on the secondary. When working with a moderate "max active transfers" setting, you never know which timestamp your backups will have. The necessity comes from a real-life perspective: in a big environment, this is needed. I agree it's not really needed in my lab environment...

My customer needs (requests) this for most of their file-share datasets, which are also used from within their CRM app/database (there are cross-links, etc.). For the best possible consistency they want to back up everything as closely together as possible (take the initial snapshots at 10:00 pm, regardless of dataset type), and transfer all of these snapshots afterwards during a nightly transfer window.

It's simply a customer RFE that would make PM a better fit for them. And especially with regard to QSM/SV jobs, which might wait quite a long time (because of the max active transfers limit in larger environments), I personally think this is necessity enough. Right?

Regards,

Mark

adaikkap
4,497 Views

Hi Mark,

I have passed this on to my product manager. Thanks for your description and explanations.

Regards

adai

mark_schuren
4,497 Views

Thanks for your effort.

Do you expect there to be a manual workaround soon? E.g. in a P- or D-release or so?

As I proposed, we could definitely live with setting the flag manually, for example "dfpm dataset set IsApplication=yes", or something like that...?

adaikkap
3,742 Views

As of now, no. The only other workaround I can think of is to create an application dataset and register the backups; but instead of registering the backups manually, you can say that DFM is responsible for the primary backup. This needs to be done using the SDK.
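For completeness, SDK calls against DFM/OnCommand go over its XML (ZAPI-style) HTTP interface. The sketch below is purely illustrative and unverified: the endpoint, port, credentials, and especially the API and element names (dataset-create, dataset-name) are assumptions of mine, not confirmed names. Check the NetApp Manageability SDK documentation for the real schema in your release.

```
# Hypothetical sketch only -- the API and element names are assumptions,
# not confirmed DFM ZAPI names. Consult the NM SDK docs before use.
curl -s -u admin:password \
     -H 'Content-Type: text/xml' \
     http://dfm-server:8088/apis/XMLrequest \
     -d '<netapp version="1.0">
           <dataset-create>
             <dataset-name>fileservices_app</dataset-name>
           </dataset-create>
         </netapp>'
```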

Regards

adai
