Active IQ Unified Manager Discussions

dfpm 5.0 Importing external relation into dataset of type "virtual"

mheimberg

Hi all

I have a customer with lots of SnapMirror relations containing volumes that hold ESX datastores mounted via NFS.

We have now created a new dataset in OnCommand, attached policies and storage services to it, and would like to import the existing relations into it.

But even though Protection Manager shows the dataset and also lists the VSM relation under "external relations", it is not possible to import the VSM: the dataset window is empty.

In the DFM CLI the dataset import command returns this error:

dfpm dataset import -D -N Mirror 1225 myFiler:/destination_volume

->"Cannot mix application policy  with non-virtualization object"w

How do I import the VSM then? A re-initialization is hardly an option (besides, it will create new volumes with _1, _2 etc. appended -> other topic: how do I stop DFM from counting up volume names?)
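For comparison: with an ordinary (non-virtual) dataset this kind of import works for us from the CLI, roughly like below (the dataset ID and volume path are just examples from our environment, flags as in the command above):

# find the numeric ID of the dataset
dfpm dataset list

# import the existing VSM destination into the dataset's Mirror connection
dfpm dataset import -D -N Mirror <dataset-id> myFiler:/destination_volume

It is only against the new virtual dataset that the same command comes back with the error above.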

regards

Markus

10 REPLIES

adaikkap

Hi Markus,

Importing a relationship into a virtual dataset is not supported. There is a somewhat convoluted procedure to achieve it using a combination of API and CLI, but the question remains as to its supportability, since QA hasn't officially qualified it.

Coming to your second question: avoiding the appending of _1, _2 to a volume name is not possible as long as the volumes are in the same dataset.

Regards

adai

mheimberg

Dear Adai

thank you for the clear statements.

Could you provide me with the mentioned API/CLI procedure? Or at least an outline of the steps, an idea of the "how-to"?

I am very familiar with ONTAP and features like SnapMirror/SnapVault, and have in the meantime gained some experience with DFM and OnCommand as well...

regards

Markus

adaikkap

Hi Markus,

Giving the steps would be like opening Pandora's box and would cause a lot of issues for support. Unless it comes in as a product request, I won't be able to post it - and certainly not in a public forum.

Regards

adai

mheimberg

Dear Adai

To get around the numbering issue (and for some other reasons as well) we completely uninstalled DFM 5.0D2, installed 5.0.1, and really started from scratch.

We succeeded in creating everything (policies, storage services, virtual datasets), and DFM created all the volumes and relations as we intended.

Then we realized that something was still creating snapshots on the datasets' volumes every 2 hours - exactly the schedule we used during the test phase!

We found out that the host service on the vCenter server - which we had not re-installed - is responsible for those snapshots. Somewhere in one of its XML files the host service seems to store the old configuration!

Is there a way to force DFM to push the new configuration (mainly schedules, because we used the same dataset-names, which contain the same volumes as before) to the host service?

Or can we manually edit some of the XML files (or whatever file holds the configuration)?

thanks in advance

Markus

kjag

Hi Markus,

You can check the "PolicyEnforcementData.xml" file to see whether it's still showing any older datasets, schedules, etc.

If it contains older schedules, datasets, etc., please try the steps below:

1. Rename the PolicyEnforcementData.xml file

2. Unregister the Host Service from OnCommand using "dfm hs unregister -f <HS IP>"

3. Re-register it.

4. A new "PolicyEnforcementData.xml" file should get created with the new schedules.
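In CLI terms the sequence above is roughly the following (<HS IP> is a placeholder; the rename is done on the machine running the host service, i.e. the vCenter server in your case - you will need to locate the file there yourself):

# 1. on the host service machine, rename the old enforcement data file
#    (keep it as a backup rather than deleting it)
ren PolicyEnforcementData.xml PolicyEnforcementData.xml.old

# 2. on the DFM server, unregister and re-register the host service
dfm hs unregister -f <HS IP>
dfm hs register <HS IP>

# 3. run an on-demand protection job; a fresh PolicyEnforcementData.xml
#    containing the new schedules should then be created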

-KJag

mheimberg

Hi KJag

Thanks for the input.

We had already found the XML file before, found older datasets in it, and renamed it for testing, but did not do an unregister in DFM - hence it was no longer possible to run any job...

So we un-installed the host package completely, did

dfm hs unregister -f <hs>

dfm hs register <host>

and while running an on-demand protection the "PolicyEnforcementData.xml" was created.

Since then everything works fine.

Surprisingly the ID of the HS stayed the same ...

regards

Markus

heinowalther

Hi Adai

I think the new DFM with integrated "virtual" datasets is great; the problem is that the "old" way of doing SV with SMVI involved scripting and the traditional dataset types.

I have several customers wanting to "upgrade" their SMVI backup to the new DFM style, yet we cannot do it without a baseline SV.

We have tried several things, like deleting the existing dataset so that the relation showed up under external relations. We then hoped we would be able to import it into the new virtual dataset, but this wasn't possible, because importing can only be done using the Management Console, which does not allow editing of virtual datasets...

So are we out of luck? Or is there a "fix" on the way?

The baseline we could live with, but some of our customers do not have the extra space required for this process. Some have quite a long retention on their SV, and doing a new baseline would involve destroying the old volume, losing the history of backups...

If you are not allowed to share the workaround here, do you think opening a case with support would provide us with the workaround?

Kind regards,

Heino Walther

adaikkap

Hi Heino,

Import into a virtual dataset is not supported, so unfortunately you are out of luck. Importing is not a supported or qualified solution for virtual datasets.

Regards

adai

ONDREJ_STAVINOHA

I am sorry to hear this. I am currently running a lab setup to test whether we can replace the SMVI+SV script using OUM and the Host Package. However, re-baselining and re-doing all our SV relationships because we cannot simply import them is a big deal.

Another thing: I haven't discovered what sort of VMware snapshots are being created by the Host Package. Are these quiesced or non-quiesced? And is there actually some way of email alerting for datasets, like for SMVI jobs?

For SMVI jobs I could easily get an email if a job had failed, and see which particular VMs failed to snapshot if quiesced snaps were selected. For DFM using datasets I guess the only way would be to set up a custom DFM alarm for when a dataset backup fails or is nonconformant. Or am I wrong?
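If a custom alarm is indeed the way, I would try something along these lines (the event name below is a placeholder - I have not yet verified which dataset/protection failure events exist in our version - and the flags are from the dfm CLI as far as I remember, so please check the man page):

# list the available event types to find the exact dataset/job failure events
dfm eventtype list

# mail the storage team whenever such an event fires
dfm alarm create -E <failure-event-name> -e storage-team@example.com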

pavila

Upgrade to 5.1; that fixed a lot of the same issues I was having. However, it took about 24 hours for all the 'no status' and 'failed' SnapVaults to finally complete.
