ONTAP Discussions

Move Snapvault Source without Rebaselining Destination in Protection Manager

berks

Hi All,

I have a customer (IHAC) who needs to move his source SnapVault volumes from one filer to another without needing to rebaseline the destination SnapVault volumes.

I know how to do this manually, but my customer is using Protection Manager to manage all the Snapvault relationships.

So my question is: if I SnapMirror the source SnapVault volume to another filer and then restart the SnapVault relationship with the new volume, will Protection Manager recognize this? If so, how?

If not, what else do I have to do in order to move the SnapVault source and continue to have it managed by Protection Manager?
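
For context, the manual filer-side procedure I have in mind is roughly the following; the filer, aggregate and volume names are just examples, and the exact options may differ for your setup:

new_filer> vol create newvol aggr1 500g
new_filer> vol restrict newvol
new_filer> snapmirror initialize -S old_filer:srcvol new_filer:newvol
(then, at cutover)
new_filer> snapmirror update newvol
new_filer> snapmirror break newvol
(and on the SnapVault secondary, re-point each qtree relationship to the new primary with snapvault modify / snapvault start -r / snapvault update)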

Thanks

Jeff

8 REPLIES

adaikkap

You will have to move the relationship out of the dataset, then do the steps you mentioned on the filer using the CLI.

Once all the relationships are done (including the restart of the SV relationship), import them from the External Relationships tab.

To date there is no seamless way in the product to move primary volumes that are managed in PM without modifying the dataset.

Regards

adai

abuchmann

What's the best way to move the relationship out of the dataset?

Remove the source? The destination? Both? I just tried to remove both, and the destination qtree got deleted (= rebaseline).

Kind regards,

Adrian

adaikkap

Hi Adrian,

     Here is what you need to do in Protection Manager. I have answered for the creator of this thread too.

For your case, start directly at Step 2.

Step 1: Prevent PM's reaper from cleaning up any relationship.

Set the following option before you start, and reset it back to orphans once done (Step 5).

dfm options set dpReaperCleanupMode=Never
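
If you want to confirm the current value first (so you know what to set it back to), you can list it; this assumes the standard dfm options CLI in your DFM version:

dfm options list dpReaperCleanupMode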

  

Step 2: Relinquish the primary member and the secondary member.

Use the dfpm dataset relinquish command.

This marks the relationship as external, and PM will no longer manage (schedule jobs for) the relationship.

Now remove the primary and secondary from the dataset.

Either use the NMC Edit Dataset wizard or the dfpm dataset remove CLI.

First remove the primary member, then remove the corresponding secondary member.
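
From the CLI, the sequence looks roughly like this; the member ids and dataset name are placeholders, and the exact argument forms can vary slightly between DFM versions:

dfpm dataset list -m <dataset-name-or-id>       (note the ids of the primary and secondary members)
dfpm dataset relinquish <secondary-member-id>
dfpm dataset remove <dataset-name-or-id> <primary-member-id>
dfpm dataset remove <dataset-name-or-id> <secondary-member-id>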

  

Step 3: Confirm the relationship is discovered as external.

You should now see this relationship in the External Relationships tab. If you don't see it, close NMC and log in again.

Step 4: Import into a new dataset.

Create a new dataset with the required policy and schedule, or choose the existing dataset you want to import this relationship into.

Use the Import wizard and import them.

   

Step 5: Reset the reaper option.

dfm options set dpReaperCleanupMode=orphans

 

Points to take care of:

 

1. If an entire OSSV host was added as a primary member and is now being moved to a new dataset, Step 2 (relinquishing the primary member) needs to be done for each dir/mount path of the OSSV host. The same applies to a volume that was added as a primary member and is now being moved to a new dataset.

2. After importing, the dynamic referencing of the OSSV host is lost, because each relationship is imported individually. The same applies to volumes: you will start seeing individual qtrees as primary members instead of the volume.

3. So when a new dir/mount path is added to the OSSV host, the admin has to add it to the dataset manually (see the one-liner after this list). The same applies to new qtrees.

4. To restore from an old backup version, the user must go back to the old dataset, as old backup versions are not moved over.
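
For point 3, the manual add can also be done from the CLI, along these lines; the dataset name and qtree path are placeholders, and the member format may differ slightly in your DFM version:

dfpm dataset add <new-dataset-name-or-id> new_filer:/vol/newvol/newqtree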

   

                       

Note:

  • When you relinquish the relationship, it may not show up in the "External Relationship Lag" box of the Dashboard view. However, if you go to Data -> External Relationships, it will be listed there.
  • When you import the relationship into a new dataset, it will show an error status of "Baseline error". Simply run an on-demand backup job and it will clear this error (a CLI sketch follows these notes). Note: the backup job doesn't perform a re-baseline; it simply does a SnapVault update.
  • Don't delete the old dataset even if it's empty. As adai stated, it has the backup history of the relationship from before you moved it, so if you want to perform a restore from before the move, you need to restore from the old dataset. Once all the backups have expired from the old dataset, you can destroy it.
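
If you prefer the CLI for that on-demand backup, something like the following should do it, assuming the dfpm backup command set is available in your DFM version; the dataset name is a placeholder:

dfpm backup start <new-dataset-name-or-id>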

Regards

adai

abuchmann

Thank you for your response.

I have tried that, but I ran into some problems:

1. Created the QSM relationship.

2. Copied the SnapVault snapshot.

3. Changed dpReaperCleanupMode to Never.

4. Relinquished the destination (checked the dataset members with "dfpm dataset list -m <id>" and did a "dfpm dataset relinquish" for the destination). When I tried to do the same with the source, it returned: "Error: Could not find relationship information: No managed relationship with destination '131401' found."

5. Removed the destination and the source via NMC.

6. Ran snapmirror update and snapmirror break on the new volume.

7. Ran "snapvault modify", "snapvault start -r" and "snapvault update" on the SnapVault destination.

8. Ran dfm host discover for all affected nodes.
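
To be explicit about step 7, the commands on the SnapVault secondary were along these lines; the filer names and qtree paths here are examples, not my real ones:

sec_filer> snapvault modify -S new_filer:/vol/newvol/qtree1 /vol/secvol/qtree1
sec_filer> snapvault start -r -S new_filer:/vol/newvol/qtree1 /vol/secvol/qtree1
sec_filer> snapvault update /vol/secvol/qtree1
sec_filer> snapvault status     (to verify the relationship on the secondary now points at the new primary)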

But now the modified SnapVault relationship does not appear in NMC. The only visible relationship is the old one.

What am I doing wrong?

adaikkap

Hi Adrian,

     Please check this KB: 1013796: How to rename a primary/secondary volume in a SnapVault relationship managed by a PM dataset.

You may also be hitting a known bug. To confirm, try to import the old relationships back into the dataset and see if it throws an error like the following:

"There is no volume, qtree, LUN path, or OSSV directory named '44288'."

If so, please create a case and attach it to bug 442664.

Regards

adai

ADMINSTEWARJ

Hi Adai,

I'm experiencing "There is no volume, qtree, LUN path, or OSSV directory named 'xxxxx'." as well when trying to import relationships into a particular dataset. This was a result of a volume being migrated and the original deleted. I had a look at Bug 44288 but there was no information associated with the bug on the NOW site. Do you have any further information on this particular issue?

Thanks,

Josche

ADMINSTEWARJ

FYI - It appears DFM 5.1 resolves this particular bug

adaikkap

Hi

    That's right, bug 577580 got fixed in 5.1 and has now been backported to 5.0.2P1 as well.

Regards

adai
