How to rename a SnapMirror volume in a "Mirror, then Backup" dataset in Protection Manager

by reide (Former NetApp Employee) on 2013-05-03 05:21 AM

Renaming a SnapMirror volume that is the source for SnapVault relationships can be tricky.  If not done correctly, you can lose the base snapshot for the SnapVault relationships and be forced to re-baseline your SnapVaults.  To make matters worse, if the volume is managed by Protection Manager, you must also ensure that the DFM server learns of the change and behaves correctly. The following procedure can be used to safely rename a SnapMirror volume that is part of a "Mirror, then Backup" protection policy.  No re-baseline of the SnapMirror or SnapVault relationships is required.

Description

     How do I rename a SnapMirror volume that has SnapVault relationships using it as the source?

     How do I rename a SnapMirror volume in a “Mirror, then Backup” relationship in Protection Manager?

Procedure

For this example:

FILER_A is the source controller.

FILER_B is the SnapMirror destination controller.

FILER_C is the SnapVault destination controller.

FILER_A:primary_vol is the source volume.

FILER_B:old_mirror_vol is the SnapMirror destination volume.

FILER_C:backup_vol_01 is the SnapVault destination volume.

The current SnapMirror destination volume, old_mirror_vol, will be renamed to new_mirror_vol.

 

  1. In Protection Manager, suspend the dataset which uses the SnapMirror destination volume as a resource.  We don’t want SnapMirror or SnapVault updates attempting to run on the volumes while we’re performing these steps.
  2. Perform a 'snapmirror status' on FILER_B:old_mirror_vol.  Document the name of the source controller and path.
  3. Perform a manual snapmirror update on FILER_B:old_mirror_vol to make sure it is up to date.

FILER_B> snapmirror update -S FILER_A:primary_vol  FILER_B:old_mirror_vol

  4. On FILER_C, perform a 'snapvault update' on all the SnapVault relationships originating from FILER_B:old_mirror_vol.

Filer_C> snapvault update FILER_C:/vol/backup_vol_01/primary_vol_-

Filer_C> snapvault update FILER_C:/vol/backup_vol_01/primary_vol_qtree1

Filer_C> snapvault update FILER_C:/vol/backup_vol_01/primary_vol_qtree2

Filer_C> snapvault update FILER_C:/vol/backup_vol_01/primary_vol_qtree3

NOTE:  Naming properties are user-customizable in Protection Manager.  As a result, your SnapVault paths may look different from the ones shown above.
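If the mirror volume backs many qtrees, it is easy to mistype one of these long paths. As an illustration only (the filer, volume, and qtree names below are just this article's examples, and the prefix is whatever naming property Protection Manager generated for you), a short Python sketch can build the command list for pasting into the console:

```python
def update_cmds(dst_filer, dst_vol, prefix, qtrees):
    """Build one 'snapvault update' command per destination qtree.

    prefix is the Protection Manager naming prefix (here 'primary_vol');
    '-' denotes the non-qtree data in the volume.
    """
    return [f"snapvault update {dst_filer}:/vol/{dst_vol}/{prefix}_{q}"
            for q in qtrees]

cmds = update_cmds("FILER_C", "backup_vol_01", "primary_vol",
                   ["-", "qtree1", "qtree2", "qtree3"])
for c in cmds:
    print(c)
```

The sketch only builds the strings; running the printed commands is still done by hand on the filer (or via rsh/ssh).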

  5. On FILER_B, perform a 'snap list' on old_mirror_vol.  Verify that the volume has snapshots used by SnapMirror and SnapVault.  Note that the snapshot names contain the current name of the volume.

FILER_B> snap list old_mirror_vol

Volume old_mirror_vol

working...

  %/used       %/total  date          name

----------  ----------  ------------  --------

  0% ( 0%)    0% ( 0%)  May 01 15:55  FILER_B(4027617062)_old_mirror_vol.4 (snapvault)

  0% ( 0%)    0% ( 0%)  May 01 15:55  2013-05-01_1556-0500_daily

  0% ( 0%)    0% ( 0%)  May 01 15:48  FILER_B(4027617062)_old_mirror_vol.3

  0% ( 0%)    0% ( 0%)  May 01 15:22  2013-05-01_1523-0500_daily

  0% ( 0%)    0% ( 0%)  May 01 14:57  2013-05-01_1458-0500_daily

  0% ( 0%)    0% ( 0%)  May 01 14:03  2013-05-01_1404-0500_daily
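The point of this step, that replication snapshots embed the current volume name, can also be checked mechanically. A hedged Python sketch (the snapshot-name pattern is inferred from the listing above, not from any documented specification):

```python
import re

# Replication base snapshots embed the destination system and volume,
# e.g. FILER_B(4027617062)_old_mirror_vol.4 (pattern inferred from the
# 'snap list' output above).
SNAP_RE = re.compile(r"^(\w+)\((\d+)\)_(.+)\.(\d+)$")

def embedded_volumes(snap_names):
    """Return {snapshot name: volume name embedded in it} for the
    SnapMirror/SnapVault snapshots in a 'snap list'."""
    hits = {}
    for name in snap_names:
        m = SNAP_RE.match(name)
        if m:
            hits[name] = m.group(3)   # group 3 is the embedded volume name
    return hits

vols = embedded_volumes(["FILER_B(4027617062)_old_mirror_vol.4",
                         "2013-05-01_1556-0500_daily"])
```

After the rename in the next step, these names will still say old_mirror_vol, which is why the registry and the SnapVault relationships must be re-pointed.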

  6. On FILER_B, rename the SnapMirror destination volume.

Filer_B> vol rename old_mirror_vol new_mirror_vol

old_mirror_vol renamed to new_mirror_vol

  7. On FILER_B, verify the status of the SnapMirror relationship for the newly renamed volume.  The source should now be shown as a dash.

Filer_B> snapmirror status new_mirror_vol

Snapmirror is on.

Source             Destination               State          Lag       Status

-                  FILER_B:new_mirror_vol    Snapmirrored   00:13:20  Idle

  8. On FILER_B, update the SnapMirror relationship to point to the correct source volume.  This allows the status registry to be updated with the new volume name.

Filer_B> snapmirror update -S FILER_A:primary_vol FILER_B:new_mirror_vol

Transfer started.

Monitor progress with 'snapmirror status' or the snapmirror log.

  9. If the 'snapmirror update' does not work, use 'snapmirror break' and 'snapmirror resync' to resync the data between the source and destination.

Filer_B> snapmirror break new_mirror_vol
snapmirror break: Destination new_mirror_vol is now writable.

Filer_B> snapmirror resync -S FILER_A:primary_vol FILER_B:new_mirror_vol

  10. On FILER_B, monitor the progress with 'snapmirror status' or by viewing the SnapMirror log.  Wait for the SnapMirror relationship status to become Idle.
  11. On FILER_C, perform a 'snapvault status'.  For each relationship that was using the old volume name as its source (FILER_B:/vol/old_mirror_vol), re-point that relationship to the new volume name (FILER_B:/vol/new_mirror_vol).  The qtree names all remain the same.

NOTE:  If any of the 'snapvault start' commands return an error of “Transfer aborted: destination is temporarily quiesced” just wait for the snapvault relationships to finish quiescing and then re-run that particular command.

Filer_C> snapvault start -r -S FILER_B:/vol/new_mirror_vol/-      FILER_C:/vol/backup_vol_01/primary_vol_-
Filer_C> snapvault start -r -S FILER_B:/vol/new_mirror_vol/qtree1 FILER_C:/vol/backup_vol_01/primary_vol_qtree1
Filer_C> snapvault start -r -S FILER_B:/vol/new_mirror_vol/qtree2 FILER_C:/vol/backup_vol_01/primary_vol_qtree2
Filer_C> snapvault start -r -S FILER_B:/vol/new_mirror_vol/qtree3 FILER_C:/vol/backup_vol_01/primary_vol_qtree3

NOTE:  Naming properties are user-customizable in Protection Manager.  As a result, your SnapVault paths may look different from the ones shown above.
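With many relationships, the 'snapvault start -r' commands can be derived directly from the 'snapvault status' output instead of typed by hand. A Python sketch, under the assumption that the first two whitespace-separated columns of each relationship line are the source and destination paths:

```python
def resync_cmds(status_lines, old_vol, new_vol):
    """From 'snapvault status' relationship lines, build one
    'snapvault start -r -S' command per relationship whose source still
    points at the old volume; destinations are left unchanged."""
    cmds = []
    for line in status_lines:
        parts = line.split()
        # Assumes columns: Source  Destination  State  Lag  Status
        if len(parts) >= 2 and f"/vol/{old_vol}/" in parts[0]:
            new_src = parts[0].replace(f"/vol/{old_vol}/", f"/vol/{new_vol}/")
            cmds.append(f"snapvault start -r -S {new_src} {parts[1]}")
    return cmds

lines = ["FILER_B:/vol/old_mirror_vol/qtree1 "
         "FILER_C:/vol/backup_vol_01/primary_vol_qtree1 "
         "Snapvaulted 0:04:08 Idle"]
cmds = resync_cmds(lines, "old_mirror_vol", "new_mirror_vol")
```

This is only a string transformation; review the generated commands before running them on the filer.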

  12. On FILER_C, perform a 'snapvault status'.  Verify that all of the SnapVault relationships that were referencing the old source volume name now reference the new source volume name.

Filer_C> snapvault status

Snapvault is ON.

Source                              Destination                                      State          Lag        Status

FILER_B:/vol/new_mirror_vol/-       FILER_C:/vol/backup_vol_01/primary_vol_-         Snapvaulted    0:04:08   Idle

FILER_B:/vol/new_mirror_vol/qtree1  FILER_C:/vol/backup_vol_01/primary_vol_qtree1    Snapvaulted    0:04:08   Idle

FILER_B:/vol/new_mirror_vol/qtree2  FILER_C:/vol/backup_vol_01/primary_vol_qtree2    Snapvaulted    0:04:08   Idle

FILER_B:/vol/new_mirror_vol/qtree3  FILER_C:/vol/backup_vol_01/primary_vol_qtree3    Snapvaulted    0:04:08   Idle

  13. On FILER_B, manually update the SnapMirror relationship for the newly renamed volume.  Do NOT skip this step!  This step is what tells the primary volume that a different base snapshot is being used for the SnapVault relationships.

Filer_B> snapmirror update -S FILER_A:primary_vol FILER_B:new_mirror_vol

Transfer started.

Monitor progress with 'snapmirror status' or the snapmirror log.

  14. On FILER_A, perform a 'snap list' on the primary volume.  Verify that new snapshots exist with the new volume name (new_mirror_vol) and that they are the base snapshots for the SnapMirror and SnapVault relationships.  Note that the old snapshots used for SnapMirror and SnapVault still exist.

Filer_A> snap list primary_vol
working...

%/used       %/total  date          name
----------  ----------  ------------  --------
  0% ( 0%)    0% ( 0%)  May 01 16:28  FILER_B(4027617062)_new_mirror_vol.3 (snapmirror)
  0% ( 0%)    0% ( 0%)  May 01 16:15  FILER_B(4027617062)_new_mirror_vol.2 (snapvault)
  0% ( 0%)    0% ( 0%)  May 01 15:55  FILER_B(4027617062)_old_mirror_vol.4 (snapmirror)
  0% ( 0%)    0% ( 0%)  May 01 15:55  2013-05-01_1556-0500_daily
  0% ( 0%)    0% ( 0%)  May 01 15:48  FILER_B(4027617062)_old_mirror_vol.3 (snapvault)
  0% ( 0%)    0% ( 0%)  May 01 15:22  2013-05-01_1523-0500_daily
  1% ( 0%)    0% ( 0%)  May 01 14:57  2013-05-01_1458-0500_daily
  1% ( 0%)    0% ( 0%)  May 01 14:03  2013-05-01_1404-0500_daily

  15. On FILER_A, run a 'snapmirror destinations' command.  Find the old and new destination entries.

Filer_A> snapmirror destinations

Path        Destination
primary_vol Filer_B:old_mirror_vol
primary_vol Filer_B:new_mirror_vol
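As a safeguard against releasing the wrong entry in the next step, a small Python sketch that picks out every destination other than the current (renamed) volume from the 'snapmirror destinations' output:

```python
def release_cmds(dest_lines, current_vol):
    """From 'snapmirror destinations' output, build a 'snapmirror release'
    command for every destination entry other than the current one."""
    cmds = []
    for line in dest_lines:
        parts = line.split()
        # Skip the header row; data rows are "path  filer:volume".
        if len(parts) == 2 and ":" in parts[1]:
            path, dest = parts
            if not dest.endswith(f":{current_vol}"):
                cmds.append(f"snapmirror release {path} {dest}")
    return cmds

out = ["Path        Destination",
       "primary_vol Filer_B:old_mirror_vol",
       "primary_vol Filer_B:new_mirror_vol"]
cmds = release_cmds(out, "new_mirror_vol")
```

'snapmirror release' discards the soft locks permanently, so double-check the generated command names the OLD destination before running it.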

  16. On FILER_A, run a 'snapmirror release' command on the OLD destination entry.  This releases any soft locks on snapshots that we no longer need on the primary volume.

Filer_A> snapmirror release primary_vol Filer_B:old_mirror_vol

  17. On FILER_A, run a 'snap list' on the primary volume to verify that the old SnapMirror and SnapVault snapshots have been released.

Filer_A> snap list primary_vol
working...

%/used       %/total  date          name
----------  ----------  ------------  --------
  0% ( 0%)    0% ( 0%)  May 01 16:28  FILER_B(4027617062)_new_mirror_vol.3 (snapmirror)
  0% ( 0%)    0% ( 0%)  May 01 16:15  FILER_B(4027617062)_new_mirror_vol.2 (snapvault)
  0% ( 0%)    0% ( 0%)  May 01 15:55  2013-05-01_1556-0500_daily
  0% ( 0%)    0% ( 0%)  May 01 15:22  2013-05-01_1523-0500_daily
  0% ( 0%)    0% ( 0%)  May 01 14:57  2013-05-01_1458-0500_daily
  1% ( 0%)    0% ( 0%)  May 01 14:03  2013-05-01_1404-0500_daily

  18. On FILER_B, perform another manual update of the SnapMirror relationship.

Filer_B> snapmirror update -S FILER_A:primary_vol  FILER_B:new_mirror_vol

  19. On FILER_C, perform a 'snapvault update' on all the SnapVault relationships originating from FILER_B:new_mirror_vol.
  20. Update the Protection Manager (a.k.a. DFM or OnCommand Unified Manager) server to make it aware of all the changes that have occurred on each controller.  This step updates the dataset in Protection Manager to reflect the new volume name and all the updated SnapMirror and SnapVault relationships.

NOTE:  BE PATIENT!!!  This process can take a long time (10-15 minutes) depending on how large your environment is and how fast your DFM server is running.  Go to the restroom and grab another cup of coffee…

DFM_SVR  C:\>  dfm host list
ID   Type            Host Name      Host Address           ProductId    Deleted
--- --------------- -------------- ---------------------- ------------ --------
130 Controller      Filer_A        192.168.172.71         4027616951   No
131 Controller      Filer_B        192.168.172.72         4027617062   No
132 Controller      Filer_C        192.168.172.72         4027617062   No


In the output, find the ID number (first column) for Filer_A, Filer_B, and Filer_C.  Using those ID numbers, instruct the DFM server to force a polling update on all three controllers.

DFM_SVR  C:\>  dfm host discover 130
Refreshing data from host Filer_A (130) now.

DFM_SVR  C:\>  dfm host discover 131
Refreshing data from host Filer_B (131) now.

DFM_SVR  C:\>  dfm host discover 132
Refreshing data from host Filer_C (132) now.
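If you script this against many controllers, the IDs can be pulled out of the 'dfm host list' output rather than read by eye. A Python sketch (assuming the host name is the third whitespace-separated column, as in the listing above):

```python
def discover_cmds(host_list_output, hosts):
    """Parse 'dfm host list' output and build a 'dfm host discover'
    command for each named controller."""
    ids = {}
    for line in host_list_output.splitlines():
        parts = line.split()
        # Data rows start with a numeric ID; header and separator rows don't.
        if len(parts) >= 3 and parts[0].isdigit():
            ids[parts[2]] = parts[0]   # map Host Name -> ID
    return [f"dfm host discover {ids[h]}" for h in hosts if h in ids]

out = """ID   Type       Host Name  Host Address    ProductId  Deleted
--- ---------- ---------- --------------- ---------- --------
130 Controller Filer_A    192.168.172.71  4027616951 No
131 Controller Filer_B    192.168.172.72  4027617062 No"""
cmds = discover_cmds(out, ["Filer_A", "Filer_B"])
```

The sample output above is abbreviated from this article's example; your column spacing may differ, but whitespace splitting tolerates that.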

  21. On the DFM server, query the Protection Manager dataset that uses the renamed volume and relationships to see whether it has been updated with the correct information.  Keep querying the dataset until every volume AND relationship shows the correct information.  You can do this through the GUI or the CLI.

NOTE: Updating the dataset can take 10-15 minutes after you rediscover the hosts, so be patient!  If after 20 minutes you have not seen everything update correctly, repeat the 'dfm host discover' commands on all three controllers, wait another 10-15 minutes, and repeat this step.  Do not proceed until the dataset reflects the correct information.

DFM_SVR  C:\> dfpm dataset list

DFM_SVR  C:\>  dfpm dataset list -R <dataset_ID_number>

  22. Once you have verified that the dataset has been completely updated, resume (un-suspend) the dataset in Protection Manager.  This allows scheduled backups to occur using the newly renamed mirror volume.
  23. At this point you can either initiate an on-demand protection job or wait for the next scheduled backup of the dataset to occur.

Comments

Awesome document.

Thank you.

Hi Reid, excellent description.

I was wondering how this works in the following situation:

SnapDrive/SnapManager for SQL LUNs were located on an aggregate with 'outdated' 10K rpm FC disks. I installed and configured a new aggregate with 15K rpm disks.

SnapMirrored the SMSQL volumes (e.g. smsql_db) from the old aggregate to the new aggregate; disconnected the old LUNs in SnapDrive and reconnected the new LUNs with SnapDrive.

Renamed the old volumes (e.g. smsql_db_old) and took them offline, then renamed the new volumes to the original names (smsql_db_snapmirror to smsql_db).

SQL now functioning okay on new aggregate.

The Protection Manager datasets are now non-conformant, suggesting that redundant relationships be deleted. I cannot find the proper way to remove these redundant relationships.

Frequent Contributor

Were these volumes being protected by integration with DFM/UM for Snapvault relationships?  Since SMSQL natively supports updating the VSM I assume this is an integration configuration for vaulting. 

Have you already re-run the SMSQL configuration wizard, including the Protection Manager step?  SMSQL must update the application dataset's primary node for the new locations of the primary volumes.

Do you observe extra relationships within the dataset or only the original relationships?

Yes, these volumes were SnapVault relationships, for which I created a protection policy. During the first SMSQL configuration run, it integrated with this protection policy and created a dataset. After the migration of the LUNs as I described, I re-ran the SMSQL configuration wizard, and at first it looked like it accepted the config. However, when running a job in SMSQL, it fails to run the SnapVault replication. When I opened Protection Manager, it displayed the dataset as non-conformant, and the conformance option suggested removing the redundant relationship. Running 'dfpm relationship list -r' listed newly created qtrees (same name as the original relationship, with '_1' appended). Somehow DFM treats these new destinations as redundant. I have to admit that I messed up this relationship, so I removed the dataset and recreated the relationship; it works fine now.   In the coming days I will have to do the same migration exercise for a couple of other SnapManager (SQL and Exchange) hosts. It would be nice to have the right procedure in hand to repair the redundant-relationship issue and avoid re-baselining every migrated host. Thanks for your attention, by the way.

Frequent Contributor

I am not aware of any documentation on how to migrate integration/application dataset source volumes.  This is complicated because the integrating software (in this case SMSQL) must be aware of the changes so that it requests the correct updates against the primary locations within the primary node of the dataset (thus running the SMSQL Config Wizard), while at the same time DFM/UM must know that the new relationships belong to the application dataset and must no longer attempt to protect the old relationships.  More importantly, importing relationships into application datasets is not a supported action; however, to complete this migration you would have to manually create the new relationships, since DFM/UM only migrates secondary locations via the Secondary Space Migration Wizard.  Then you would need to relinquish the old relationships from the dataset and import the new relationships.

I think it might be possible to complete a primary migration of integrated source volumes; however, a last hurdle (assuming everything else had gone as hoped) is that the secondary volumes would most likely no longer be dynamically sized as needed by DFM/UM, since imported volumes do not get that benefit - only volumes created by DFM/UM have the dp_managed flag set.  In this case the secondaries had not changed though (I think), so perhaps that flag might still be set on the volume.

Thanks for your explanation. I found out, as you already mentioned, that I am not able to relinquish the primary data information. When trying to relinquish the related primary data ID, the following error is displayed (understandable, as it is a source and not a destination): "Error: Could not find relationship information: No managed relationship with destination '1615' found.".

I will keep experimenting with several methods; if I find one which works, I will post it up here. For now, the failed procedures leave me to having to delete the dataset and rebuild / rebaseline the complete relationship. Hope the already created schedules in the SMSQL application/SQL Agent will automatically takeover the new archiving/snapvault settings. Thanks for now.

Frequent Contributor

Sure, hope the explanation at least helps somewhat.

And as you found out, relinquish is done on the secondary location which then removes the relationship from the dataset.

PALEXOPOULOS

This is good stuff.

Taking SnapVault out of the equation, do we follow the exact same steps if we want to rename the primary volume and the mirror, and then make sure Protection Manager is still happy?

Regards,

Paul

Exactly what I was looking for, perfect!!

Word to the wise: this cannot be done using System Manager (3.1). In order to edit/rename a SnapMirror destination volume there, you have to break the SnapMirror relationship, which defeats this procedure.
