Active IQ Unified Manager Discussions
Hi community,
In an environment with DFM 4 and SMO for Linux, I am introducing a new controller and migrating all the volumes to the new controller.
How can I keep the existing SnapMirror relationships and ensure that the backups remain operational?
Christophe
Solved! See The Solution
Hi Christophe,
Is the volume being moved primary or secondary? In other words, is it the source or the destination volume?
Regards
adai
Hi Adai,
I must migrate the primary volumes (sources) to a new controller.
Christophe
Thanks Adai, any suggestions?
Hi Christophe - You just need to update your snapmirror.conf with the name of the new controller. This should allow the SnapMirror relationship to continue working.
Make sure that the snapmirror options on your new filer (options snapmirror) are correct and that the destination controller can access it.
After the migration, perform a snap list on the destination volume and then on the source volume. As long as the snapshots have not been deleted, SnapMirror will be able to find a common snapshot to use as a reference point and will update and resync without problems.
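As a rough sketch (hypothetical filer and volume names, assuming a volume SnapMirror relationship), the change is just swapping the source hostname in /etc/snapmirror.conf on the destination, confirming access on the new source, and comparing snapshots on both sides:
# /etc/snapmirror.conf on dest-filer, old entry:
# old-filer:srcvol dest-filer:dstvol - 0 * * *
# becomes:
new-filer:srcvol dest-filer:dstvol - 0 * * *
new-filer> options snapmirror.access host=dest-filer
dest-filer> snap list dstvol
new-filer> snap list srcvol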
Hi Christophe,
Here is the procedure that we use.
High Level Steps/Flow
Step | Comments |
1 | dfm options set dpReaperCleanupMode=Never. |
2 | Validate that there are no active jobs and that the relationships are idle, then suspend the dataset. |
3 | Relinquish all relationships using DFPM. |
4 | Remove all physical resources from the dataset (if the dataset contains multiple relationships, only remove the ones that need migration). |
5 | Relinquish the primary qtrees from the dataset using DFBM. |
6 | Migrate the data using technique 1 from NetApp KB 1011499 if you want to migrate the entire volume. Use technique 2 from the same KB if you only want to migrate individual qtrees. The example in this doc uses technique 1 for an entire primary volume migration, but irrespective of technique 1 or 2, the Protection Manager steps remain the same. |
7 | Re-discover the storage controllers in DFM. |
8 | Add the physical resources back to DFM using DFBM. |
9 | Import the new relationships into the dataset. |
10 | Resume the dataset and test by doing an on-demand backup job. |
11 | Revert the dpReaperCleanupMode option back to Orphans. |
Detailed Steps
Current Environment:
Source | mpo-vsim14 | SnapVault Source |
Destination | mpo-vsim15 | SnapVault Destination |
New Source | mpo-vsim16 | New SnapVault Source |
This is how the relationships look in Protection Manager before we start.
C:\>dfpm dataset list -m UserHome
Id Node Name Dataset Id Dataset Name Member Type Name
---------- -------------------- ---------- -------------------- -------------------------------------------------- ---------------------------
1347 Primary data 1344 UserHome volume mpo-vsim14:/UserHome
1367 Backup 1344 UserHome volume mpo-vsim15:/UserHome
C:\> dfpm dataset list -R UserHome
Id Name Protection Policy Relationship Id State Status Hours Source Destination
---- --------- ------------------ -------------- ------------ ------- ----- ---------------------------- ----------------------------
1344 UserHome Back up 1371 snapvaulted idle 0.0 mpo-vsim14:/UserHome/- mpo-vsim15:/UserHome/UserHome_mpo-vsim14_UserHome
1344 UserHome Back up 1373 snapvaulted idle 0.0 mpo-vsim14:/UserHome/adai mpo-vsim15:/UserHome/adai
1344 UserHome Back up 1375 snapvaulted idle 0.0 mpo-vsim14:/UserHome/vsv mpo-vsim15:/UserHome/vsv
1344 UserHome Back up 1377 snapvaulted idle 0.0 mpo-vsim14:/UserHome/amir mpo-vsim15:/UserHome/amir
Step 1:
This is to make sure that during the migration the conformance engine/reaper doesn't reap any relationship.
C:\>dfm options set dpReaperCleanupMode=Never
Changed cleanup mode for relationships managed by protection capability of OnCommand to Never.
C:\>
Step 2:
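The high-level flow also calls for first validating that there are no active jobs and that the relationships are idle. A quick check before suspending might look like this (dfpm job list is assumed to be available on this DFM version; the Status column in the -R listing should read idle):
C:\>dfpm job list
C:\>dfpm dataset list -R UserHome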
C:\>dfpm dataset suspend UserHome
Suspended dataset UserHome.
C:\>
Step 3: Relinquish Relationships using DFPM
From the DFM Server
C:\>dfpm dataset list -m UserHome
Id Node Name Dataset Id Dataset Name Member Type Name
---------- -------------------- ---------- -------------------- -------------------------------------------------- ---------------------------
1347 Primary data 1344 UserHome volume mpo-vsim14:/UserHome
1367 Backup 1344 UserHome volume mpo-vsim15:/UserHome
C:\> dfpm dataset list -R UserHome
Id Name Protection Policy Relationship Id State Status Hours Source Destination
---- --------- ------------------ -------------- ------------ ------- ----- ---------------------------- ----------------------------
1344 UserHome Back up 1371 snapvaulted idle 0.0 mpo-vsim14:/UserHome/- mpo-vsim15:/UserHome/UserHome_mpo-vsim14_UserHome
1344 UserHome Back up 1373 snapvaulted idle 0.0 mpo-vsim14:/UserHome/adai mpo-vsim15:/UserHome/adai
1344 UserHome Back up 1375 snapvaulted idle 0.0 mpo-vsim14:/UserHome/vsv mpo-vsim15:/UserHome/vsv
1344 UserHome Back up 1377 snapvaulted idle 0.0 mpo-vsim14:/UserHome/amir mpo-vsim15:/UserHome/amir
C:\>dfpm dataset relinquish mpo-vsim15:/UserHome/UserHome_mpo-vsim14_UserHome
Relinquished relationship (1371) with destination UserHome_mpo-vsim14_UserHome (1370).
C:\>dfpm dataset relinquish mpo-vsim15:/UserHome/adai
Relinquished relationship (1373) with destination adai (1372).
C:\>dfpm dataset relinquish mpo-vsim15:/UserHome/vsv
Relinquished relationship (1375) with destination vsv (1374).
C:\>dfpm dataset relinquish mpo-vsim15:/UserHome/amir
Relinquished relationship (1377) with destination amir (1376).
C:\>
Step 4: Remove all Resources from Dataset using NMC/CLI
From the NetApp Management Console
This can be done in the NMC: edit the dataset and remove the physical resources, first from the Backup node and then from the Primary node. Alternatively, do it using the CLI as follows:
C:\>dfpm dataset remove -N "Primary data" UserHome mpo-vsim14:/UserHome
Dataset dry run results
----------------------------------
Do: Checking that dataset configuration conforms to its policy.
Effect: Conformance checking failed.
Reason: Dataset has been manually suspended.
Suggestion: Click Resume on the Datasets window to reestablish protection job schedules.
Dataset conformance status will be updated after you resume protection of this dataset
Removed volume mpo-vsim14:/UserHome (1347) from dataset UserHome (1344).
C:\>dfpm dataset remove -N "Backup" UserHome mpo-vsim15:/UserHome
Dataset dry run results
----------------------------------
Do: Checking that dataset configuration conforms to its policy.
Effect: Conformance checking failed.
Reason: Dataset has been manually suspended.
Suggestion: Click Resume on the Datasets window to reestablish protection job schedules.
Dataset conformance status will be updated after you resume protection of this dataset
Removed volume mpo-vsim15:/UserHome (1367) from dataset UserHome (1344).
C:\>
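To confirm the removals, you can list the dataset members again with the same command used earlier; the member list should now come back empty:
C:\>dfpm dataset list -m UserHome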
Step 5: Relinquish the primary qtrees from the dataset using DFBM.
From the DFM Server CLI
C:\>dfbm primary dir list 1371
ID: 1371
Last Backup Status: Normal
Primary Directory: mpo-vsim14:/UserHome/-
Secondary Volume: mpo-vsim15:/UserHome
Secondary Volume ID: 1367
State: SnapVaulted
Lag: 29 mins
Status: Idle
Bandwidth Limit: None
Custom Script:
Run Custom Script As User:
C:\>dfbm primary dir relinquish 1371
Relinquished control over mpo-vsim14:/UserHome/-.
C:\>
Repeat this for all relationships.
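For this example, assuming the primary directory IDs match the relationship IDs shown earlier (as they did for 1371), the repeat would look something like this (confirm each ID first with dfbm primary dir list):
C:\>dfbm primary dir relinquish 1373
C:\>dfbm primary dir relinquish 1375
C:\>dfbm primary dir relinquish 1377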
Step 6: Migrate the data using technique 1 from NetApp KB 1011499
Existing SnapVault relationships:
Source Destination State Lag Status
mpo-vsim14:/vol/UserHome/- mpo-vsim15:/vol/UserHome/UserHome_mpo-vsim14_UserHome Snapvaulted 00:35:42 Idle
mpo-vsim14:/vol/UserHome/adai mpo-vsim15:/vol/UserHome/adai Snapvaulted 00:35:42 Idle
mpo-vsim14:/vol/UserHome/amir mpo-vsim15:/vol/UserHome/amir Snapvaulted 00:35:42 Idle
mpo-vsim14:/vol/UserHome/vsv mpo-vsim15:/vol/UserHome/vsv Snapvaulted 00:35:42 Idle
Desired SnapVault relationships
Source Destination State Lag Status
mpo-vsim16:/vol/UserHome/- mpo-vsim15:/vol/UserHome/UserHome_mpo-vsim14_UserHome Snapvaulted 00:14:41 Idle
mpo-vsim16:/vol/UserHome/adai mpo-vsim15:/vol/UserHome/adai Snapvaulted 00:13:33 Idle
mpo-vsim16:/vol/UserHome/amir mpo-vsim15:/vol/UserHome/amir Snapvaulted 00:27:23 Idle
mpo-vsim16:/vol/UserHome/vsv mpo-vsim15:/vol/UserHome/vsv Snapvaulted 00:16:26 Idle
Recipe (the generic steps from the KB, followed by the actual commands for this environment)
# Create a new volume
pri> vol create newvol aggr0 10g
pri> vol restrict newvol
pri> snapmirror initialize -S oldvol newvol
mpo-vsim16> vol create UserHome aggr1 80m
Creation of volume 'UserHome' with size 80m on containing aggregate
'aggr1' has completed.
mpo-vsim16> vol restrict UserHome
Volume 'UserHome' is now restricted.
mpo-vsim16> snapmirror initialize -S mpo-vsim14:UserHome UserHome
Transfer started.
Monitor progress with 'snapmirror status' or the snapmirror log.
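As the output suggests, the baseline transfer can be monitored on the new controller, for example:
mpo-vsim16> snapmirror status UserHome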
# At time of cutover stop client access to oldvol and continue:
pri> snapmirror update -S oldvol newvol
pri> snapmirror break newvol
mpo-vsim16> snapmirror break UserHome
snapmirror break: Destination UserHome is now writable.
Volume size is being retained for potential snapmirror resync. If you would like to grow the volume and do not expect to resync, set vol option fs_size_fixed to off.
mpo-vsim16>
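Note that the generic recipe also calls for a final snapmirror update (after client access to the old volume has been stopped) before the break; for this environment that would be something along the lines of:
mpo-vsim16> snapmirror update -S mpo-vsim14:UserHome UserHome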
# Resume client access to newvol
# Update the SnapVault relationship
sec> snapvault modify -S pri:/vol/newvol/qtree sec:/vol/tradvol/qtree
sec> snapvault modify -S pri:/vol/newvol/qtree2 sec:/vol/tradvol/qtree2
If you do not want to wait for the next update schedule, you can trigger the update yourself using the following CLI:
sec> snapvault update /vol/tradvol/qtree
Alternatively, you can also use snapvault start with -r, which does the job of both modify and update:
sec> snapvault start -r -S mpo-vsim16:/vol/UserHome/adai mpo-vsim15:/vol/UserHome/adai
mpo-vsim15> snapvault modify -S mpo-vsim16:/vol/UserHome/- mpo-vsim15:/vol/UserHome/UserHome_mpo-vsim14_UserHome
mpo-vsim15> snapvault modify -S mpo-vsim16:/vol/UserHome/adai mpo-vsim15:/vol/UserHome/adai
mpo-vsim15> snapvault modify -S mpo-vsim16:/vol/UserHome/amir mpo-vsim15:/vol/UserHome/amir
mpo-vsim15> snapvault modify -S mpo-vsim16:/vol/UserHome/vsv mpo-vsim15:/vol/UserHome/vsv
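If you prefer not to wait for the scheduled transfer, the updates for this environment can be triggered manually on the secondary along these lines (destination qtree paths taken from the listing above):
mpo-vsim15> snapvault update /vol/UserHome/UserHome_mpo-vsim14_UserHome
mpo-vsim15> snapvault update /vol/UserHome/adai
mpo-vsim15> snapvault update /vol/UserHome/amir
mpo-vsim15> snapvault update /vol/UserHome/vsv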
Step 7: Re-discover the Storage Controllers in DFM
C:\>dfm host discover mpo-vsim16
Refreshing data from host mpo-vsim9.mponbtme.lab.eng.btc.netapp.in (134) now.
C:\>dfm host discover mpo-vsim15
Refreshing data from host mpo-vsim10.mponbtme.lab.eng.btc.netapp.in (135) now.
C:\>
Wait for svtimestamp to be populated on both source and destination.
C:\>dfm detail mpo-vsim15 | findstr /i svTimestamp
svTimestamp 2012-08-08 19:15:02.000000
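The same check can be run against the new source controller; wait until it also shows a populated timestamp:
C:\>dfm detail mpo-vsim16 | findstr /i svTimestamp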
Step 8: Add the physical resources back to DFM using DFBM
From the DFM Server CLI
C:\>dfbm primary dir add mpo-vsim15:/UserHome mpo-vsim16:/UserHome/-
Started job 3608 to create backup relationship between mpo-vsim15:/UserHome and mpo-vsim16:/UserHome/-.
C:\>
This will not do a rebaseline; because the relationship already exists, we are only adding it to DFM.
C:\>dfbm job list 3608
Job: 3608
Status: Normal
Time Started: 08 Aug 19:29
Description: Importing backup relationship between secondary mpo-vsim15:/UserHome and primary mpo-vsim16:/UserHome/-.
Progress: Done
Arguments: svsVolume=1367&svpHost=141&svpDir=%2FUserHome%2F-&svThrottle=0
C:\>
Take careful note of the syntax here. The secondary/SnapVault volume goes at the beginning, with no qtree and no /vol syntax. The primary SnapVault source goes last and includes the qtree.
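Following that syntax, the remaining qtrees in this example would be added the same way:
C:\>dfbm primary dir add mpo-vsim15:/UserHome mpo-vsim16:/UserHome/adai
C:\>dfbm primary dir add mpo-vsim15:/UserHome mpo-vsim16:/UserHome/amir
C:\>dfbm primary dir add mpo-vsim15:/UserHome mpo-vsim16:/UserHome/vsv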
Step 9: Import the new relationships into the dataset.
From NetApp Management Console
Browse External Relationships and you will find the new SnapVault relationships. Import them to the correct nodes in the dataset.
PS: You may see this warning, but it is nothing to worry about.
After importing the relationships, the dataset will show a status of Warning. When you investigate the warning, the dataset reports "no backup history exists". To clear this warning event, either run an on-demand backup job or wait for the next scheduled backup to occur. Either way, the backup job will force an update of the relationship and create a new backup snapshot. Once the first backup for the imported relationship is successful, the warning status will go away.
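If you prefer the CLI for the on-demand backup, something like the following should work (assuming dfpm backup start is supported on your DFM version):
C:\>dfpm backup start UserHome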
Step 10: Resume the Dataset, Protect Now, and Review Jobs
We resume the dataset from the NMC and hit Protect Now with an hourly job. When reviewing the jobs, we notice a few failed jobs from 30 minutes prior; this may be due to adding the resources back into the dataset (the primary and tertiary), which may have been unnecessary.
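Step 11 from the high-level flow (reverting the reaper option) is not shown in the transcript; it is simply the inverse of Step 1, presumably:
C:\>dfm options set dpReaperCleanupMode=Orphans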
Things to note as part of this migration.
Some of the things you should be aware of are the following.
Regards
adai
Wow Adai, thank you so much 😉
Will that procedure be available to migrate a source controller from 7-Mode to Cluster-Mode?
Regards,
Christophe
Hi Christophe,
This procedure is for migrating from 7-Mode to 7-Mode. There will definitely be a procedure to migrate volumes from 7-Mode to Cluster-Mode, but I don't have that one.
Regards
adai