Ask The Experts
Dear community experts,
I have a question regarding a migration from 7Mode controller to CDOT cluster.
Here is my scenario, composed of 3 NetApp systems with only the CIFS protocol enabled (300TB of data):
SYS-A (7-mode)
SYS-B (CDOT)
SYS-C (CDOT)
Right now we are copying volumes from SYS-A to SYS-B using 7MTT. On SYS-B the volumes are in a “TMP” state and are not accessible.
In this state, if I configure SVM DR from SYS-B to SYS-C, my understanding is that only the SVM configuration will be SnapMirrored to SYS-C, but the volumes will not be.
We are now planning to move ahead with the Precutover phase of the 7MTT migration from SYS-A to SYS-B. In this phase the volumes will go into Read/Write mode (available for testing), and here comes my question: will my volumes start SnapMirroring to SYS-C using the configured SVM DR relationship?
The last step will be the actual Cutover phase of the 7MTT migration from SYS-A to SYS-B. Here 7MTT will perform another incremental sync from SYS-A to SYS-B, setting the volumes back to the “TMP” state and making them inaccessible again.
After cutover the volumes will go offline on SYS-A and come online in production on SYS-B, and I will then perform incremental SVM DR updates from SYS-B to SYS-C. Does that make sense? Would this scenario work the way I am guessing?
Please let me know if anything is unclear and I will be happy to add more details.
Thank you and best regards,
Mattia
Hi,
The scenario you have explained is very clear.
I haven't set up SVM DR yet in a production or test environment, but looking at the SVM DR Express Guide, one of the conditions mentioned is this:
The source SVM does not contain data protection (DP) volumes or transition data protection (TDP) volumes (which you have already highlighted in your query). Basically, until the source volumes are 'RW', SVM DR will not be operational.
SVM DR express guide:
https://library.netapp.com/ecm/ecm_download_file/ECMLP2496254
For example:
Creating an SVM DR destination is no big deal, but it has no purpose until it is initialized.
I can create an SVM DR destination just like this on the DR cluster:
dr_cluster::> vserver create -vserver ds_vs1 -subtype dp-destination
[Job 136] Job succeeded:
Vserver creation completed.
-subtype dp-destination is the key for creating SVM DR.
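A quick way to double-check the subtype afterwards:
dr_cluster::> vserver show -vserver ds_vs1 -fields subtype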
I can even create the Peering:
dr_cluster::> vserver peer create -vserver ds_vs1 -peer-vserver source_vserver -applications snapmirror
[Job 137] Job succeeded:
Vserver creation completed.
I guess the next step would be to create the SnapMirror relationship and then run 'snapmirror initialize'. Creating the relationship should be fine, but it's the 'initialize' phase when both the data and configuration replication start. This is the key bit: depending on whether the -identity-preserve option is true or false, it will copy the configuration (network and NAS settings) and data to the DR SVM, but even with -identity-preserve false, the volume data will still be copied.
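To make that concrete, here is a rough sketch of those two steps at the SVM level, reusing the example vserver names from above (policy and schedule left at their defaults):
dr_cluster::> snapmirror create -source-path source_vserver: -destination-path ds_vs1: -type DP -identity-preserve true
dr_cluster::> snapmirror initialize -destination-path ds_vs1:
Note the trailing colon after the vserver names; that is what makes these SVM-level rather than volume-level SnapMirror paths.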
Coming to your scenario:
When you apply precutover (test) read/write mode, the volumes will appear as 'RW'. But as this is a test mode, and considering the size (300TB), I would not initialize the SVM DR, because that starts the baseline transfer, and depending on the bandwidth and transfer rate it could block you from the final cutover until the baseline completes. Therefore, I would rather do the necessary testing during the 'test' mode, perform the final cutover, and initialize the SVM DR once the volumes are in RW mode. By default it will initialize all the 'RW' volumes, so there is an option that lets you exclude any volumes you don't want DR-protected (see the sketch below).
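The exclusion I am referring to is a per-volume setting applied on the source (the cluster and volume names here are just placeholders):
src_cluster::> volume modify -vserver source_vserver -volume vol_excluded -vserver-dr-protection unprotected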
I think if you want to test it there is no harm, but you may have to stop the replication transfer and delete the relationship when you decide to do the final cutover (roughly as sketched below). My recommendation would be to finish the 7MTT final cutover and then initialize the SVM DR.
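For reference, tearing down a test relationship would look roughly like this ('release' runs on the source cluster; cluster names are again placeholders):
dr_cluster::> snapmirror quiesce -destination-path ds_vs1:
dr_cluster::> snapmirror delete -destination-path ds_vs1:
src_cluster::> snapmirror release -source-path source_vserver: -destination-path ds_vs1: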
Hello, and thanks a lot for your reply.
I've also opened a support case about my scenario, and I can confirm that they gave me pretty much the same answer.
I believe I will give my proposed migration plan a try; theoretically everything should work, but as nobody seems to have tested this before, I will see at the end of the migration.
Luckily my 300TB of data are being migrated to 3 separate SVMs on SYS-B; the smallest of the 3 is only 20TB and I will use that one as a test.
In the worst case I will reconfigure 7MTT and start again from the baseline.
We will have a 10Gb connection to the DR site set up during the next week, so I will start testing right after the connection is in place.
I will come back and update this post with the result; maybe somebody else will find it useful.
Thanks for the update. Your plan sounds good. Please do let us know how it goes.
Hello Experts!
Just thought it would be good to update this thread now that my migration plan is complete.
In the end I was forced to activate my SVM DR destinations only after the actual 7MTT cutover.
Having the DR destination initialized, or even quiesced, returned lots of errors when I closed the testing phase with 7MTT, as the incremental update would not work. This is because 7MTT does not accept its destination volumes also being the source of another SnapMirror relationship.
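In case it helps anyone hitting the same errors: you can spot this condition before closing the testing phase by listing all SnapMirror destinations on the 7MTT target cluster; any 7MTT destination volume that shows up there as a source will cause the problem (the cluster name below is just an example):
sys_b::> snapmirror list-destinations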
Anyway, the 10Gb connection between the PROD and DR sites, which are over 500 km apart, helped a lot. The two sites are now syncing every 30 minutes without any issues.
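For reference, the 30-minute cadence is just a cron schedule attached to the SnapMirror relationship, roughly like this (the schedule name and the SVM path reuse example names from earlier in the thread):
dr_cluster::> job schedule cron create -name every30min -minute 0,30
dr_cluster::> snapmirror modify -destination-path ds_vs1: -schedule every30min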
Thank you all for the support, and feel free to contact me with any additional questions regarding this topic.