Short description of my environment: Data ONTAP 8.3.1, two clusters, clus1 and clus2, at different physical locations.
In my environment I've got a couple of SVMs serving CIFS-only or CIFS/NFS data. Each primary_SVM (clus1) has its secondary_SVM on clus2 (different location), to which all volumes are replicated via SnapMirror. There is no customer access to the secondary_SVMs, and both SVM subtypes are default. This configuration was created while the clusters were still on ONTAP 8.2 (no SVM DR available at that time). All volumes are replicated, but the secondary_SVMs have no replicated CIFS shares or NFS export policies. Please correct me if I'm wrong, but I understand such a configuration to be a volume-level disaster recovery setup.
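For reference, each volume-level relationship in this setup would have been created roughly like this (volume and SVM names are just placeholders for my environment):

    clus2::> snapmirror create -source-path primary_SVM:vol1 -destination-path secondary_SVM:vol1_dr -type DP
    clus2::> snapmirror initialize -destination-path secondary_SVM:vol1_dr

This carries volume data and snapshots only; CIFS shares and export policies are part of the SVM configuration and are not touched by volume-level SnapMirror.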
Recently a new SVM was created, and since our ONTAP version (8.3.1) gives us the possibility to create an SVM DR relationship, the new_SVM (subtype: default) on clus1 has its new_SVM_DR (subtype: dp-destination) on clus2. In this configuration all CIFS share information is also replicated to my DR site (clus2).
Now for my main questions:
1. Is there any way I can transition/modify my old secondary_SVMs (subtype: default) into DR secondary_SVMs (subtype: dp-destination)? I would like to replicate CIFS share information, and hopefully export policies for volumes and qtrees. Ideally I would avoid the need for new baseline transfers.
If the only option is to destroy secondary_SVM and create a new one with subtype dp-destination, but I can keep the volumes on clus2 and resync the volume SnapMirror relationships without going through a baseline again, that would be great too!
2. My DR plan assumes that SVM_DR has a separate CIFS server, with a separate machine object / different name, joined to the same domain. During DR we would only create a DNS alias mapping SVM_name to SVM_DR_name. In such a configuration we should set identity-preserve to false, correct? How can I then get my NFS export-policy information to the DR site? According to what I've read, export policies are only replicated when identity-preserve is true, but that would affect my CIFS configuration during DR.
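For comparison, my understanding is that a fresh SVM DR relationship without identity preservation would be set up along these lines (names are placeholders, and I may be missing options):

    clus2::> vserver create -vserver SVM_DR_name -subtype dp-destination
    clus2::> snapmirror create -source-path SVM_name: -destination-path SVM_DR_name: -identity-preserve false
    clus2::> snapmirror initialize -destination-path SVM_DR_name:

What I can't tell is what happens to the export policies in this mode.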
@JGPSHNTAP thanks! You have a good point! It's my lack of understanding of the subject, I guess.
If I decide to keep the same CIFS server name on my DR side, how does that work during an actual disaster?
My primary site is unavailable, so I start the SVM on my secondary site. Should I join the CIFS server to the domain under the same name it had on the source, or will the CIFS server automatically start as a domain member, since its identity was somehow replicated from the source?
I'm also confused about LIF replication. If I keep identity-preserve=true it will also replicate my data/NAS LIFs, and I guess it's not only the LIF name but also the IP configuration. In that case both source and destination must be configured with the same VLAN so they can keep the same IP addresses, right?
After the transition, volume-level SnapMirror can be converted to SVM-level SnapMirror; I believe this is possible starting with ONTAP 9.x.
I was able to get this working on ONTAP 9.1P10 for an SVM serving NFS/CIFS with no SAN.
For this to work, the volume-level SnapMirror relationships should be left undisturbed.
Create a new SnapMirror relationship between the two vservers with the identity-preserve option set to true. Then a direct resync (it will initially warn that the DR SVM must be stopped) produces a single SVM-level SnapMirror relationship, deleting the volume-level relationships by itself.
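In CLI terms the conversion looked roughly like this for me (vserver names are placeholders):

    clus2::> snapmirror create -source-path src_svm: -destination-path dst_svm: -identity-preserve true
    clus2::> vserver stop -vserver dst_svm
    clus2::> snapmirror resync -destination-path dst_svm:

After the resync, snapmirror show reports a single SVM-level relationship and the old volume-level entries are gone.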
Has anyone experienced this issue on 9.3P3 or above?
It seems that from 9.3 the default relationship type changed from DP to XDP, and I keep getting the error message: "Error: command failed: There are one or more volumes in this Vserver which do not have a volume-level SnapMirror relationship with volumes in Vserver "source"."
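If the problem is a DP/XDP type mismatch on the existing volume-level relationships, my understanding of the documented way to convert a DP relationship to XDP without a new baseline is roughly this (paths are placeholders; please double-check against the docs for your version):

    clus2::> snapmirror delete -destination-path secondary_SVM:vol1_dr
    clus1::> snapmirror release -destination-path secondary_SVM:vol1_dr -relationship-info-only true
    clus2::> snapmirror create -source-path primary_SVM:vol1 -destination-path secondary_SVM:vol1_dr -type XDP -policy MirrorAllSnapshots
    clus2::> snapmirror resync -destination-path secondary_SVM:vol1_dr

The error text also suggests checking with snapmirror show that every data volume on the source has a matching volume-level relationship before attempting the SVM-level resync.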