Hi Team,
I have two environments (Prod and DR), and I have deployed an app on Prod. I am planning to perform a failover from Prod to DR with the help of Trident. I have installed and configured the backend on both clusters and followed the URL below to implement the DR failover. I stopped the SVM on Prod and tried to create the backend on DR and discover the volumes as per the steps described in the URL below, but I am unable to find the disk. Can you please help me find the right approach?
https://netapp-trident.readthedocs.io/en/stable-v19.04/dag/kubernetes/backup_disaster_recovery.html
Regards,
Bala
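One step that is easy to miss in the linked document: after the SnapMirror break, Trident on the DR side needs its own backend definition pointing at the DR SVM before it can discover anything. A minimal sketch of such a backend, assuming the `ontap-nas` driver and placeholder values (SVM name, LIF addresses, and credentials must match your DR environment):

```
{
  "version": 1,
  "storageDriverName": "ontap-nas",
  "backendName": "dr-nas-backend",
  "managementLIF": "10.0.0.1",
  "dataLIF": "10.0.0.2",
  "svm": "svm_dr",
  "username": "admin",
  "password": "secret"
}
```

This would be loaded with `tridentctl create backend -f backend-dr.json`; the driver name and fields would differ for an iSCSI (`ontap-san`) backend.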
3 REPLIES
It's not clear which backend you use (NFS, iSCSI, etc.) or whether you have one Kubernetes cluster or two, so I won't go through all the possibilities.
Some things that may help you:
1) Up-to-date Astra Trident docs are at https://docs.netapp.com/us-en/trident-2201/index.html (that link is for v22.01; other recent versions are on the same site)
2) If your setup is similar to what's described in this post, you can try the approach here:
https://netapp.io/2019/10/21/trident-and-disaster-recovery-part-3/
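The failover sequence in that post boils down to a few ONTAP CLI steps. A sketch with hypothetical SVM and volume names (adjust to your environment):

```
# On the destination cluster (svm_dr and vol_k8s are placeholders)
snapmirror quiesce -destination-path svm_dr:vol_k8s
snapmirror break   -destination-path svm_dr:vol_k8s

# Stop the source SVM (run on the source cluster), then bring up the DR SVM
vserver stop  -vserver svm_prod
vserver start -vserver svm_dr
```

After that, you would create a Trident backend pointing at `svm_dr` and re-create the PV/PVC objects on the DR cluster as the post describes.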
We configured an OCP cluster, used the trident-operator for installation, and defined the StorageClasses, backend discovery, and PVCs.
We are using NFS and iSCSI SVMs. The NFS exports are being discovered on the respective DR SVM, but for iSCSI we can get the PVs, yet they are not mapped to the igroup that dynamic provisioning configured on the source site.
To test the DR, we followed the steps in this document:
https://netapp-trident.readthedocs.io/en/stable-v19.04/dag/kubernetes/backup_disaster_recovery.html
We did the SnapMirror break and made the destination SVM read/write by stopping the source SVM and starting the destination SVM.
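A possible explanation for the symptom above: with volume-level SnapMirror, igroups and LUN mappings are SVM configuration and are not replicated with the volumes, so the igroup and mappings Trident created on the source SVM will not exist on the DR SVM after the break. A sketch of recreating them by hand on the DR side, with hypothetical names (igroup name, initiator IQN, and LUN path must match your workers and volumes):

```
# On the DR SVM (all names are placeholders)
lun igroup create -vserver svm_dr -igroup k8s_igroup \
    -protocol iscsi -ostype linux \
    -initiator iqn.1994-05.com.redhat:worker1

lun mapping create -vserver svm_dr -path /vol/trident_pvc_vol/lun0 \
    -igroup k8s_igroup
```

This is an assumption about your setup, not a confirmed fix; SVM-DR (SnapMirror SVM replication) would carry this configuration over automatically.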
Okay, let's see if one of the ONTAP-focused folks can explain how to fix that problem.
