Tech ONTAP Blogs
Many organizations have successfully extended the DevOps operational framework to cover application infrastructure by using Git as the single source of truth. This process has been coined “GitOps” and has a wide array of benefits, including increased productivity, improved security and compliance, increased reliability, and a built-in audit trail.
Argo CD is a popular GitOps tool in the market today, is entirely open source, and is currently a Cloud Native Computing Foundation (CNCF) graduated project. Argo CD is extremely easy to set up, has a robust built-in GUI, and is great at abstracting the complexities of Kubernetes. Developers only need to commit code to their Git repository, and Argo CD picks up on those changes and automatically synchronizes them to the relevant infrastructure.
Regardless of where your organization is on its DevOps and GitOps journey, your Kubernetes applications require strong application-aware data protection and disaster recovery, just like your traditional applications. NetApp® Trident™ protect software provides advanced data management capabilities that enhance the functionality and availability of stateful Kubernetes applications backed by storage systems running NetApp ONTAP® data management software and the proven Trident Container Storage Interface (CSI) storage provisioner. Trident protect simplifies the management, protection, and movement of containerized workloads across public clouds and on-premises environments. It also offers automation capabilities through its Kubernetes-native API and powerful tridentctl-protect CLI, enabling programmatic access for seamless integration with existing workflows.
However, manually creating Trident protect protection policies for the persistent applications after deployment, or manually backing up the volumes before application changes, is the antithesis of GitOps. These policies should instead be defined alongside our app definitions in our single source of truth, the Git repository. Thankfully, with Argo CD and Trident protect custom resource definitions (CRDs), defining application protection policies in your Git repository is a very simple process. To find out how, read on.
If you plan to follow this blog step by step, you need to have the following available:
We’ll use the getting started page to install Argo CD on our Kubernetes cluster, but if you already have Argo CD deployed in your environment, skip to the next section. On your workstation CLI, run the following commands against the Kubernetes cluster that will host the Argo CD application.
$ kubectl create namespace argocd
$ kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml
$ kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}'
These commands should produce a decent amount of output as they create the argocd namespace, apply the Argo CD Kubernetes manifest, and finally patch the argocd-server service type to be a load balancer for external access.
After a few moments, we’re ready to grab our access information for Argo CD. In this blog, we’ll mainly use Argo CD’s GUI to concisely present information and to minimize the number of installation dependencies, and use Argo CD’s CLI for adding clusters to Argo CD. However, in production workflows, you might want to use Argo CD's CLI for most operations.
$ kubectl -n argocd get svc argocd-server
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
argocd-server LoadBalancer 172.16.15.211 108.141.208.68 80:30642/TCP,443:30335/TCP 2m23s
$ kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo
Copy the external IP value from the output of the first command and paste it into your web browser. Then sign in with the username admin and the password copied from the output of the second command.
Finally, install the Argo CD CLI on your workstation following the instructions for your OS and log in to the Argo CD server (do not use --insecure in production).
$ argocd login 108.141.208.68 --insecure
Username: admin
Password:
'admin:login' logged in successfully
Context '108.141.208.68' updated
Now that we have Argo CD and its CLI installed, it’s time to set up our demo GitHub repository.
Rather than following the (stateless) example application on the getting started page of the Argo CD documentation, we’ll use a stateful application called Ghost, a popular open-source publishing platform. We’ll use two different ways of protecting a Ghost application with Trident protect; both application definitions are in this GitHub repository. It’s best to fork this repository so that you can make your own changes.
In your web browser, log in to GitHub, navigate to the patric0303/argocd-tridentprotect-v1 repository on GitHub, and then click the Fork button in the upper-right corner. Select your username as the owner, leave the repository name as argocd-tridentprotect-v1, optionally leave the description as is, and then click Create Fork.
Now clone the repository on your workstation (be sure to update the username) and change to the new directory.
$ git clone git@github.com:<YourGitUsername>/argocd-tridentprotect-v1.git
…
$ cd argocd-tridentprotect-v1
Now that our Git repository is up and running, let’s investigate the directories and YAML files in the repository.
We’ll cover two approaches to protect the persistent application with Trident protect (protecting the complete application and protecting the persistent volumes only), so we’ll be deploying two Ghost applications, based on the manifests in the ghost/ and ghost-dataonly/ directories.
The definition of the demo Kubernetes Ghost application is the same in both cases (except for the namespaces) and is reflected in the first two files:
In the ghost/ directory, we have three additional files that define the Trident protect CRs that we want Argo CD to automatically deploy together with the Ghost application. Let’s have a closer look at them.
The tp-application.yaml manifest defines the application for management with Trident protect; in this case, we manage the complete ghost namespace:
apiVersion: protect.trident.netapp.io/v1
kind: Application
metadata:
  name: ghost
  namespace: ghost
spec:
  includedNamespaces:
    - namespace: ghost
The tp-protections.yaml manifest defines the Trident protect protection schedules. Here we define hourly, daily, weekly, and monthly snapshots and backups to protect the Ghost application regularly once it has been deployed:
apiVersion: protect.trident.netapp.io/v1
kind: Schedule
metadata:
  name: ghost-hourly
  namespace: ghost
spec:
  appVaultRef: argocdtest
  applicationRef: ghost
  backupRetention: "1"
  dayOfMonth: ""
  dayOfWeek: ""
  granularity: Hourly
  hour: ""
  minute: "50"
  snapshotRetention: "1"
---
apiVersion: protect.trident.netapp.io/v1
kind: Schedule
metadata:
  name: ghost-daily
  namespace: ghost
spec:
  appVaultRef: argocdtest
  applicationRef: ghost
  backupRetention: "1"
  dayOfMonth: ""
  dayOfWeek: ""
  granularity: Daily
  hour: "1"
  minute: "0"
  snapshotRetention: "1"
---
apiVersion: protect.trident.netapp.io/v1
kind: Schedule
metadata:
  name: ghost-weekly
  namespace: ghost
spec:
  appVaultRef: argocdtest
  applicationRef: ghost
  backupRetention: "2"
  dayOfMonth: ""
  dayOfWeek: "7"
  granularity: Weekly
  hour: "2"
  minute: "0"
  snapshotRetention: "1"
---
apiVersion: protect.trident.netapp.io/v1
kind: Schedule
metadata:
  name: ghost-monthly
  namespace: ghost
spec:
  appVaultRef: argocdtest
  applicationRef: ghost
  backupRetention: "1"
  dayOfMonth: "1"
  dayOfWeek: ""
  granularity: Monthly
  hour: "2"
  minute: "20"
  snapshotRetention: "1"
The last manifest, tp-exechooks.yaml, defines the Trident protect pre- and post-snapshot execution hooks that quiesce the MySQL database before taking a snapshot to make sure the snapshots and backups are application-consistent:
apiVersion: protect.trident.netapp.io/v1
kind: ExecHook
metadata:
  name: pre-snapshot-mysql
  namespace: ghost
spec:
  action: Snapshot
  stage: Pre
  applicationRef: ghost
  timeout: 1
  arguments:
    - pre
  enabled: true
  hookSource: IyEvYmluL3NoCgojCiMgc3VjY2Vzc19zYW1wbGUuc2gKIwojIEEgc2ltcGxlIG5vb3Agc3VjY2VzcyBob29rIHNjcmlwdCBmb3IgdGVzdGluZyBwdXJwb3Nlcy4KIwojIGFyZ3M6IE5vbmUKIwoKCiMKIyBXcml0ZXMgdGhlIGdpdmVuIG1lc3NhZ2UgdG8gc3RhbmRhcmQgb3V0cHV0CiMKIyAkKiAtIFRoZSBtZXNzYWdlIHRvIHdyaXRlCiMKbXNnKCkgewogICAgZWNobyAiJCoiCn0KCgojCiMgV3JpdGVzIHRoZSBnaXZlbiBpbmZvcm1hdGlvbiBtZXNzYWdlIHRvIHN0YW5kYXJkIG91dHB1dAojCiMgJCogLSBUaGUgbWVzc2FnZSB0byB3cml0ZQojCmluZm8oKSB7CiAgICBtc2cgIklORk86ICQqIgp9CgojCiMgV3JpdGVzIHRoZSBnaXZlbiBlcnJvciBtZXNzYWdlIHRvIHN0YW5kYXJkIGVycm9yCiMKIyAkKiAtIFRoZSBtZXNzYWdlIHRvIHdyaXRlCiMKZXJyb3IoKSB7CiAgICBtc2cgIkVSUk9SOiAkKiIgMT4mMgp9CgoKIwojIG1haW4KIwoKIyBsb2cgc29tZXRoaW5nIHRvIHN0ZG91dAppbmZvICJydW5uaW5nIHN1Y2Nlc3Nfc2FtcGxlLnNoIgoKIyBleGl0IHdpdGggMCB0byBpbmRpY2F0ZSBzdWNjZXNzIAppbmZvICJleGl0IDAiCnNsZWVwIDMwMApleGl0IDA=
  matchingCriteria:
    - type: containerImage
      value: mysql
---
apiVersion: protect.trident.netapp.io/v1
kind: ExecHook
metadata:
  name: post-snapshot-mysql
  namespace: ghost
spec:
  action: Snapshot
  stage: Post
  applicationRef: ghost
  arguments:
    - post
  enabled: true
  hookSource: IyEvYmluL3NoCgojCiMgc3VjY2Vzc19zYW1wbGUuc2gKIwojIEEgc2ltcGxlIG5vb3Agc3VjY2VzcyBob29rIHNjcmlwdCBmb3IgdGVzdGluZyBwdXJwb3Nlcy4KIwojIGFyZ3M6IE5vbmUKIwoKCiMKIyBXcml0ZXMgdGhlIGdpdmVuIG1lc3NhZ2UgdG8gc3RhbmRhcmQgb3V0cHV0CiMKIyAkKiAtIFRoZSBtZXNzYWdlIHRvIHdyaXRlCiMKbXNnKCkgewogICAgZWNobyAiJCoiCn0KCgojCiMgV3JpdGVzIHRoZSBnaXZlbiBpbmZvcm1hdGlvbiBtZXNzYWdlIHRvIHN0YW5kYXJkIG91dHB1dAojCiMgJCogLSBUaGUgbWVzc2FnZSB0byB3cml0ZQojCmluZm8oKSB7CiAgICBtc2cgIklORk86ICQqIgp9CgojCiMgV3JpdGVzIHRoZSBnaXZlbiBlcnJvciBtZXNzYWdlIHRvIHN0YW5kYXJkIGVycm9yCiMKIyAkKiAtIFRoZSBtZXNzYWdlIHRvIHdyaXRlCiMKZXJyb3IoKSB7CiAgICBtc2cgIkVSUk9SOiAkKiIgMT4mMgp9CgoKIwojIG1haW4KIwoKIyBsb2cgc29tZXRoaW5nIHRvIHN0ZG91dAppbmZvICJydW5uaW5nIHN1Y2Nlc3Nfc2FtcGxlLnNoIgoKIyBleGl0IHdpdGggMCB0byBpbmRpY2F0ZSBzdWNjZXNzIAppbmZvICJleGl0IDAiCnNsZWVwIDMwMApleGl0IDA=
  matchingCriteria:
    - type: containerImage
      value: mysql
In the ghost-dataonly/ directory, we have the manifests for the case in which Trident protect protects only the persistent volumes of the Ghost application in the ghost-2 namespace. The key difference is in the definition of the Trident protect application tp-application.yaml:
apiVersion: protect.trident.netapp.io/v1
kind: Application
metadata:
  name: ghost-dataonly
  namespace: ghost-2
spec:
  includedNamespaces:
    - labelSelector:
        matchLabels:
          volume: persistent
      namespace: ghost-2
We leverage Trident protect’s capability to use label selectors during application definition. The application definition just shown includes only resources in the ghost-2 namespace that match the volume: persistent label, and we labeled (only) the PVCs in the Ghost front-end and back-end manifests accordingly.
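For reference, the label on the PVCs looks roughly like the following excerpt of a PVC definition (a sketch based on the PVC names, sizes, and storage class shown later in this post; the actual manifests in the repository may differ in detail):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mysql-pv-claim
  labels:
    volume: persistent   # this label pulls the PVC into the Trident protect application
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: azure-netapp-files-standard
  resources:
    requests:
      storage: 100Gi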
Apart from the application name and the namespace, the manifest for the Trident protect protection policies tp-protections.yaml is identical to the one in the ghost/ directory.
We don’t have a manifest for Trident protect execution hooks in ghost-dataonly/, because Trident protect manages only the PVCs in this case, so it can’t quiesce the application. Therefore, the snapshots and backups in the data-only protection approach will only be crash-consistent!
For application access, this Ghost application uses a domain name in the front-end deployment environment variable section. If you’re following along step by step, choose one of the following options:
Both options are highly dependent on several factors (DNS provider and host OS, respectively), so these steps are left up to you. If you’re using the first option, go ahead and make the domain name update in the frontend.yaml files (line 98) and commit those changes to your forked Git repository now.
Argo CD is agnostic on secret management, enabling administrators to use the secret manager of their choice through a wide range of integrations. If you’re using Argo CD in production, we highly recommend using one of the supported secret management tools.
This demo is focused on automatic application data protection, so we’ll sidestep the setup and configuration of a secret manager. However, it’s bad practice to put secrets into a Git repository (production or not), so we’ll manually define our secrets outside Argo CD and apply them through kubectl.
Let’s create our application passwords, which can be any value (minimum of 10 characters). Take note of the Ghost password value, because you’ll need it in a later step.
$ MYSQL_RPASSWORD=$(echo -n "ChangeToAnythingYouWant" | base64)
$ MYSQL_PASSWORD=$(echo -n "ChangeToAnythingYouWant2" | base64)
$ GHOST_PASSWORD=$(echo -n "ChangeToAnythingYouWant3" | base64)
$ cat <<EOF >secrets.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ghost-mysql
  labels:
    name: mysql
    instance: ghost
type: Opaque
data:
  mysql-root-password: $MYSQL_RPASSWORD
  mysql-password: $MYSQL_PASSWORD
---
apiVersion: v1
kind: Secret
metadata:
  name: ghost
  labels:
    name: ghost
    instance: ghost
type: Opaque
data:
  ghost-password: $GHOST_PASSWORD
EOF
Finally, we’ll create our Kubernetes namespaces and apply the secret file we just created.
$ kubectl create namespace ghost
$ kubectl -n ghost apply -f secrets.yaml
Repeat these secret creation steps for the ghost-2 namespace hosting the data-only version of the demo.
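For example:

$ kubectl create namespace ghost-2
$ kubectl -n ghost-2 apply -f secrets.yaml

We reuse the same secrets.yaml file here to keep the demo simple; in production, each application would of course get its own credentials.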
Now that we have Argo CD, the GitHub repository, and secrets created, we’re ready to deploy our demo Ghost applications. Head back to your browser and click the Create Application button in the middle of the Argo CD UI.
If you want to deploy the Ghost applications on a cluster different from the Argo CD cluster, as we do in this blog post, you need to add that cluster to Argo CD by using the Argo CD CLI. You can skip this step if you want to deploy the demo applications on the same cluster as Argo CD.
To add the cluster aks-pu-ghost-cluster to Argo CD, issue an argocd cluster add command:
$ argocd cluster add aks-pu-ghost-cluster
WARNING: This will create a service account `argocd-manager` on the cluster referenced by context `aks-pu-ghost-cluster` with full cluster level privileges. Do you want to continue [y/N]? y
INFO[0003] ServiceAccount "argocd-manager" created in namespace "kube-system"
INFO[0003] ClusterRole "argocd-manager-role" created
INFO[0003] ClusterRoleBinding "argocd-manager-role-binding" created
INFO[0003] Created bearer token secret for ServiceAccount "argocd-manager"
Cluster 'https://akspu-ghost-2yuubndt.hcp.westeurope.azmk8s.io:443' added
Now the newly added cluster will appear in the Argo CD UI under Settings > Clusters:
In the wizard panel that appears, click the Edit As YAML button in the upper right, which will allow us to easily paste in our application definition.
Copy the following application definition from ghost/argo-app-definition.yaml and paste it into the browser text field:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: ghost-demo
spec:
  destination:
    name: ''
    namespace: ghost
    server: 'https://kubernetes.default.svc'
  source:
    path: ghost
    repoURL: 'https://github.com/patric0303/argocd-tridentprotect-v1'
    targetRevision: HEAD
  project: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: false
If you use a separate cluster for the Ghost installations, edit the server value to the cluster’s URL. Also edit the repoURL to point at the GitHub repository you created earlier, and then click Save in the upper-right corner.
We can now verify that the fields in the application definition have been populated from the YAML, including the general information (name, project, sync options), the application source information (repository URL, branch, and folder/path), and the destination cluster information. After verification, click Create at the top.
Argo CD now has a ghost-demo tile on the main application page.
We should see our application in a syncing state initially, and the application will begin deployment. The status of most objects should turn green, with the Ghost and MySQL pods taking the longest. After about 5 minutes, all the Ghost Kubernetes resources should be in a healthy state. To view the application in detail, click the tile. In the top half of the application details tree, you’ll see the resources of the deployed Ghost application:
When scrolling down, you can see that the Trident protect CRs (the application, the four schedules, and the execution hooks) were also deployed by the Argo CD sync. In the following screenshot, we already see a snapshot and a backup, both triggered by the hourly protection schedule:
Clicking the backup tile, for example, shows the details of this specific Trident protect backup run:
To create our second Ghost application demo, in which Trident protect protects only the PVCs, we proceed as before to create the Argo CD application ghost-demo-2. Copy the following application definition from ghost/argo-app-definition-dataonly.yaml and paste it into the Argo CD browser text field:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: ghost-demo-2
spec:
  destination:
    name: ''
    namespace: ghost-2
    server: 'https://kubernetes.default.svc'
  source:
    path: ghost-dataonly
    repoURL: 'https://github.com/patric0303/argocd-tridentprotect-v1'
    targetRevision: HEAD
  project: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: false
Again, modify the server and repoURL values as needed and create the application. Argo CD will start deploying the ghost-demo-2 application, and we’ll have a second application tile in the UI:
After a few minutes, the ghost-demo-2 application will also be fully deployed and protected, and the Argo CD app health status will switch to Healthy. Remember that Trident protect will back up only the PVCs of ghost-demo-2, because we included only resources with the label volume=persistent in the Trident protect application definition:
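You can also confirm from the CLI that only the labeled PVCs are selected, for example by listing the resources in the ghost-2 namespace that carry the label:

$ kubectl -n ghost-2 get pvc -l volume=persistent

This should return only the two Ghost PVCs (ghost and mysql-pv-claim), while all other resources in the namespace remain outside the Trident protect application.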
Imagine that our protection requirements for the ghost-demo application changed and we need to retain the last three instead of two weekly backups. Simply open ghost/tp-protections.yaml with your favorite text editor and change the backupRetention value of the ghost-weekly schedule from 2 to 3. When you’ve finished, run the following command and ensure that your output matches.
$ git status
On branch main
Your branch is up to date with 'origin/main'.

Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git restore <file>..." to discard changes in working directory)
        modified:   ghost/tp-protections.yaml

no changes added to commit (use "git add" and/or "git commit -a")
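If you want to double-check the edit before committing, a quick git diff should show only the single retention change (output abbreviated here):

$ git diff ghost/tp-protections.yaml
...
-  backupRetention: "2"
+  backupRetention: "3"
...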
Run the next two commands to push our changes to our Git repository:
$ git commit -a -m 'Updating weekly backupRetention from 2 to 3'
$ git push
In your browser window, monitor the Argo CD application page for any changes. By default, Argo CD polls the Git repository every 3 minutes, so that’s the maximum amount of time you’ll need to wait. The current sync status will briefly switch to OutOfSync, Argo CD will sync the changes to the ghost-demo application, and we’ll see the changes reflected in the last sync status.
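If you don’t want to wait for the polling interval, you can also trigger the synchronization immediately, either with the Sync button in the UI or from the Argo CD CLI:

$ argocd app sync ghost-demo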
We quickly confirm the successful change of the backupRetention value for the weekly backup by using kubectl:
$ kubectl -n ghost get schedule ghost-weekly -o yaml | yq '.spec.backupRetention'
3
Later in this blog, we’ll restore our application after it has been accidentally deleted. To make it obvious that Trident protect has successfully restored our application rather than simply redeploying it, let’s modify our website.
First, gather the external IP of the Ghost load balancer service by using kubectl.
$ kubectl -n ghost get svc
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
ghost LoadBalancer 172.16.108.253 50.85.66.155 80:31977/TCP 23h
ghost-mysql ClusterIP 172.16.157.115 <none> 3306/TCP 23h
Depending on the DNS option you used earlier, either update the external IP value (50.85.66.155 here, but yours will be different) to your own domain, or add an entry to ghost-demo.ghost.pu-store.de in your local hosts file.
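If you chose the local hosts file option, the entry would look something like this (using your own external IP):

50.85.66.155   ghost-demo.ghost.pu-store.de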
When the DNS change is complete, navigate to the site, where you should be presented with a default User’s Blog site.
Next, we’ll need to log in with the administrator credentials we defined earlier, which we can do by appending /ghost to our URL.
The email address should be user@example.com (set by the GHOST_EMAIL environment variable in the frontend.yaml file), and the password is the GHOST_PASSWORD value set earlier in the secret creation section. Click Sign In.
If the login is successful, you’ll see your site dashboard. Let’s edit the default “Coming soon” post by clicking Posts in the left column and then Coming Soon.
Change the “Coming soon” title to something more obvious—for example, “Argo CD – Trident protect Demo”—and optionally change the image. Then click Update.
Navigate back to the base URL and verify that the previous “Coming soon” post has been updated with our changes.
Now repeat the same steps for the data-only demo app ghost-demo-2 (URL ghost-demo-2.ghost.pu-store.de) and make the post look something like this:
Now that both our websites have been updated with an obvious change, let’s back each up with an on-demand backup to make sure we catch the correct state when doing some destructive restore tests later. Let’s use the tridentctl-protect CLI to first confirm the two Ghost application names and their namespaces:
$ tridentctl-protect get app -A
+-----------+----------------+------------+-------+-------+
| NAMESPACE | NAME | NAMESPACES | STATE | AGE |
+-----------+----------------+------------+-------+-------+
| ghost-2 | ghost-dataonly | ghost-2 | Ready | 21h2m |
| ghost | ghost | ghost | Ready | 1d |
+-----------+----------------+------------+-------+-------+
To create a backup ghost-backup-modified of the application ghost and follow its progress, run the following commands:
$ tridentctl-protect create backup ghost-backup-modified --app ghost --appvault argocdtest -n ghost
Backup ghost-backup-modified created.
$ kubectl -n ghost get backup ghost-backup-modified -w
NAME STATE ERROR AGE
ghost-backup-modified Running 41s
ghost-backup-modified Running 10m
ghost-backup-modified Running 10m
ghost-backup-modified Running 10m
ghost-backup-modified Running 10m
ghost-backup-modified Running 10m
ghost-backup-modified Running 10m
ghost-backup-modified Running 10m
ghost-backup-modified Running 10m
ghost-backup-modified Running 10m
ghost-backup-modified Running 11m
ghost-backup-modified Running 11m
ghost-backup-modified Running 11m
ghost-backup-modified Running 11m
ghost-backup-modified Running 11m
ghost-backup-modified Running 11m
ghost-backup-modified Running 11m
ghost-backup-modified Running 11m
ghost-backup-modified Running 11m
ghost-backup-modified Running 11m
ghost-backup-modified Running 11m
ghost-backup-modified Completed 11m
In the same way, create a backup ghost-dataonly-backup-modified for the data-only app ghost-dataonly with the modified blog post content in the ghost-2 namespace:
$ tridentctl-protect create backup ghost-dataonly-backup-modified --app ghost-dataonly --appvault argocdtest -n ghost-2
Backup ghost-dataonly-backup-modified created.
Our two Kubernetes applications have now been successfully deployed, managed, and protected, all through GitOps principles. Let’s continue to see how restoration and cloning work if a disaster or planned migration occurs.
It’s unfortunately too common for Kubernetes administrators to accidentally delete a namespace, often through a mix-up in contexts (for example, see these articles on Stack Overflow, Reddit, and Medium). With stateless applications and GitOps principles, it’s a simple process to restore your applications, potentially in a fully automated fashion. In the past, stateful applications have been much more complicated (or impossible) to restore; however, Trident protect makes stateful restoration almost as simple as stateless restoration.
In the following sections, we’ll show you possible options for recovering from application failure with Argo CD and NetApp Trident protect, for both the scenario in which Trident protect protects the full application and the scenario in which it protects only the persistent part (PVC). Choose whatever option suits your needs best.
Let’s start by “accidentally” deleting our ghost namespace hosting the Ghost application:
$ kubectl delete namespace ghost
namespace "ghost" deleted
It might take a couple of minutes for the prompt to return, because every Kubernetes resource in the namespace is deleted first. Head over to the Argo CD UI, where we’ll see the application in a Missing and OutOfSync state.
Because Trident protect stores its CRs in the application’s namespace, the Trident protect application and backup CRs are no longer available after the namespace is deleted. But because we kept the reclaimPolicy setting for the Trident protect backups at its default value of Retain, the backups weren’t deleted from the object storage when the namespace was deleted. We list the available backups with this tridentctl-protect command:
$ tridentctl-protect get appvaultcontent argocdtest --app ghost --show-paths
+----------------------+-------+--------+-----------------------------------------------+---------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| CLUSTER | APP | TYPE | NAME | TIMESTAMP | PATH |
+----------------------+-------+--------+-----------------------------------------------+---------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
| aks-pu-ghost-cluster | ghost | backup | daily-ca5b8-20250219010000 | 2025-02-19 01:07:58 (UTC) | ghost_1b1ba069-8073-4094-89c4-2ea2204c2c8e/backups/daily-ca5b8-20250219010000_38f5f15f-436a-492a-82df-09b8893527d9 |
| aks-pu-ghost-cluster | ghost | backup | ghost-backup-modified | 2025-02-18 15:04:34 (UTC) | ghost_1b1ba069-8073-4094-89c4-2ea2204c2c8e/backups/ghost-backup-modified_32e54352-cccb-4f8d-a6a4-e25c7bf2d4e1 |
| aks-pu-ghost-cluster | ghost | backup | hourly-59f31-20250219075000 | 2025-02-19 07:58:25 (UTC) | ghost_1b1ba069-8073-4094-89c4-2ea2204c2c8e/backups/hourly-59f31-20250219075000_d1e80014-faf6-4b82-8642-1a646b424900 |
| aks-pu-ghost-cluster | ghost | backup | hourly-59f31-20250219085000 | 0001-01-01 00:00:00 (UTC) | ghost_1b1ba069-8073-4094-89c4-2ea2204c2c8e/backups/hourly-59f31-20250219085000_9f33bb76-124b-4648-a1c5-793487289360 |
+----------------------+-------+--------+-----------------------------------------------+---------------------------+---------------------------------------------------------------------------------------------------------------------------------------+
Take note of the PATH value of our ghost-backup-modified backup, because we’ll need it for the restores later. Ideally, store it in a variable:
$ BACKUPPATH=ghost_1b1ba069-8073-4094-89c4-2ea2204c2c8e/backups/ghost-backup-modified_32e54352-cccb-4f8d-a6a4-e25c7bf2d4e1
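If you prefer not to copy the path by hand, you can also extract it from the CLI output with a small shell pipeline like the following (a sketch that assumes the table layout shown above; adjust the field number if your output differs):

$ BACKUPPATH=$(tridentctl-protect get appvaultcontent argocdtest --app ghost --show-paths | grep ghost-backup-modified | awk -F'|' '{print $7}' | xargs)
$ echo $BACKUPPATH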
Now we’ll explore several options for restoring the Ghost application from Argo CD and Trident protect.
The first option for restoring your application from accidental namespace deletion is to initiate a partial synchronization from Argo CD, excluding the PVCs, and then restore from the Trident protect backup. In the Argo CD UI, navigate to the failed application and click the Sync button. Then deselect the PVCs from the resources to synchronize, and make sure to select the Auto-Create Namespace option to let Argo CD re-create the deleted ghost namespace.
After you click Sync, Argo CD will synchronize the resources:
$ kubectl get all,pvc,volumesnapshot,secrets -n ghost
NAME READY STATUS RESTARTS AGE
pod/ghost-fb7bd4f5c-q99rd 0/1 Pending 0 3m34s
pod/ghost-mysql-69546fc5b5-v48hr 0/1 Pending 0 3m34s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/ghost LoadBalancer 172.16.180.202 9.163.141.51 80:32051/TCP 3m34s
service/ghost-mysql ClusterIP 172.16.132.24 <none> 3306/TCP 3m34s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/ghost 0/1 1 0 3m34s
deployment.apps/ghost-mysql 0/1 1 0 3m34s
NAME DESIRED CURRENT READY AGE
replicaset.apps/ghost-fb7bd4f5c 1 1 0 3m34s
replicaset.apps/ghost-mysql-69546fc5b5 1 1 0 3m34s
Now we start a BackupRestore from the tridentctl-protect CLI into the original application namespace ghost and follow its progress:
$ tridentctl-protect create backuprestore --appvault argocdtest --path $BACKUPPATH --namespace-mapping ghost:ghost -n ghost
BackupRestore "ghost-1xuoes" created.
$ kubectl -n ghost get backuprestore ghost-1xuoes -w
NAME STATE ERROR AGE
ghost-1xuoes Running 12s
ghost-1xuoes Running 3m22s
ghost-1xuoes Running 3m22s
ghost-1xuoes Running 3m24s
ghost-1xuoes Running 3m25s
ghost-1xuoes Running 3m25s
ghost-1xuoes Running 3m25s
ghost-1xuoes Running 3m25s
ghost-1xuoes Running 3m33s
ghost-1xuoes Running 3m33s
ghost-1xuoes Running 3m33s
ghost-1xuoes Running 3m33s
ghost-1xuoes Running 3m37s
ghost-1xuoes Running 3m37s
ghost-1xuoes Running 3m37s
ghost-1xuoes Running 3m37s
ghost-1xuoes Running 3m38s
ghost-1xuoes Running 3m38s
ghost-1xuoes Running 3m38s
ghost-1xuoes Completed 3m38s
Trident protect only restores resources that don’t yet exist in the target namespace (this will become a configurable option in future versions), so only the PVCs and the secrets will be restored, and the Ghost application will come up after a few minutes.
$ kubectl get all,pvc,volumesnapshot,secrets -n ghost
NAME READY STATUS RESTARTS AGE
pod/ghost-fb7bd4f5c-q99rd 1/1 Running 3 (32s ago) 16m
pod/ghost-mysql-69546fc5b5-v48hr 1/1 Running 0 16m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/ghost LoadBalancer 172.16.180.202 9.163.141.51 80:32051/TCP 16m
service/ghost-mysql ClusterIP 172.16.132.24 <none> 3306/TCP 16m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/ghost 1/1 1 1 16m
deployment.apps/ghost-mysql 1/1 1 1 16m
NAME DESIRED CURRENT READY AGE
replicaset.apps/ghost-fb7bd4f5c 1 1 1 16m
replicaset.apps/ghost-mysql-69546fc5b5 1 1 1 16m
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
persistentvolumeclaim/ghost Bound pvc-fdf7bda3-c354-4723-b2dd-55f5164662df 100Gi RWX azure-netapp-files-standard <unset> 4m7s
persistentvolumeclaim/mysql-pv-claim Bound pvc-b5e5bd86-40d4-43c5-8376-027e244f1bc4 100Gi RWO azure-netapp-files-standard <unset> 4m7s
NAME TYPE DATA AGE
secret/ghost Opaque 1 4m1s
secret/ghost-mysql Opaque 2 4m1s
After updating our DNS entry with the new external IP, we can confirm that we restored to the correct (modified) version of the blog post:
It’s worth noting that, because the Trident protect execution hooks and protection schedules are part of the Argo CD application definition, the Argo CD synchronization also restored them, so Trident protect will resume the scheduled application-consistent snapshots and backups for the restored application.
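We can quickly verify this with the tridentctl-protect CLI; both commands should list the same schedules and execution hooks that we defined in the Git repository:

$ tridentctl-protect get schedule -n ghost
$ tridentctl-protect get exechook -n ghost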
In a production environment, you would most likely not restore the secrets from the backup (for example, by using exclude filters in the restore command); instead, you would use your external secrets manager.
Instead of doing partial Argo CD synchronization, we can also do a full one, followed by a Trident protect BackupInplaceRestore. After again deleting the ghost namespace, navigate to the ghost-demo app in the Argo CD UI. Click Sync, make sure to select the Auto-Create Namespace option to let Argo CD re-create the deleted ghost namespace, and start the synchronization.
Because the secrets aren’t available yet, the Ghost and MySQL pods won’t start, and although the PVCs exist now, they don’t contain the correct content.
$ kubectl get all,pvc,volumesnapshot,secrets -n ghost
NAME READY STATUS RESTARTS AGE
pod/ghost-fb7bd4f5c-fbfxt 0/1 CreateContainerConfigError 0 44s
pod/ghost-mysql-69546fc5b5-khhmm 0/1 CreateContainerConfigError 0 44s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/ghost LoadBalancer 172.16.45.55 50.85.122.186 80:31003/TCP 45s
service/ghost-mysql ClusterIP 172.16.117.76 <none> 3306/TCP 45s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/ghost 0/1 1 0 44s
deployment.apps/ghost-mysql 0/1 1 0 44s
NAME DESIRED CURRENT READY AGE
replicaset.apps/ghost-fb7bd4f5c 1 1 0 44s
replicaset.apps/ghost-mysql-69546fc5b5 1 1 0 44s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
persistentvolumeclaim/ghost Bound pvc-8af53e58-ee54-455c-aa0b-5e8c40e773aa 100Gi RWX azure-netapp-files-standard <unset> 45s
persistentvolumeclaim/mysql-pv-claim Bound pvc-52f02953-47c2-4df1-86ae-8d683f3a0350 100Gi RWO azure-netapp-files-standard <unset> 45s
We see that the synchronization also restored the Trident protect CRs, including the application CR:
$ tridentctl-protect get application -n ghost
+-------+------------+-------+-------+
| NAME | NAMESPACES | STATE | AGE |
+-------+------------+-------+-------+
| ghost | ghost | Ready | 3m35s |
+-------+------------+-------+-------+
With the application CR for the ghost application available in the ghost namespace, we can now also use Trident protect’s BackupInplaceRestore mechanism and restore the complete Trident protect backup into the same namespace:
$ tridentctl-protect create backupinplacerestore --appvault argocdtest --path $BACKUPPATH -n ghost
BackupInplaceRestore "ghost-d4ump2" created.
$ kubectl -n ghost get BackupInplaceRestore ghost-d4ump2 -w
NAME STATE ERROR AGE
ghost-d4ump2 Running 11s
ghost-d4ump2 Running 14s
ghost-d4ump2 Running 14s
ghost-d4ump2 Running 23s
ghost-d4ump2 Running 23s
ghost-d4ump2 Running 2m52s
ghost-d4ump2 Running 2m52s
ghost-d4ump2 Running 2m52s
ghost-d4ump2 Running 3m1s
ghost-d4ump2 Running 3m1s
ghost-d4ump2 Running 3m1s
ghost-d4ump2 Running 3m9s
ghost-d4ump2 Running 3m9s
ghost-d4ump2 Running 3m9s
ghost-d4ump2 Running 3m9s
ghost-d4ump2 Running 3m9s
ghost-d4ump2 Running 3m9s
ghost-d4ump2 Running 3m9s
ghost-d4ump2 Running 3m9s
ghost-d4ump2 Running 3m9s
ghost-d4ump2 Running 3m9s
ghost-d4ump2 Running 3m9s
ghost-d4ump2 Running 3m9s
ghost-d4ump2 Completed 3m9s
The above command restores the complete backup content into the ghost namespace and replaces already-existing resources and the content of the persistent volumes.
$ kubectl get all,pvc,volumesnapshot,secrets -n ghost
NAME READY STATUS RESTARTS AGE
pod/ghost-fb7bd4f5c-5n7mz 1/1 Running 4 (4m26s ago) 9m9s
pod/ghost-mysql-69546fc5b5-g8znx 1/1 Running 0 9m9s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/ghost LoadBalancer 172.16.28.31 50.85.122.238 80:32045/TCP 9m9s
service/ghost-mysql ClusterIP 172.16.189.37 <none> 3306/TCP 9m9s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/ghost 1/1 1 1 9m9s
deployment.apps/ghost-mysql 1/1 1 1 9m9s
NAME DESIRED CURRENT READY AGE
replicaset.apps/ghost-fb7bd4f5c 1 1 1 9m9s
replicaset.apps/ghost-mysql-69546fc5b5 1 1 1 9m9s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
persistentvolumeclaim/ghost Bound pvc-8af53e58-ee54-455c-aa0b-5e8c40e773aa 100Gi RWX azure-netapp-files-standard <unset> 9m15s
persistentvolumeclaim/mysql-pv-claim Bound pvc-52f02953-47c2-4df1-86ae-8d683f3a0350 100Gi RWO azure-netapp-files-standard <unset> 9m15s
NAME TYPE DATA AGE
secret/ghost Opaque 1 9m10s
secret/ghost-mysql Opaque 2 9m10s
After a few minutes, the application will come up and appear as Healthy and Synced in Argo CD.
In a production environment, you would most likely not restore the secrets from the backup (for example, by using exclude filters in the restore command); instead, you would use your external secrets manager.
After updating our DNS entry with the new external IP again, we can confirm that we restored to the correct (modified) version of the blog post:
Instead of starting the recovery process with an Argo CD synchronization, we can also start with restoring from the Trident protect backup. Again, delete the ghost namespace to start the failure scenario:
$ kubectl delete ns ghost
namespace "ghost" deleted
Now use tridentctl-protect create backuprestore to restore from the backup into the ghost namespace. The command will also create the namespace.
$ tridentctl-protect create backuprestore --appvault argocdtest --path $BACKUPPATH --namespace-mapping ghost:ghost -n ghost
BackupRestore "ghost-4oatex" created.
We follow the BackupRestore progress, and after a few minutes the restore finishes.
$ kubectl -n ghost get BackupRestore ghost-4oatex -w
NAME STATE ERROR AGE
ghost-4oatex Running 11s
ghost-4oatex Running 3m26s
ghost-4oatex Running 3m26s
ghost-4oatex Running 3m32s
ghost-4oatex Running 3m32s
ghost-4oatex Running 3m32s
ghost-4oatex Running 3m32s
ghost-4oatex Running 3m32s
ghost-4oatex Running 3m36s
ghost-4oatex Running 3m36s
ghost-4oatex Running 3m37s
ghost-4oatex Running 3m37s
ghost-4oatex Running 3m42s
ghost-4oatex Running 3m42s
ghost-4oatex Running 3m42s
ghost-4oatex Running 3m42s
ghost-4oatex Running 3m42s
ghost-4oatex Running 3m42s
ghost-4oatex Running 3m42s
ghost-4oatex Completed 3m42s
The Ghost application will come up after a few more minutes, because the backuprestore command also restored the secrets:
$ kubectl get all,pvc,volumesnapshot,secrets -n ghost
NAME READY STATUS RESTARTS AGE
pod/ghost-fb7bd4f5c-czk67 1/1 Running 4 (3m13s ago) 8m4s
pod/ghost-mysql-69546fc5b5-6r6bf 1/1 Running 0 8m4s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/ghost LoadBalancer 172.16.174.42 9.163.185.52 80:32121/TCP 8m4s
service/ghost-mysql ClusterIP 172.16.69.221 <none> 3306/TCP 8m4s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/ghost 1/1 1 1 8m4s
deployment.apps/ghost-mysql 1/1 1 1 8m4s
NAME DESIRED CURRENT READY AGE
replicaset.apps/ghost-fb7bd4f5c 1 1 1 8m4s
replicaset.apps/ghost-mysql-69546fc5b5 1 1 1 8m4s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
persistentvolumeclaim/ghost Bound pvc-1b257e2b-57b8-4227-8dc8-c0b4aa113b0e 100Gi RWX azure-netapp-files-standard <unset> 8m6s
persistentvolumeclaim/mysql-pv-claim Bound pvc-63336661-0e7d-440b-a3d5-534ba8559e8a 100Gi RWO azure-netapp-files-standard <unset> 8m6s
NAME TYPE DATA AGE
secret/ghost Opaque 1 8m4s
secret/ghost-mysql Opaque 2 8m4s
The execution hooks were also restored by Trident protect:
$ tridentctl-protect get exechook -n ghost
+---------------------+-------+----------------------+----------+-------+---------+-------+-------+
| NAME | APP | MATCH | ACTION | STAGE | ENABLED | AGE | ERROR |
+---------------------+-------+----------------------+----------+-------+---------+-------+-------+
| post-snapshot-mysql | ghost | containerImage:mysql | Snapshot | Post | true | 9m31s | |
| pre-snapshot-mysql | ghost | containerImage:mysql | Snapshot | Pre | true | 9m31s | |
+---------------------+-------+----------------------+----------+-------+---------+-------+-------+
Although the app health in Argo CD now is Healthy, the sync status is still OutOfSync. This is because the Trident protect restore doesn’t restore the protection schedules, which are part of the Argo CD application definition.
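You can see exactly which resources are out of sync either in the UI or from the Argo CD CLI:

$ argocd app get ghost-demo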
We can easily fix this by starting a partial synchronization from Argo CD. Select Apply Out of Sync Only in the synchronization details and start the synchronization.
This will re-create the protection schedules, and the Argo CD sync status will change to Synced.
$ tridentctl-protect get schedule -n ghost
+---------------+-------+------------------------------------+---------+-------+------+-------+
| NAME | APP | SCHEDULE | ENABLED | STATE | AGE | ERROR |
+---------------+-------+------------------------------------+---------+-------+------+-------+
| ghost-daily | ghost | Daily:hour=1,min=0 | true | | 9m6s | |
| ghost-hourly | ghost | Hourly:min=50 | true | | 9m6s | |
| ghost-monthly | ghost | Monthly:dayOfMonth=1,hour=2,min=20 | true | | 9m6s | |
| ghost-weekly | ghost | Weekly:dayOfWeek=7,hour=2,min=0 | true | | 9m6s | |
+---------------+-------+------------------------------------+---------+-------+------+-------+
After updating our DNS entry with the new EXTERNAL-IP again, we can confirm that we restored to the correct (modified) version of the blog post and the Ghost application is functioning correctly:
Again, in a production environment you would most likely not restore the secrets from the backup (for example, by using exclude filters in the restore command); instead, you would use your external secrets manager.
When you choose to have NetApp Trident protect back up only the PVCs of the application, the recovery options are slightly different. Let’s explore them in more detail here. Also keep in mind that with this approach, Trident protect has no way of quiescing the application before creating the snapshots and backups, so the backups will only be crash-consistent.
We start again by “accidentally” deleting the application namespace (now ghost-2):
$ kubectl delete ns ghost-2
namespace "ghost-2" deleted
In Argo CD, the corresponding application ghost-demo-2 quickly goes into the Missing and OutOfSync state.
As before, we list the available backups of the Trident protect application ghost-dataonly with tridentctl-protect get appvaultcontent and store the path value of the ghost-dataonly-backup-modified backup in the variable BACKUPPATH:
$ tridentctl-protect get appvaultcontent argocdtest --app ghost-dataonly --show-paths
+----------------------+----------------+--------+--------------------------------+---------------------------+---------------------------------------------------------------------------------------------------------------------------------+
| CLUSTER | APP | TYPE | NAME | TIMESTAMP | PATH |
+----------------------+----------------+--------+--------------------------------+---------------------------+---------------------------------------------------------------------------------------------------------------------------------+
| aks-pu-ghost-cluster | ghost-dataonly | backup | daily-fc0c8-20250219010000 | 2025-02-19 01:01:52 (UTC) | ghost-dataonly_9cd10ae4-8c65-4179-8452-f4ac7a1a9862/backups/daily-fc0c8-20250219010000_3960331d-e39a-448f-a020-7a09a57152eb |
| aks-pu-ghost-cluster | ghost-dataonly | backup | ghost-dataonly-backup-modified | 2025-02-18 15:11:21 (UTC) | ghost-dataonly_9cd10ae4-8c65-4179-8452-f4ac7a1a9862/backups/ghost-dataonly-backup-modified_1c5f2f68-b905-48d8-92e2-7b3c206aba6a |
| aks-pu-ghost-cluster | ghost-dataonly | backup | hourly-d91ae-20250219165000 | 2025-02-19 16:51:49 (UTC) | ghost-dataonly_9cd10ae4-8c65-4179-8452-f4ac7a1a9862/backups/hourly-d91ae-20250219165000_61a6968f-cfb4-4aee-a92d-9d981a3213b5 |
+----------------------+----------------+--------+--------------------------------+---------------------------+---------------------------------------------------------------------------------------------------------------------------------+
$ BACKUPPATH=ghost-dataonly_9cd10ae4-8c65-4179-8452-f4ac7a1a9862/backups/ghost-dataonly-backup-modified_1c5f2f68-b905-48d8-92e2-7b3c206aba6a
Now we’re ready to try out possible approaches to restore the application.
The first restore option is again to start with a partial synchronization from Argo CD, followed by a Trident protect BackupRestore of the PVCs. In the Argo CD UI, navigate to the failed application and click Sync. Then deselect the PVCs from the resources to synchronize, and make sure to select the Auto-Create Namespace option to let Argo CD re-create the deleted ghost-2 namespace.
After you click Sync, Argo CD synchronizes the resources.
Now we can start the restore of the PVCs into the ghost-2 namespace using the tridentctl-protect create backuprestore command and follow the restore progress, which will take a few minutes.
$ tridentctl-protect create backuprestore --appvault argocdtest --path $BACKUPPATH --namespace-mapping ghost-2:ghost-2 -n ghost-2
BackupRestore "ghost-dataonly-b7eywr" created.
$ kubectl -n ghost-2 get BackupRestore ghost-dataonly-b7eywr -w
NAME STATE ERROR AGE
ghost-dataonly-b7eywr Running 13s
ghost-dataonly-b7eywr Running 3m5s
ghost-dataonly-b7eywr Running 3m5s
ghost-dataonly-b7eywr Running 3m18s
ghost-dataonly-b7eywr Running 3m19s
ghost-dataonly-b7eywr Running 3m19s
ghost-dataonly-b7eywr Running 3m19s
ghost-dataonly-b7eywr Running 3m19s
ghost-dataonly-b7eywr Running 3m22s
ghost-dataonly-b7eywr Running 3m22s
ghost-dataonly-b7eywr Running 3m22s
ghost-dataonly-b7eywr Running 3m22s
ghost-dataonly-b7eywr Running 3m27s
ghost-dataonly-b7eywr Running 3m27s
ghost-dataonly-b7eywr Running 3m28s
ghost-dataonly-b7eywr Running 3m28s
ghost-dataonly-b7eywr Running 3m28s
ghost-dataonly-b7eywr Running 3m28s
ghost-dataonly-b7eywr Running 3m28s
ghost-dataonly-b7eywr Completed 3m28s
This time, because the backup contains only the PVCs, the secrets weren’t restored, and the application pods can’t start yet.
$ kubectl get all,pvc,volumesnapshot,secrets -n ghost-2
NAME READY STATUS RESTARTS AGE
pod/ghost-6f847c7678-dszrb 0/1 CreateContainerConfigError 0 12m
pod/ghost-mysql-69546fc5b5-8mkpx 0/1 CreateContainerConfigError 0 12m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/ghost LoadBalancer 172.16.219.88 74.178.195.252 80:30124/TCP 12m
service/ghost-mysql ClusterIP 172.16.120.184 <none> 3306/TCP 12m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/ghost 0/1 1 0 12m
deployment.apps/ghost-mysql 0/1 1 0 12m
NAME DESIRED CURRENT READY AGE
replicaset.apps/ghost-6f847c7678 1 1 0 12m
replicaset.apps/ghost-mysql-69546fc5b5 1 1 0 12m
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
persistentvolumeclaim/ghost Bound pvc-333c60eb-6752-449b-92a7-aa5a1387a72a 100Gi RWX azure-netapp-files-standard <unset> 6m56s
persistentvolumeclaim/mysql-pv-claim Bound pvc-2b2c9d3c-81bd-41a4-94ea-40dcdd8892c0 100Gi RWO azure-netapp-files-standard <unset> 6m55s
The final step is therefore to re-create the secrets. In real life, this would likely be handled by your external secret manager.
$ kubectl -n ghost-2 apply -f secrets.yaml
secret/ghost-mysql created
secret/ghost created
Now the Ghost pods can start, and the application status in Argo CD will switch to Healthy and Synced.
$ kubectl get all,pvc,volumesnapshot,secrets -n ghost-2
NAME READY STATUS RESTARTS AGE
pod/ghost-6f847c7678-dszrb 1/1 Running 0 14m
pod/ghost-mysql-69546fc5b5-8mkpx 1/1 Running 0 14m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/ghost LoadBalancer 172.16.219.88 74.178.195.252 80:30124/TCP 14m
service/ghost-mysql ClusterIP 172.16.120.184 <none> 3306/TCP 14m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/ghost 1/1 1 1 14m
deployment.apps/ghost-mysql 1/1 1 1 14m
NAME DESIRED CURRENT READY AGE
replicaset.apps/ghost-6f847c7678 1 1 1 14m
replicaset.apps/ghost-mysql-69546fc5b5 1 1 1 14m
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
persistentvolumeclaim/ghost Bound pvc-333c60eb-6752-449b-92a7-aa5a1387a72a 100Gi RWX azure-netapp-files-standard <unset> 8m52s
persistentvolumeclaim/mysql-pv-claim Bound pvc-2b2c9d3c-81bd-41a4-94ea-40dcdd8892c0 100Gi RWO azure-netapp-files-standard <unset> 8m51s
NAME TYPE DATA AGE
secret/ghost Opaque 1 48s
secret/ghost-mysql Opaque 2 49s
Finally, update the DNS entry for ghost-demo-2.ghost.pu-store.de (or the address you used) with the new external IP and confirm that we restored to the correct (modified) version of the blog post:
The second restore option is to start with a Trident protect restore of the PVCs. This tridentctl-protect create backuprestore command will create the ghost-2 namespace and restore the PVCs into it:
$ tridentctl-protect create backuprestore --appvault argocdtest --path $BACKUPPATH --namespace-mapping ghost-2:ghost-2 -n ghost-2
BackupRestore "ghost-dataonly-ji6oid" created.
$ kubectl -n ghost-2 get BackupRestore ghost-dataonly-ji6oid -w
NAME STATE ERROR AGE
ghost-dataonly-ji6oid Running 12s
ghost-dataonly-ji6oid Running 3m44s
ghost-dataonly-ji6oid Running 3m44s
ghost-dataonly-ji6oid Running 3m54s
ghost-dataonly-ji6oid Running 3m54s
ghost-dataonly-ji6oid Running 3m54s
ghost-dataonly-ji6oid Running 3m54s
ghost-dataonly-ji6oid Running 3m54s
ghost-dataonly-ji6oid Running 3m58s
ghost-dataonly-ji6oid Running 3m58s
ghost-dataonly-ji6oid Running 3m58s
ghost-dataonly-ji6oid Running 3m58s
ghost-dataonly-ji6oid Running 4m4s
ghost-dataonly-ji6oid Running 4m4s
ghost-dataonly-ji6oid Running 4m4s
ghost-dataonly-ji6oid Running 4m4s
ghost-dataonly-ji6oid Running 4m4s
ghost-dataonly-ji6oid Running 4m4s
ghost-dataonly-ji6oid Running 4m4s
ghost-dataonly-ji6oid Completed 4m4s
$ kubectl get all,pvc,volumesnapshot,secrets -n ghost-2
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
persistentvolumeclaim/ghost Bound pvc-5c664360-9334-4397-9273-649d08a77edf 100Gi RWX azure-netapp-files-standard <unset> 2m53s
persistentvolumeclaim/mysql-pv-claim Bound pvc-4f75acd4-dd33-4abe-b237-c7fb382a166d 100Gi RWO azure-netapp-files-standard <unset> 2m53s
Now we start a partial synchronization from Argo CD. Select Apply Out of Sync Only in the synchronization details and start the synchronization. This will re-create the missing Ghost and Trident protect resources, including protection schedules.
$ kubectl get all,pvc,volumesnapshot,secrets -n ghost-2
NAME READY STATUS RESTARTS AGE
pod/ghost-6f847c7678-flg8f 0/1 CreateContainerConfigError 0 3m17s
pod/ghost-mysql-69546fc5b5-4gq59 0/1 CreateContainerConfigError 0 3m17s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/ghost LoadBalancer 172.16.5.181 132.164.25.168 80:31429/TCP 3m17s
service/ghost-mysql ClusterIP 172.16.4.210 <none> 3306/TCP 3m17s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/ghost 0/1 1 0 3m17s
deployment.apps/ghost-mysql 0/1 1 0 3m17s
NAME DESIRED CURRENT READY AGE
replicaset.apps/ghost-6f847c7678 1 1 0 3m17s
replicaset.apps/ghost-mysql-69546fc5b5 1 1 0 3m17s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
persistentvolumeclaim/ghost Bound pvc-5c664360-9334-4397-9273-649d08a77edf 100Gi RWX azure-netapp-files-standard <unset> 8m10s
persistentvolumeclaim/mysql-pv-claim Bound pvc-4f75acd4-dd33-4abe-b237-c7fb382a166d 100Gi RWO azure-netapp-files-standard <unset> 8m10s
Again, the Ghost pods can’t start yet because the secrets are still missing, so the final step is to re-create the secrets. In real life, this would likely be handled by your external secret manager.
$ kubectl -n ghost-2 apply -f ~/secrets.yaml
secret/ghost-mysql created
secret/ghost created
Now the Ghost pods come up and Argo CD recognizes the application as Healthy.
$ kubectl get all,pvc,volumesnapshot,secrets -n ghost-2
NAME READY STATUS RESTARTS AGE
pod/ghost-6f847c7678-flg8f 1/1 Running 0 4m27s
pod/ghost-mysql-69546fc5b5-4gq59 1/1 Running 0 4m27s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/ghost LoadBalancer 172.16.5.181 132.164.25.168 80:31429/TCP 4m27s
service/ghost-mysql ClusterIP 172.16.4.210 <none> 3306/TCP 4m27s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/ghost 1/1 1 1 4m27s
deployment.apps/ghost-mysql 1/1 1 1 4m27s
NAME DESIRED CURRENT READY AGE
replicaset.apps/ghost-6f847c7678 1 1 1 4m27s
replicaset.apps/ghost-mysql-69546fc5b5 1 1 1 4m27s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
persistentvolumeclaim/ghost Bound pvc-5c664360-9334-4397-9273-649d08a77edf 100Gi RWX azure-netapp-files-standard <unset> 9m20s
persistentvolumeclaim/mysql-pv-claim Bound pvc-4f75acd4-dd33-4abe-b237-c7fb382a166d 100Gi RWO azure-netapp-files-standard <unset> 9m20s
NAME TYPE DATA AGE
secret/ghost Opaque 1 45s
secret/ghost-mysql Opaque 2 46s
Finally, update the DNS entry for ghost-demo-2.ghost.pu-store.de (or the address you used) with the new external IP and confirm that we restored to the correct (modified) version of the blog post:
Whether a natural disaster hits, or business dictates that you migrate your application to a different geography or cloud provider, application migration with Trident protect and Argo CD is also a simple process. In essence, you can follow the same steps to recover your application from a cluster loss and restore it to a different cluster.
All the Trident protect restore commands shown in the previous sections work the same on a different cluster. Obviously, the target cluster must have Trident protect installed, access to the object storage hosting the backups, and a corresponding AppVault CR configured.
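For reference, an AppVault CR for an S3-compatible bucket looks roughly like the following sketch (the provider type, configuration keys, and credential secret are placeholders here and depend on your object storage; check the Trident protect documentation for the exact syntax for your provider):

apiVersion: protect.trident.netapp.io/v1
kind: AppVault
metadata:
  name: argocdtest
  namespace: trident-protect
spec:
  providerType: GenericS3        # placeholder; use the provider type matching your object storage
  providerConfig:
    s3:
      bucketName: <bucket-name>
      endpoint: <s3-endpoint>
  providerCredentials:
    accessKeyID:
      valueFromSecret:
        name: appvault-credentials   # hypothetical secret holding the bucket credentials
        key: accessKeyID
    secretAccessKey:
      valueFromSecret:
        name: appvault-credentials
        key: secretAccessKey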
For the Argo CD steps to work, you need to change the destination settings of the Argo CD application when you want to deploy it on a different cluster (and/or namespace).
On the Argo CD application page, click the App Details button, then the Manifest tab, and finally the Edit button.
In the text editor, update the destination.server and/or destination.namespace values, depending on whether you changed the cluster and/or the namespace, respectively. After you make the changes, click Save.
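Alternatively, you can change the destination from the Argo CD CLI, for example (with a placeholder API server URL for the disaster recovery cluster):

$ argocd app set ghost-demo --dest-server https://<dr-cluster-api-server>:443 --dest-namespace ghost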
Now you can go through any of the restore steps discussed in the previous sections and recover your application on the disaster recovery cluster.
Whether you’re currently exploring GitOps for its productivity, security, compliance, and reliability benefits, or you’re a seasoned GitOps practitioner, you likely understand that enterprise-grade disaster recovery is vital regardless of the application’s deployment model. You should also understand that it’s not necessary to sacrifice the benefits of GitOps to achieve these disaster recovery requirements.
NetApp Trident protect provides robust application-aware disaster recovery for all types of Kubernetes applications and can easily adhere to GitOps principles by storing application protection policies in the Git repository. In this blog, we took the following actions to achieve GitOps-based application disaster recovery:
Thanks for reading!