Tech ONTAP Blogs

Extending GitOps patterns to application data protection with NetApp Astra Control

MichaelHaigh
NetApp

 

Many organizations have successfully extended the DevOps operational framework to cover application infrastructure by using Git as the single source of truth. This practice, coined “GitOps,” offers a wide array of benefits, including increased productivity, improved security and compliance, increased reliability, and a built-in audit trail.

 

Argo CD is one of the most popular GitOps tools on the market today; it’s entirely open source and a CNCF project. Argo CD is extremely easy to set up, has a robust built-in GUI, and is great at abstracting the complexities of Kubernetes. Developers only need to commit code to their Git repository, and Argo CD picks up on those changes and automatically synchronizes them to the relevant infrastructure.

 

Regardless of where your organization is on its DevOps and GitOps journey, your Kubernetes applications require robust application-aware data protection and disaster recovery, just like your traditional applications. NetApp® Astra Control provides application-aware data protection, mobility, and disaster recovery for any workload running on any Kubernetes distribution. It’s available both as a fully managed service (Astra Control Service) and as self-managed software (Astra Control Center). It enables administrators to easily protect, back up, migrate, and create working clones of Kubernetes applications, through either its UI or robust APIs.

 


 

Through the use of label selectors during app management, it’s simple to have Astra Control protect only the Kubernetes volumes rather than an entire namespace. This approach is a perfect complement to GitOps environments: Astra Control manages the persistent data while Argo CD manages the application definitions and configurations. However, manually creating Astra Control protection policies for the persistent volumes after deployment, or manually backing up the volumes before application changes, is the antithesis of GitOps. These policies should instead be defined alongside our app definitions in our single source of truth, the Git repository. Thankfully, with Argo CD and actoolkit (a Python package of the open-source NetApp Astra toolkits), defining these policies in your Git repository is a very simple process. To find out how, read on.
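
As a preview, the heart of this approach is a single toolkit command, shown here with the values used throughout this walk-through (the cluster ID placeholder is gathered from a list clusters command, as we’ll see in the PostSync hook later):

# Manage only the resources labeled volume=persistent in the ghost namespace
# as an Astra Control application named ghost-argocd
actoolkit manage app ghost-argocd ghost <clusterID> -l volume=persistent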

 

Prerequisites

 

This blog uses Argo CD resource hooks to automatically manage and protect your GitOps-defined applications. If you’re following along step by step, the following resources are required:

 

  • An Astra Control Center (self-managed) or Astra Control Service (NetApp managed service) instance
  • Your Astra Control account ID, fully qualified domain name, and an API authorization token
  • A supported Kubernetes cluster (we’ll use an Azure Kubernetes Service cluster in this walk-through) managed by Astra Control
  • A workstation with git and kubectl installed, kubectl configured to use the Kubernetes cluster just mentioned, and a GitHub account

 

If you’re just looking for example Argo CD resource hooks that interact with Astra Control, skip ahead to the Repository Contents section.

 

Argo CD deployment 

 

We’ll use the getting started page to deploy Argo CD onto our Kubernetes cluster, but if you already have Argo CD deployed in your environment, skip to the next section. On your workstation CLI, run the following commands.

 

kubectl create namespace argocd 
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml 
kubectl patch svc argocd-server -n argocd -p '{"spec": {"type": "LoadBalancer"}}' 

 

These commands should produce a decent amount of output as they create the argocd namespace, apply the Argo CD Kubernetes manifest, and finally patch the argocd-server service type to be a load balancer for external access.

 

After a few moments, we’re ready to grab our access information for Argo CD. In this blog, we’ll use Argo CD’s GUI to concisely present information and to minimize the number of installation dependencies. However, in production workflows, you might want to use Argo CD's CLI.

 

$ kubectl -n argocd get svc argocd-server 
NAME            TYPE           CLUSTER-IP       EXTERNAL-IP    PORT(S)                      AGE 
argocd-server   LoadBalancer   172.16.166.248   20.88.186.98   80:30927/TCP,443:30077/TCP   70s 
$ kubectl -n argocd get secret argocd-initial-admin-secret -o jsonpath="{.data.password}" | base64 -d; echo 
lXMAleZ95RDhMtpo

 

Copy the external IP value from the output of the first command and paste it into your web browser. Then sign in with the username admin and the password copied from the output of the second command.
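
If you do opt for the CLI instead, the equivalent login step looks roughly like this, using the example external IP and password from the output above (--insecure skips verification of the default self-signed certificate):

argocd login 20.88.186.98 --username admin --password lXMAleZ95RDhMtpo --insecure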

 


 

Now that we have Argo CD installed, it’s time to set up our demo GitHub repository.

 

GitHub repository setup

 

Rather than following the (stateless) example application on the getting started page of the Argo CD documentation, we’ll use a stateful application: Ghost, a popular open-source publishing platform. We’ll make several changes to the Kubernetes YAML through Git commits, so we’ll use a fork to provide write access.

 

In your web browser, navigate to the MichaelHaigh/argocd-astra-dataonly repository on GitHub, and then click the Fork button in the upper right corner.

 


 

Select your user name as the owner, leave the repository name as argocd-astra-dataonly, optionally leave the description as is, and then click Create Fork.

 


 

In your workstation CLI, clone the repository (be sure to update the user name) and change into the new directory.

 

git clone https://github.com/<YourUsername>/argocd-astra-dataonly.git
cd argocd-astra-dataonly

 

Now that our Git repository is up and running, let’s investigate the YAML files in the repository.

 

Repository contents

 

In the ghost/ directory of our repository, you’ll find four Kubernetes YAML files. The first two files define the demo Kubernetes Ghost application, and the last two define Argo CD resource hooks that automate protection policies with Astra Control:

 

  • frontend.yaml: contains the front end of the Ghost application, including:
    • A service of type LoadBalancer
    • A persistent volume claim with an access mode of ReadWriteMany
    • A deployment with two replicas of the ghost-debian container image
  • backend.yaml: contains the back end of the Ghost application, including:
    • A service of type ClusterIP
    • A persistent volume claim with an access mode of ReadWriteOnce
    • A deployment with one replica of the MySQL container image
    • A config map with necessary application data
  • postsync-hook.yaml: a Kubernetes job that defines an Argo CD resource hook and is run a single time after the application initially syncs
  • presync-hook.yaml: a Kubernetes job that defines another resource hook, and runs every time prior to the application synchronizing

 

The front-end and back-end definitions should be quite straightforward, but the Argo CD resource hooks deserve a closer look. The following code shows the full contents of the PostSync hook for reference:

 

apiVersion: batch/v1
kind: Job
metadata:
  name: astra-manage-app
  annotations:
    argocd.argoproj.io/hook: PostSync
    argocd.argoproj.io/hook-delete-policy: HookFailed
spec:
  template:
    spec:
      volumes:
        - name: astra-control-config
          secret:
            secretName: astra-control-config
      containers:
      - name: alpine-actoolkit
        image: alpine:latest
        env:
          - name: APPNAME
            value: "ghost-argocd"
          - name: NAMESPACE
            value: "ghost"
          - name: LABEL
            value: "volume=persistent"
          - name: CLUSTERNAME
            value: "aks-eastus-cluster"
          - name: ACTOOLKIT_VERSION
            value: "2.6.2"
        command: ["/bin/sh"]
        args:
        - -c
        - >
          apk add py3-pip jq &&
          python3 -m pip install --upgrade pip &&
          python3 -m pip install actoolkit==$ACTOOLKIT_VERSION &&
          clusterID=$(actoolkit -o json list clusters -f $CLUSTERNAME | jq -r '.items[].id') &&
          for i in `actoolkit -o json list namespaces -u -c $clusterID -f $NAMESPACE | jq -r '.items[].clusterID'`;
          do
            echo actoolkit: managing app $APPNAME in namespace $NAMESPACE on cluster $i;
            actoolkit manage app $APPNAME $NAMESPACE $i -l $LABEL;
            sleep 5;
            for j in `actoolkit -o json list apps -f $APPNAME -c $CLUSTERNAME | jq -r '.items[].id'`;
            do
              echo actoolkit: creating protection policy for $APPNAME / $j;
              actoolkit create protection $j -g hourly  -m 0      -b 1 -s 1;
              actoolkit create protection $j -g daily   -H 0      -b 2 -s 2;
              actoolkit create protection $j -g weekly  -H 0 -W 1 -b 2 -s 2;
              actoolkit create protection $j -g monthly -H 0 -M 1 -b 2 -s 2;
            done
          done
        volumeMounts:
          - mountPath: /etc/astra-toolkits
            name: astra-control-config
            readOnly: true
      restartPolicy: Never
  backoffLimit: 1

 

Let’s break this hook down piece by piece. The beginning of the file defines the kind and metadata:

 

kind: Job
metadata:
  name: astra-manage-app
  annotations:
    argocd.argoproj.io/hook: PostSync
    argocd.argoproj.io/hook-delete-policy: HookFailed

 

Argo CD resource hooks can be any type of Kubernetes resource; however, they’re typically pods, jobs, or Argo Workflows. Both of our hooks are Kubernetes jobs. The metadata section has three key components:

 

  • A name is defined, rather than generateName, which in conjunction with the delete policy means the hook will run only once if successful.
  • It’s a PostSync hook, meaning that it runs after the application completes a sync, and all resources are in a healthy state.
  • The delete policy is HookFailed, which means that the hook is deleted only if a failure occurs, and in conjunction with the name definition, means that it will run only once.

 

At the beginning and the end of the spec definition, there are volume and volume mount definitions:

 

    spec:
      volumes:
        - name: astra-control-config
          secret:
            secretName: astra-control-config
      containers:
      …
        volumeMounts:
          - mountPath: /etc/astra-toolkits
            name: astra-control-config
            readOnly: true

 

This code references a Kubernetes secret that we’ll create momentarily, which contains the account ID, API token, and name/FQDN of your Astra Control instance mentioned in the prerequisites section. It mounts the secret to /etc/astra-toolkits, which is one of the directories that the Astra Control toolkit reads from.

 

Next, we have the first half of the main container definition:

 

      containers:
      - name: alpine-actoolkit
        image: alpine:latest
        env:
          - name: APPNAME
            value: "ghost-argocd"
          - name: NAMESPACE
            value: "ghost"
          - name: LABEL
            value: "volume=persistent"
          - name: CLUSTERNAME
            value: "aks-eastus-cluster"
          - name: ACTOOLKIT_VERSION
            value: "2.6.2"

 

This Kubernetes job uses Alpine, a lightweight Linux distribution, with the following environment variables configured:

 

  • APPNAME: defines the logical Astra Control application name
  • NAMESPACE: the Kubernetes namespace that contains the application
  • LABEL: the Kubernetes label selector to target only the persistent volumes rather than the entire application
  • CLUSTERNAME: the Kubernetes cluster name that contains the application; be sure to update this value to the name of your Kubernetes cluster in both the PreSync and PostSync hooks
  • ACTOOLKIT_VERSION: the version of actoolkit to install

 

        command: ["/bin/sh"]
        args:
        - -c
        - >
          apk add py3-pip jq &&
          python3 -m pip install --upgrade pip &&
          python3 -m pip install actoolkit==$ACTOOLKIT_VERSION &&
          clusterID=$(actoolkit -o json list clusters -f $CLUSTERNAME | jq -r '.items[].id') &&
          for i in `actoolkit -o json list namespaces -u -c $clusterID -f $NAMESPACE | jq -r '.items[].clusterID'`;
          do
            echo actoolkit: managing app $APPNAME in namespace $NAMESPACE on cluster $i;
            actoolkit manage app $APPNAME $NAMESPACE $i -l $LABEL;
            sleep 5;
            for j in `actoolkit -o json list apps -f $APPNAME -c $CLUSTERNAME | jq -r '.items[].id'`;
            do
              echo actoolkit: creating protection policy for $APPNAME / $j;
              actoolkit create protection $j -g hourly  -m 0      -b 1 -s 1;
              actoolkit create protection $j -g daily   -H 0      -b 2 -s 2;
              actoolkit create protection $j -g weekly  -H 0 -W 1 -b 2 -s 2;
              actoolkit create protection $j -g monthly -H 0 -M 1 -b 2 -s 2;
            done
          done

 

This shell command carries out the following actions:

 

  • Installs Python pip and jq
  • Ensures that pip is at the latest version
  • Installs actoolkit, which is a Python package of the Astra Control toolkit
  • Gathers the UUID of the cluster defined by the variable $CLUSTERNAME
  • Instantiates a for loop over the cluster ID of each namespace returned by an actoolkit list namespaces command; the command is filtered to output only currently unmanaged namespaces with the name $NAMESPACE on the cluster defined by $CLUSTERNAME
  • Prints the app name, namespace name, and cluster ID that are about to be managed to standard out for easy confirmation when you view container logs
  • Runs a manage app command with actoolkit, restricting the app definition to the label defined by $LABEL
  • Sleeps for 5 seconds
  • Instantiates a for loop based on the UUID gathered from a list apps command that has a filter to only output apps with the name $APPNAME within the cluster of $CLUSTERNAME
  • Prints a notification about the protection policy being created
  • Runs four create protection commands with actoolkit to create an hourly, daily, weekly, and monthly protection policy

 

In summary, our PostSync hook runs only once after the initial application sync and uses the Astra Control toolkit to manage and create a protection policy for the freshly deployed application.
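
If you’d like to validate these toolkit commands before committing the hook to Git, you can run the same sequence from your workstation, assuming Python 3 is installed and your Astra Control config file (created in the Secret creation section below) is in one of the directories the toolkit reads from:

python3 -m pip install actoolkit==2.6.2
# Gather the cluster ID, then run the same app query that the hook performs
actoolkit -o json list clusters -f aks-eastus-cluster | jq -r '.items[].id'
actoolkit -o json list apps -f ghost-argocd -c aks-eastus-cluster | jq -r '.items[].id'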

 

Next up we have our PreSync resource hook:

 

apiVersion: batch/v1
kind: Job
metadata:
  generateName: presync-astra-backup-
  annotations:
    argocd.argoproj.io/hook: PreSync
    argocd.argoproj.io/hook-delete-policy: BeforeHookCreation
spec:
  template:
    spec:
      volumes:
        - name: astra-control-config
          secret:
            secretName: astra-control-config
      containers:
      - name: alpine-actoolkit
        image: alpine:latest
        env:
          - name: APPNAME
            value: "ghost-argocd"
          - name: CLUSTERNAME
            value: "aks-eastus-cluster"
          - name: ACTOOLKIT_VERSION
            value: "2.6.2"
        command: ["/bin/sh"]
        args:
        - -c
        - >
          apk add py3-pip jq &&
          python3 -m pip install --upgrade pip &&
          python3 -m pip install actoolkit==$ACTOOLKIT_VERSION &&
          for i in `actoolkit -o json list apps -f $APPNAME -c $CLUSTERNAME | jq -r '.items[].id'`;
          do
            echo actoolkit: backing up app $APPNAME / $i;
            actoolkit create backup $i argo-presync-`date "+%Y%m%d%H%M%S"`;
          done
        volumeMounts:
          - mountPath: /etc/astra-toolkits
            name: astra-control-config
            readOnly: true
      restartPolicy: Never
  backoffLimit: 1

 

The PreSync hook is very similar to the PostSync hook, with a couple of key differences. First, the metadata section:

 

metadata:
  generateName: presync-astra-backup-
  annotations:
    argocd.argoproj.io/hook: PreSync
    argocd.argoproj.io/hook-delete-policy: BeforeHookCreation

 

  • A generateName field is used, which means the hook will run each time the application syncs.
  • It’s a PreSync hook, which means the hook will run before any other action during an application sync.
  • The delete policy is BeforeHookCreation, which means the previous hook will be deleted just before the next hook is run, and it will stay present in the application history until the next sync.

 

The command section is also very similar, but there are a couple of key differences:

 

          for i in `actoolkit -o json list apps -f $APPNAME -c $CLUSTERNAME | jq -r '.items[].id'`;
          do
            echo actoolkit: backing up app $APPNAME / $i;
            actoolkit create backup $i argo-presync-`date "+%Y%m%d%H%M%S"`;
          done

 

  • There is only a single for loop. It iterates over the app UUID returned from a list apps command filtered for the name $APPNAME within the cluster $CLUSTERNAME.
  • A backup is created with a name that starts with argo-presync- and then ends with a timestamp.

 

In summary, our PreSync hook runs before each application sync and backs up the application. In conjunction with the PostSync hook, our application will be automatically managed by Astra Control, and backed up before every change, enabling easy application restores if a destructive change occurs.
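
At any point, you can verify the backups that the PreSync hook has created by running the toolkit from your workstation (a minimal check; the -o json output can also be filtered with jq, as in the hooks):

actoolkit list backups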

 

Domain modification

 

For application access, this Ghost application uses a domain name in the front-end deployment environment variable section. If you’re following along step by step, choose one of the following options:

 

  1. Substitute the current astrademo.net domain in the frontend.yaml file with a domain (or subdomain) that you own, and commit those changes to your Git repository.
  2. Leave the astrademo.net domain as is, but update your computer’s hosts file with an entry mapping the ghost.astrademo.net host to the front-end load balancer IP after deployment.

 

Both of these options are highly dependent on several factors (DNS provider and host OS, respectively), so these steps are left up to you. If you’re using option 1, go ahead and make the domain name update in the frontend.yaml file (line 98) and commit those changes to your forked Git repository now.
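
If you’re using option 2 on a Linux or macOS workstation, the hosts file entry can be added after deployment with something like the following (example IP shown; substitute the front-end load balancer IP you gather later):

# Map the demo hostname to the Ghost load balancer IP
echo "20.232.250.23 ghost.astrademo.net" | sudo tee -a /etc/hosts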

 

Secret creation

 

Argo CD is unopinionated on secret management, enabling administrators to use the secret manager of their choice through a wide range of integrations. If you’re using Argo CD in production, we highly recommend using one of the supported secret management tools.

 

This demo is focused on resource hooks and automatic application data protection, so we’re going to sidestep requiring setup and configuration of a secret manager. However, it’s a bad practice to put secrets into a Git repository (production or not), so we’ll manually define our secrets outside Argo CD and apply them through kubectl.

 

We’ll first create our NetApp Astra Control API config file. Run the following commands, but be sure to substitute your Astra Control account ID, API authorization token, and project name (astra.netapp.io if running Astra Control Service, otherwise your Astra Control Center fully qualified domain name). If you’re not sure of these values, you can find additional information in the readme of the Astra Control toolkits page on GitHub.

 

API_TOKEN=NL1bSP5712pFCUvoBUOi2JX4xUKVVtHpW6fJMo0bRa8=
ACCOUNT_ID=12345678-abcd-4efg-1234-567890abcdef
ASTRA_PROJECT=astra.netapp.io
cat <<EOF > config.yaml
headers:
  Authorization: Bearer $API_TOKEN
uid: $ACCOUNT_ID
astra_project: $ASTRA_PROJECT
verifySSL: True
EOF

 

If done correctly, your config.yaml file should look like this:

 

$ cat config.yaml 
headers:
  Authorization: Bearer NL1bSP5712pFCUvoBUOi2JX4xUKVVtHpW6fJMo0bRa8=
uid: 12345678-abcd-4efg-1234-567890abcdef
astra_project: astra.netapp.io
verifySSL: True

 

Next, we’re going to create our application passwords, which can be any value you desire. Take particular note of the Ghost password value, because you’ll need it in a later step.

 

MYSQL_RPASSWORD=$(echo -n "ChangeToAnythingYouWant" | base64)
MYSQL_PASSWORD=$(echo -n "ChangeToAnythingYouWant2" | base64)
GHOST_PASSWORD=$(echo -n "ChangeToAnythingYouWant3" | base64)
cat <<EOF >secrets.yaml
apiVersion: v1
kind: Secret
metadata:
  name: ghost-mysql
  labels:
    name: mysql
    instance: ghost
type: Opaque
data:
  mysql-root-password: $MYSQL_RPASSWORD
  mysql-password: $MYSQL_PASSWORD
---
apiVersion: v1
kind: Secret
metadata:
  name: ghost
  labels:
    name: ghost
    instance: ghost
type: Opaque
data:
  ghost-password: $GHOST_PASSWORD
EOF

 

Finally, we’ll create our Kubernetes namespace, create a secret from the Astra Control config file, and apply the secrets manifest we just created.

 

kubectl create namespace ghost
kubectl -n ghost create secret generic astra-control-config --from-file=config.yaml
kubectl -n ghost apply -f secrets.yaml

 

Ensure that you receive responses about the namespace and the three secrets being created, and then move on to the next section, where we define our Argo CD application.
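
As a quick sanity check, listing the secrets in the namespace should show all three (alongside any service account tokens that your Kubernetes version creates automatically):

kubectl -n ghost get secrets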

 

Argo CD application creation

 

Now that we have Argo CD, the GitHub repository, and secrets created, we’re ready to deploy our demo Ghost application. Head back to your browser and click the Create Application button in the middle of the Argo CD UI.

 


 

In the wizard panel that appears, click the Edit as YAML button in the upper right, which will allow us to easily paste in our application definition.

 


 

Copy the following application definition and paste it into the browser text field.

 

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: ghost-demo
spec:
  destination:
    name: ''
    namespace: ghost
    server: 'https://kubernetes.default.svc'
  source:
    path: ghost
    repoURL: 'https://github.com/<YourUsername>/argocd-astra-dataonly'
    targetRevision: HEAD
  project: default
  syncPolicy:
    automated:
      prune: true
      selfHeal: false

 

Edit the repoURL to point at the GitHub repository you created earlier, and then click Save in the upper-right corner.

 


 

We can now verify that the fields in the application definition have been filled out by the YAML file, including our general information (name, project, sync options), the application source information (repository URL, branch information, and folder/path information), and destination information (the same Kubernetes cluster that’s running Argo CD). After verification, click Create at the top.

 


 

Argo CD now has a ghost-demo tile on the main application page. Click the tile to view the application in detail.

 


 

We should see our application in a syncing state. Our PreSync hook is the first thing that runs; however, the for loop in the PreSync hook looks only for managed applications, and because our app isn’t managed yet, the hook won’t take any real action this first time.

 

After a few more moments, the PreSync hook will “complete,” and the rest of the application will begin deployment. The status of most objects should turn green, with the Ghost and MySQL pods taking the longest.

 


 

After about 5 minutes, all the Ghost Kubernetes resources should be in a healthy state, and the astra-manage-app PostSync hook should appear. Click the pod tile that’s associated with the astra-manage-app job to expand the information.

 


 

In the pod summary panel that appears, click the Logs tab.

 


 

Click the Follow button to “tail” the pod logs, and enable line wrapping so we don’t need to scroll to the right.

 


 

Scroll down to the bottom of the log output. After a few moments, you should see output from the PostSync script stating that the application is being managed, and the protection policies are being created.

 


 

Close out of the pod detail panel, and you should see that the application is in a Healthy, Synced, and Sync OK state. At this point, there are no further actions for Argo CD to take, other than monitoring the Git repository for any future changes.

 


 

Finally, head over to your Astra Control UI, and verify that our application has been managed. Also note the definition column with the label selector.

 


 

Click the ghost-argocd application link, and then select the Resources tab. Due to the label selector defined in our PostSync resource hook, we can see that the two volumes are the only resources protected by Astra Control.
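
You can make the same verification from the CLI; assuming the persistent volume claims in the repository carry the volume=persistent label that our hooks reference, this command should return exactly those two volumes:

kubectl -n ghost get pvc -l volume=persistent --show-labels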

 


 

Now that our application has been deployed by Argo CD and managed by Astra Control, let’s proceed with the application configuration.

 

Application configuration

 

Later in this blog, we’ll restore (and clone) our application after human error accidentally deletes the application. To make it obvious that Astra Control has successfully restored our application rather than simply redeploying it, let’s modify our website.

 

First, gather the external IP of the Ghost load balancer service by using kubectl.

 

$ kubectl -n ghost get svc
NAME          TYPE           CLUSTER-IP      EXTERNAL-IP     PORT(S)        AGE
ghost         LoadBalancer   172.16.20.135   20.232.250.23   80:31444/TCP   141m
ghost-mysql   ClusterIP      172.16.16.122   <none>          3306/TCP       141m

 

Depending on the DNS option you chose earlier, either point your domain’s DNS record at the external IP value (20.232.250.23 here, but yours will be different), or add an entry to your local hosts file mapping ghost.astrademo.net to that IP.

 

When the DNS change is complete, navigate to the site, where you should be presented with a default User’s Blog site.

 


 

Next, we’ll need to log in with the administrator credentials we defined earlier, which we can do by appending /ghost to our URL.

 

The email address should be user@example.com (set by the GHOST_EMAIL environment variable in the frontend.yaml file), and the password is the GHOST_PASSWORD value set earlier in the secret creation section. Click Sign In.

 


 

If the login is successful, you should be presented with your site dashboard. Let’s edit the default “Coming soon” post by clicking Posts in the left column, and then Coming Soon.

 


 

Change the “Coming soon” title to something more obvious—for example, “Argo CD – Astra Control Demo”—and then click Update.

 


 

Navigate back to the base URL and verify that the previous “Coming soon” post has been updated with our changes.

 


 

Now that our website has been updated with an obvious change, let’s test our Argo CD PreSync hook.

 

Git modification and PreSync hook backup

 

Imagine that our site is gaining in popularity, and we determine that it’s necessary to change our front-end replicas from 2 to 3 to meet the increased demand. Open ghost/frontend.yaml with your favorite text editor, and modify the deployment to change the replicas from 2 to 3. When you’ve finished, run the following two commands and ensure that your output matches.

 

$ git status
On branch main
Your branch is up to date with 'origin/main'.

Changes not staged for commit:
  (use "git add <file>..." to update what will be committed)
  (use "git restore <file>..." to discard changes in working directory)
	modified:   ghost/frontend.yaml

no changes added to commit (use "git add" and/or "git commit -a")
$ git diff
diff --git a/ghost/frontend.yaml b/ghost/frontend.yaml
index e2f738e..0e2b402 100644
--- a/ghost/frontend.yaml
+++ b/ghost/frontend.yaml
@@ -48,7 +48,7 @@ spec:
     matchLabels:
       name: ghost
       instance: ghost
-  replicas: 2
+  replicas: 3
   strategy:
     type: RollingUpdate
   template:
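
Incidentally, if you prefer the CLI to a text editor, a one-liner like the following produces the same change (GNU sed shown; on macOS, use sed -i '' instead):

sed -i 's/replicas: 2/replicas: 3/' ghost/frontend.yaml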

 

Be sure to leave any secrets as untracked files, because we don’t want to commit them to our Git repository. Run the next three commands to push our changes to our Git repository.

 

git add ghost/frontend.yaml 
git commit -m 'updating replicas from 2 to 3'
git push
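
If you don’t want to wait for Argo CD to notice the commit on its own, you can also trigger a sync immediately with the Argo CD CLI (assuming you’re logged in, as shown in the deployment section):

argocd app sync ghost-demo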

 

In your browser window, monitor the Argo CD application page for any changes. By default, Argo CD polls the Git repository every 3 minutes, so that’s the maximum amount of time you’ll need to wait. You should eventually see a new presync-astra-backup job created with its underlying pod. Click the pod tile to open the detailed panel.

 


 

Click the Logs tab and scroll to the bottom. You should eventually see some output indicating that a backup of our Ghost application is starting, and then confirmation that the backup is complete.

 


 

Close the pod detail panel to view the main application page. You should see that a third pod replica has just been created.

 


 

In NetApp Astra Control, we can select our ghost-argocd application, select the Data Protection tab and then the Backups tab, and verify that our argo-presync-timestamp backup is in a healthy state.

 


 

Our Kubernetes application has now been successfully deployed, managed, and protected, all through GitOps principles. Let’s continue to see how restoration and cloning work if an unplanned disaster or planned migration occurs.

 

Application restoration

 

It’s unfortunately all too common for Kubernetes administrators to accidentally delete a namespace, often through a mix-up in contexts (for example, see these articles on Stack Overflow, Reddit, and Medium). With stateless applications and GitOps principles, it’s a simple process to have your applications restored, potentially in a fully automated fashion. Stateful applications have historically been much more complicated (or impossible) to restore; however, Astra Control makes stateful restoration almost as simple as stateless restoration. Let’s start by “accidentally” deleting our namespace:

 

$ kubectl delete namespace ghost
namespace "ghost" deleted

 

It might take a couple of minutes for the prompt to return, because every Kubernetes resource in the namespace is first deleted. Head over to the Argo CD UI, and we’ll see the application in a missing and out-of-sync state.

 


 

Switch back to your Astra Control application page, and note the banner stating that the application is unavailable. Then click the Actions button, and then Restore.

 


 

In the wizard that appears, leave Restore to Original Namespace selected, choose the Backups tab, and then select the argo-presync-timestamp backup from earlier. Click Next.

 


 

Verify that the summary is correct, type restore into the text box, and click the Restore button.

 


 

It will take Astra Control several minutes to restore the application. While we’re waiting, let’s manually restore our secrets. (In a production environment, you would use your external secret manager instead.)

 

kubectl -n ghost create secret generic astra-control-config --from-file=config.yaml
kubectl -n ghost apply -f secrets.yaml

 

After Astra Control has finished restoring our application, the final step is to initiate a sync in Argo CD. Click the Sync button on the application page, and in the pop-up on the right panel, leave the defaults and click Synchronize.

 


 

The first step in the sync process is our PreSync hook, which will create a new backup of the application. This is a valid backup that can be used at a later date because Astra Control is managing only the persistent volumes, which have already been restored.

 

When the backup is complete, Argo CD deploys the remainder of the application (everything except the persistent volumes, which Astra Control has already restored). The final step in the Argo CD sync is our PostSync resource hook, which runs again because it was deleted along with the namespace. However, it takes no real action, because the for loop in its command looks only for currently unmanaged namespaces (and the ghost namespace is already managed).

 


 

When the application is healthy, the last step is to get the new IP address of the Ghost load balancer, and then update your DNS (or local hosts file) accordingly.

 

$ kubectl -n ghost get svc
NAME          TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)        AGE
ghost         LoadBalancer   172.16.58.166    20.241.146.238   80:30587/TCP   12m
ghost-mysql   ClusterIP      172.16.224.154   <none>           3306/TCP       12m

 

After DNS has been updated, navigate to your domain of choice, and verify that the post title is the updated “Argo CD – Astra Control Demo” version.

 


 

In summary, to easily recover from the deletion of our namespace and restore our application, we performed the following steps:

 

  1. Restore the persistent volumes through Astra Control.
  2. Reapply the secrets. (In production, use a secret manager.)
  3. Synchronize the app through Argo CD.
  4. Update DNS.

 

Application migration

 

Whether an unplanned natural disaster hits or business needs dictate that we move our application to a different geography or cloud provider, application migration with Astra Control and Argo CD is also a simple process. In this example, we’ll migrate our application to a new namespace. However, the workflow is exactly the same for cloning to a new cluster.

 

Note: Depending on your Argo CD application settings (particularly automatic pruning and automatic self-healing), the following workflow might delete the source application. Be sure to thoroughly test all disaster recovery workflows with your organization’s Argo CD application settings before moving to production.

 

Open the application page of Astra Control, click the Actions dropdown, and then click Clone.

 


 

In the wizard that appears, enter the new application name (ghost-newns-argocd), the destination cluster (aks-eastus-cluster, which is the same as the source, but feel free to use a different destination cluster), and the destination namespace (ghost-newns). All of these values can be anything of your choosing, but take note of them all, because you’ll need the values in upcoming steps. Click Next to advance the wizard.

 


 

Optionally, enable the Clone From an Existing Snapshot or Backup checkbox. We’ll leave it unselected, which clones the active application. Click Next.

 


 

Review the clone information, and if it’s correct, click Clone.

 


 

It will take Astra Control several minutes to clone the persistent volumes. While waiting, apply the secrets (in a production environment, use a secret manager) to the new namespace (ghost-newns in this example, but this depends on your choice in step 1 of the wizard, shown earlier). If you performed a cross-cluster clone, be sure to change your Kubernetes context before running these commands.

 

kubectl -n ghost-newns create secret generic astra-control-config --from-file=config.yaml
kubectl -n ghost-newns apply -f secrets.yaml

 

After the secrets have been applied, we must update both Argo CD resource hooks; otherwise, future Git changes will modify the old application, rather than the new application. In the ghost/presync-hook.yaml file, note the env section:

 

        env:
          - name: APPNAME
            value: "ghost-argocd"
          - name: CLUSTERNAME
            value: "aks-eastus-cluster"
          - name: ACTOOLKIT_VERSION
            value: "2.6.2"

 

Update the APPNAME and CLUSTERNAME values to the selections you made in step 1 of the wizard. In our case, these values are ghost-newns-argocd and aks-eastus-cluster, respectively.

 

For the ghost/postsync-hook.yaml file, we have a similar env section:

 

        env:
          - name: APPNAME
            value: "ghost-argocd"
          - name: NAMESPACE
            value: "ghost"
          - name: LABEL
            value: "volume=persistent"
          - name: CLUSTERNAME
            value: "aks-eastus-cluster"
          - name: ACTOOLKIT_VERSION
            value: "2.6.2"

 

Again, update the APPNAME, NAMESPACE, and CLUSTERNAME values based on step 1 of the wizard. In our case, these values are ghost-newns-argocd, ghost-newns, and aks-eastus-cluster, respectively.
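
If you’d like to script these edits rather than making them by hand, a couple of sed one-liners can handle the substitutions used in this example (GNU sed shown; adjust the values to match your own wizard selections):

# Update APPNAME in both hooks, then NAMESPACE in the PostSync hook
# (CLUSTERNAME is unchanged in this example)
sed -i 's/value: "ghost-argocd"/value: "ghost-newns-argocd"/' ghost/presync-hook.yaml ghost/postsync-hook.yaml
sed -i 's/value: "ghost"/value: "ghost-newns"/' ghost/postsync-hook.yaml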

 

When you’ve updated the hooks, commit and push the changes to your Git repository with the following commands.

 

git add ghost/
git commit -m 'updating hooks to new app'
git push

 

Last, on the Argo CD application page, click the App Details button, then the Manifest tab, and finally the Edit button.

 


 

In the text editor, update the destination.server and/or destination.namespace values, depending on whether you changed the cluster and/or the namespace, respectively. After you make the changes, click Save.

 


 

Depending on how quickly you updated the application manifest in Argo CD, you might have to wait for a PreSync hook to complete before the application’s state is switched over to the new namespace and/or cluster. After several minutes, use the CLI to verify that everything was re-created as expected.

 

$ kubectl -n ghost-newns get all,pvc
NAME                                                        READY   STATUS      AGE
pod/astra-manage-app-8665f                                  0/1     Completed   8m10s
pod/ghost-d9ccd4c86-gnffp                                   1/1     Running     9m35s
pod/ghost-d9ccd4c86-pdxlr                                   1/1     Running     9m35s
pod/ghost-d9ccd4c86-sgvxx                                   1/1     Running     9m35s
pod/ghost-mysql-944fbf6c9-gzljc                             1/1     Running     9m35s
pod/presync-astra-backup-7c83799-presync-1678731419-mffwv   0/1     Completed   14m

NAME                  TYPE           CLUSTER-IP       EXTERNAL-IP      PORT(S)        AGE
service/ghost         LoadBalancer   172.16.189.203   20.246.134.157   80:30890/TCP   9m36s
service/ghost-mysql   ClusterIP      172.16.146.131   <none>           3306/TCP       9m36s

NAME                          READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/ghost         3/3     3            3           9m35s
deployment.apps/ghost-mysql   1/1     1            1           9m35s

NAME                                    DESIRED   CURRENT   READY   AGE
replicaset.apps/ghost-d9ccd4c86         3         3         3       9m35s
replicaset.apps/ghost-mysql-944fbf6c9   1         1         1       9m35s

NAME                                                        COMPLETIONS   DURATION   AGE
job.batch/astra-manage-app                                  1/1           36s        8m10s
job.batch/presync-astra-backup-7c83799-presync-1678731419   1/1           5m14s      14m

NAME                                   STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS               AGE
persistentvolumeclaim/ghost            Bound    pvc-df0a2543-dc2d-43ff-9de9-e5fd8bec7030   100Gi      RWX            netapp-anf-perf-standard   36m
persistentvolumeclaim/mysql-pv-claim   Bound    pvc-91a95825-a3f4-4392-8659-b972efd625c8   100Gi      RWO            netapp-anf-perf-standard   36m

 

Be sure to also update your DNS (or local hosts file) to the new external IP of the Ghost service, and then verify that your new application is running as expected in your web browser.

 


 

In summary, whether you need to migrate your application due to a planned or unplanned event, it’s an easy process when you follow these steps:

 

  1. Clone the application’s persistent volumes through Astra Control to a new namespace and/or cluster, depending on business need.
  2. Apply the secrets to the new namespace and/or cluster. (In production, use a secret manager.)
  3. Update the app name, namespace, and/or cluster name in the resource hooks, and push the changes to Git.
  4. Update the namespace and/or cluster in the Argo CD app manifest.
  5. Update DNS.

 

Conclusion

 

Whether you’re currently exploring GitOps for its productivity, security, compliance, and reliability benefits, or you’re a seasoned GitOps practitioner, you likely understand that enterprise-grade disaster recovery is vital regardless of the application’s deployment model. You should also understand that it’s not necessary to sacrifice the benefits of GitOps to achieve these disaster recovery requirements.

 

NetApp Astra Control provides robust application-aware disaster recovery for all types of Kubernetes applications and can easily adhere to GitOps principles by storing application protection policies in the Git repository. In this blog, we took the following actions to achieve GitOps-based application disaster recovery:

 

  • Deployed Argo CD by following its getting started page
  • Cloned the MichaelHaigh/argocd-astra-dataonly repository
  • Covered the contents of the application YAML and the Argo CD resource hooks
  • Created our secrets outside Argo CD (never put secrets in a Git repository!)
  • Deployed our Ghost application with Argo CD
  • Configured our application
  • Accidentally deleted our Kubernetes namespace
  • Restored our Ghost application from an automated PreSync hook backup and our Git application definitions
  • Validated that our restored application was fully functional
  • Migrated our Ghost application to a new namespace (and/or a new cluster)
  • Validated that our migrated application was fully functional

 

If you’re looking to use Argo CD resource hooks with Astra Control for a unique use case, the most critical components to understand are the PreSync and PostSync resource hooks, in particular the for loops in the args section of the container spec. If you’re looking for more information about how to construct these commands, see the toolkit documentation, specifically the json section of the optional arguments page.

 

Thanks for reading!

 
