Tech ONTAP Blogs

Automating registry failover for disaster recovery with Astra Control post-restore hooks

PatricU
NetApp

 

 

By Michael Haigh (@MichaelHaigh) and Patric Uebele, Technical Marketing Engineers at NetApp

 

Introduction

Disaster recovery for business-critical Kubernetes applications often requires using replicated private registries to pull the container images locally on the DR site in case of a complete failure (including the registry) of the primary site. This can be the case for both on-premises and cloud-based Kubernetes deployments. Therefore, it’s essential for the backup system used to protect these critical Kubernetes applications to have the ability to modify Kubernetes configurations after a restore. That’s also important for other aspects that might need to be changed on the DR site, like ingress configuration.

 

NetApp® Astra™ Control provides application-aware data protection, mobility, and disaster recovery for any workload running on any Kubernetes distribution. It’s available both as a fully managed service (Astra Control Service; ACS) and as self-managed software (Astra Control Center; ACC). Astra Control enables administrators to easily protect, back up, migrate, and create working clones of Kubernetes applications, through either its UI or robust APIs.

 

Astra Control offers various types of execution hooks—custom scripts that you can configure to run in conjunction with a data protection operation of a managed app. With a post-restore hook, you can, for example, change the container image URL after an application restore to a DR site. Read on to find out how.

 

Setup

In this blog, we use the post-restore URL-rewrite hook example together with Amazon Elastic Container Registry (ECR) cross-region replication (CRR) to demonstrate how to restore an NGINX sample application, originally running on an Amazon Elastic Kubernetes Service (EKS) cluster in the eu-west-1 region, to a DR cluster in the eu-north-1 region. The NGINX container image is pulled from private image repositories in the respective regions.

 

Note: Although we use ECR in this blog post, the overall process should be the same regardless of your private container registry of choice, in a cloud or on premises.

 

Enable CRR for ECR registry to DR site

After creating a private registry in Amazon Web Services (AWS), we follow the steps in the AWS documentation to configure private image replication from the eu-west-1 to the eu-north-1 region, then verify the resulting replication configuration:

 

 

~# aws ecr describe-registry --region eu-west-1
{
    "registryId": "467886448844",
    "replicationConfiguration": {
        "rules": [
            {
                "destinations": [
                    {
                        "region": "eu-north-1",
                        "registryId": "467886448844"
                    }
                ]
            }
        ]
    }
}

 

 

Now all content pushed to repositories in eu-west-1 is automatically replicated to eu-north-1. Amazon ECR keeps the destination and source synchronized.
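For reference, the replication rule shown above can also be applied from the CLI rather than the console. The following sketch writes the rule to a local file; the registry ID and regions are the ones from this walkthrough, so substitute your own. Applying it requires AWS credentials with permission to call ecr:PutReplicationConfiguration.

```shell
# Replication rule used in this walkthrough (substitute your own
# registry ID and regions). This step only writes the file locally.
cat > replication.json <<'EOF'
{
  "rules": [
    {
      "destinations": [
        {
          "region": "eu-north-1",
          "registryId": "467886448844"
        }
      ]
    }
  ]
}
EOF

# Apply it to the source region's registry (requires AWS credentials):
#   aws ecr put-replication-configuration --region eu-west-1 \
#     --replication-configuration file://replication.json
```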

Prepare repository and replication

First, we use the AWS console to create a private Amazon ECR repository named nginx in the eu-west-1 region to store the NGINX container image:

PatricU_17-1689945761508.png

Figure 1:  Amazon ECR repository for nginx.

We take note of the push command for the repository:

PatricU_18-1689945761518.png

Figure 2: Push commands for the nginx repository.

 

We have already pulled the nginx image to our local repository:

 

 

~# docker pull nginx:latest
latest: Pulling from library/nginx
3ae0c06b4d3a: Pull complete
efe5035ea617: Pull complete
a9b1bd25c37b: Pull complete
f853dda6947e: Pull complete
38f44e054f7b: Pull complete
ed88a19ddb46: Pull complete
495e6abbed48: Pull complete
Digest: sha256:08bc36ad52474e528cc1ea3426b5e3f4bad8a130318e3140d6cfe29c8892c7ef
Status: Downloaded newer image for nginx:latest
docker.io/library/nginx:latest

 

 

So we can push it to the Amazon ECR repository:

 

 

~# aws ecr get-login-password --region eu-west-1 | docker login --username AWS --password-stdin 467886448844.dkr.ecr.eu-west-1.amazonaws.com

Login Succeeded
~# docker tag nginx:latest 467886448844.dkr.ecr.eu-west-1.amazonaws.com/nginx:latest

~# docker push 467886448844.dkr.ecr.eu-west-1.amazonaws.com/nginx:latest
The push refers to repository [467886448844.dkr.ecr.eu-west-1.amazonaws.com/nginx]
9e96226c58e7: Pushed
12a568acc014: Pushed
7757099e19d2: Pushed
bf8b62fb2f13: Pushed
4ca29ffc4a01: Pushed
a83110139647: Pushed
ac4d164fef90: Pushed
latest: digest: sha256:d2b2f2980e9ccc570e5726b56b54580f23a018b7b7314c9eaff7e5e479c78657 size: 1778

 

 

Using the AWS CLI, we can find the repositoryUri of the repository in the eu-west-1 region:

 

 

~# aws ecr describe-repositories --region eu-west-1
{
    "repositories": [
        {
            "repositoryArn": "arn:aws:ecr:eu-west-1:467886448844:repository/nginx",
            "registryId": "467886448844",
            "repositoryName": "nginx",
            "repositoryUri": "467886448844.dkr.ecr.eu-west-1.amazonaws.com/nginx",
            "createdAt": "2023-06-20T11:37:12+00:00",
            "imageTagMutability": "MUTABLE",
            "imageScanningConfiguration": {
                "scanOnPush": false
            },
            "encryptionConfiguration": {
                "encryptionType": "AES256"
            }
        }
    ]
}

 

 


Amazon ECR automatically created the nginx repository on the DR site due to the configured replication:

 

 

~# aws ecr describe-repositories --region eu-north-1
{
    "repositories": [
        {
            "repositoryArn": "arn:aws:ecr:eu-north-1:467886448844:repository/nginx",
            "registryId": "467886448844",
            "repositoryName": "nginx",
            "repositoryUri": "467886448844.dkr.ecr.eu-north-1.amazonaws.com/nginx",
            "createdAt": "2023-06-20T14:09:02+02:00",
            "imageTagMutability": "MUTABLE",
            "imageScanningConfiguration": {
                "scanOnPush": false
            },
            "encryptionConfiguration": {
                "encryptionType": "AES256"
            }
        }
    ]
}

 

 

Amazon ECR then automatically replicated the nginx image to the DR site eu-north-1:

 

 

~# aws ecr list-images --repository-name nginx --region eu-north-1
{
    "imageIds": [
        {
            "imageDigest": "sha256:d2b2f2980e9ccc570e5726b56b54580f23a018b7b7314c9eaff7e5e479c78657",
            "imageTag": "latest"
        }
    ]
}

 

 

Note that the repository URI on the DR site is different from the URI on the primary site: 467886448844.dkr.ecr.eu-north-1.amazonaws.com/nginx:latest.

 

Therefore, in the event of a disaster at the primary site, we must make sure that the container images are pulled from the DR site’s repository. Otherwise, the applications will not start.
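Before relying on the DR registry in a failover, it's therefore worth confirming that replication has caught up. A minimal sketch, assuming the AWS CLI is configured; the same_digest helper is our own illustration, not part of any AWS or NetApp tooling:

```shell
# Hypothetical helper: true only if both digests are non-empty and equal.
same_digest() {
    [ -n "$1" ] && [ "$1" = "$2" ]
}

# Fetch the digest of nginx:latest in each region and compare
# (requires AWS credentials, so the calls are shown commented out):
#   src=$(aws ecr describe-images --repository-name nginx --region eu-west-1 \
#         --image-ids imageTag=latest \
#         --query 'imageDetails[0].imageDigest' --output text)
#   dst=$(aws ecr describe-images --repository-name nginx --region eu-north-1 \
#         --image-ids imageTag=latest \
#         --query 'imageDetails[0].imageDigest' --output text)
#   same_digest "$src" "$dst" || echo "replication not caught up yet"
```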

Deploy demo application

Now we can deploy the demo application on the EKS cluster on the primary site using the following manifest, which installs an NGINX deployment, a LoadBalancer service, and a persistent volume backed by Amazon Elastic Block Store (EBS) into the namespace demo. The NGINX container image is pulled from the Amazon ECR repository in the eu-west-1 region:

 

 

~# cat sample-app.yaml
apiVersion: v1
kind: Namespace
metadata:
  name: demo
---
apiVersion: v1
kind: Service
metadata:
  name: demo-service
  namespace: demo
  labels:
    app: demo
spec:
  ports:
    - port: 80
  selector:
    app: demo
    tier: frontend
  type: LoadBalancer
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-deployment
  namespace: demo
  labels:
    app: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
      tier: frontend
  template:
    metadata:
      labels:
        app: demo
        tier: frontend
    spec:
      containers:
        - image: 467886448844.dkr.ecr.eu-west-1.amazonaws.com/nginx:latest
          imagePullPolicy: Always
          name: demo
          ports:
            - containerPort: 80
              name: demo
          volumeMounts:
          - mountPath: /data
            name: data
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: nginxdata
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nginxdata
  namespace: demo
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 2Gi
  storageClassName: ebs-sc

~# kubectl apply -f sample-app.yaml
namespace/demo created
service/demo-service created
deployment.apps/demo-deployment created
persistentvolumeclaim/nginxdata created

 

 

We check that the deployment was successful:

 

 

~# kubectl get all,pvc -n demo
NAME                                   READY   STATUS    RESTARTS   AGE
pod/demo-deployment-687897c95f-dd99s   1/1     Running   0          4m31s

NAME                   TYPE           CLUSTER-IP     EXTERNAL-IP                                                              PORT(S)        AGE
service/demo-service   LoadBalancer   10.100.230.2   a1b269d5b5f124c1492ff9b041d46f95-855142063.eu-west-1.elb.amazonaws.com   80:32690/TCP   4m31s

NAME                              READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/demo-deployment   1/1     1            1           4m31s

NAME                                         DESIRED   CURRENT   READY   AGE
replicaset.apps/demo-deployment-687897c95f   1         1         1       4m31s

NAME                              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/nginxdata   Bound    pvc-619cf182-731b-4134-a9af-2f15548a4140   2Gi        RWO            ebs-sc         4m31s

 

 

And that the NGINX container image was pulled from the correct repository:

 

 

~# kubectl describe pod/demo-deployment-687897c95f-dd99s -n demo | grep "Image:"
    Image:          467886448844.dkr.ecr.eu-west-1.amazonaws.com/nginx

 

 

Manage and protect demo application in ACS

The EKS cluster demo1-euwest1 on which we deployed the demo application is already managed by Astra Control Service. Therefore, we can manage the demo application simply by defining its namespace as an application in ACS.

PatricU_19-1689945761534.png

Figure 3: Managing the demo application in ACS.

To protect the demo application regularly, we create a protection schedule with hourly backups to an Amazon S3 bucket:

PatricU_20-1689945761544.png

Figure 4: Protection policy with regular backups for demo application.

 

Install Astra hook components

To change the container image URL from region eu-west-1 to region eu-north-1 after a restore, we use a modified post-restore hook from our collection of example execution hooks in the Verda GitHub project. There, the post-restore URL-rewrite hook can be adapted for our purpose. It consists of two parts: the actual post-restore execution hook script url-rewrite.sh, which swaps all container image URLs between two registries when invoked, and a hook execution container definition rewrite-infra.yaml, which we need to modify to fit our environment.

 

Because the url-rewrite.sh execution hook needs to run in a container with kubectl installed, the rewrite-infra.yaml manifest deploys a generic Alpine container and installs kubectl into it. It also creates a ServiceAccount and a RoleBinding with the necessary permissions in the application namespace.
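Conceptually, the hook's job is straightforward: for every container image in the namespace that points at one registry, repoint it at the other. The following is a simplified, hypothetical sketch of that logic, not the actual url-rewrite.sh from the Verda repository: it reads "deployment container image" triples and prints the kubectl command that would swap the registry prefix.

```shell
#!/bin/sh
# Registry URLs from this walkthrough; the real hook receives them as
# hook arguments instead of hardcoding them.
REG_A="467886448844.dkr.ecr.eu-west-1.amazonaws.com"
REG_B="467886448844.dkr.ecr.eu-north-1.amazonaws.com"

# Read "deployment container image" triples on stdin and print the
# kubectl command that swaps the registry prefix, in either direction.
rewrite_cmds() {
    while read -r deploy container image; do
        case "$image" in
            "$REG_A"/*) new="$REG_B${image#"$REG_A"}" ;;
            "$REG_B"/*) new="$REG_A${image#"$REG_B"}" ;;
            *) continue ;;  # image is in neither registry: leave it alone
        esac
        printf 'kubectl -n demo set image deployment/%s %s=%s\n' \
            "$deploy" "$container" "$new"
    done
}

# In-cluster, the triples for single-container deployments could come from:
#   kubectl -n demo get deploy -o jsonpath='{range .items[*]}{.metadata.name}{" "}{.spec.template.spec.containers[0].name}{" "}{.spec.template.spec.containers[0].image}{"\n"}{end}'
```

The real script in Verda is more general; this sketch only illustrates the registry swap.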

 

To adapt and deploy the hook components, we clone the Verda GitHub repository and change into the Verda/URL-rewrite directory:

 

 

~# git clone https://github.com/NetApp/Verda.git
Cloning into 'Verda'...
remote: Enumerating objects: 206, done.
remote: Counting objects: 100% (55/55), done.
remote: Compressing objects: 100% (47/47), done.
remote: Total 206 (delta 24), reused 22 (delta 8), pack-reused 151
Receiving objects: 100% (206/206), 64.63 KiB | 12.93 MiB/s, done.
Resolving deltas: 100% (90/90), done.
~# cd Verda/URL-rewrite

 

 

First, we need to adapt the manifest for the helper tools to our sample application. We make sure that the namespace values are set to the namespace demo of the sample app and that the labels fit our application needs:

 

 

~# cat rewrite-infra.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kubectl-ns-admin-sa
  namespace: demo
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubectl-ns-admin-sa
  namespace: demo
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin
subjects:
- kind: ServiceAccount
  name: kubectl-ns-admin-sa
  namespace: demo
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: astra-hook-deployment
  namespace: demo
  labels:
    app: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      serviceAccountName: kubectl-ns-admin-sa
      containers:
      - name: alpine-astra-hook
        image: alpine:latest
        env:
          - name: KUBECTL_VERSION
            value: "1.23.9"
        command: ["/bin/sh"]
        args:
        - -c
        - >
          apk add curl jq &&
          curl -sLO https://storage.googleapis.com/kubernetes-release/release/v${KUBECTL_VERSION}/bin/linux/amd64/kubectl &&
          mv kubectl /usr/bin/kubectl &&
          chmod +x /usr/bin/kubectl &&
          trap : TERM INT; sleep infinity & wait

 

 

Assuming that we’re not allowed to use a public registry for the Alpine image, we also create a (replicated) private ECR repository for it and push the image there from our local repository, following the same steps as above for the NGINX image:

 

 

~# docker push 467886448844.dkr.ecr.eu-west-1.amazonaws.com/alpine:latest
The push refers to repository [467886448844.dkr.ecr.eu-west-1.amazonaws.com/alpine]
78a822fe2a2d: Pushed
latest: digest: sha256:25fad2a32ad1f6f510e528448ae1ec69a28ef81916a004d3629874104f8a7f70 size: 528

 

 

 We find that the repositoryUri of the alpine ECR repository on the DR site eu-north-1 is 467886448844.dkr.ecr.eu-north-1.amazonaws.com/alpine:

 

 

~# aws ecr describe-repositories --region eu-north-1 --repository-name alpine
{
    "repositories": [
        {
            "repositoryArn": "arn:aws:ecr:eu-north-1:467886448844:repository/alpine",
            "registryId": "467886448844",
            "repositoryName": "alpine",
            "repositoryUri": "467886448844.dkr.ecr.eu-north-1.amazonaws.com/alpine",
            "createdAt": "2023-06-26T12:54:40+02:00",
            "imageTagMutability": "MUTABLE",
            "imageScanningConfiguration": {
                "scanOnPush": false
            },
            "encryptionConfiguration": {
                "encryptionType": "AES256"
            }
        }
    ]
}

 

 

And we confirm that the Alpine image was replicated successfully to the DR site:

 

 

~# aws ecr list-images --repository-name alpine --region eu-north-1
{
    "imageIds": [
        {
            "imageDigest": "sha256:25fad2a32ad1f6f510e528448ae1ec69a28ef81916a004d3629874104f8a7f70",
            "imageTag": "latest"
        }
    ]
}

 

 

The updated manifest for the helper tools with the location of the Alpine image on the DR site is:

 

 

~# cat rewrite-infra-ECR-DR.yaml
apiVersion: v1
kind: ServiceAccount
metadata:
  name: kubectl-ns-admin-sa
  namespace: demo
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: kubectl-ns-admin-sa
  namespace: demo
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: admin
subjects:
- kind: ServiceAccount
  name: kubectl-ns-admin-sa
  namespace: demo
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: astra-hook-deployment
  namespace: demo
  labels:
    app: demo
spec:
  replicas: 1
  selector:
    matchLabels:
      app: demo
  template:
    metadata:
      labels:
        app: demo
    spec:
      serviceAccountName: kubectl-ns-admin-sa
      containers:
      - name: alpine-astra-hook
        image: 467886448844.dkr.ecr.eu-north-1.amazonaws.com/alpine:latest
        env:
          - name: KUBECTL_VERSION
            value: "1.23.9"
        command: ["/bin/sh"]
        args:
        - -c
        - >
          apk add curl jq &&
          curl -sLO https://storage.googleapis.com/kubernetes-release/release/v${KUBECTL_VERSION}/bin/linux/amd64/kubectl &&
          mv kubectl /usr/bin/kubectl &&
          chmod +x /usr/bin/kubectl &&
          trap : TERM INT; sleep infinity & wait

 

 

 We can now deploy the hook components into the namespace of the sample application and confirm that the helper pod is running:

 

 

~# kubectl apply -f rewrite-infra-ECR-DR.yaml
serviceaccount/kubectl-ns-admin-sa created
rolebinding.rbac.authorization.k8s.io/kubectl-ns-admin-sa created
deployment.apps/astra-hook-deployment created

~# kubectl get all,pvc -n demo
NAME                                         READY   STATUS    RESTARTS   AGE
pod/astra-hook-deployment-6979d88447-h2gdd   1/1     Running   0          3s
pod/demo-deployment-656d5f8d76-lxc7z         1/1     Running   0          74s

NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP                                                               PORT(S)        AGE
service/demo-service   LoadBalancer   10.100.186.156   a24c3420eebf5439e94e8d4086c7de9c-1238393128.eu-west-1.elb.amazonaws.com   80:31642/TCP   74s

NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/astra-hook-deployment   1/1     1            1           3s
deployment.apps/demo-deployment         1/1     1            1           74s

NAME                                               DESIRED   CURRENT   READY   AGE
replicaset.apps/astra-hook-deployment-6979d88447   1         1         1       4s
replicaset.apps/demo-deployment-656d5f8d76         1         1         1       75s

 

 

 After confirming that the Alpine image was pulled from the private ECR repository on the DR site, we’re all set to add the post-restore URL-rewrite hook to the sample application in ACS:

 

 

~# kubectl -n demo describe pod/astra-hook-deployment-6979d88447-h2gdd |grep Image:
    Image:         467886448844.dkr.ecr.eu-north-1.amazonaws.com/alpine:latest

 

 

Add post-restore execution hook

First, we need to upload the post-restore URL-rewrite hook script to the library of execution hooks in our Astra Control account. In Account -> Scripts, click Add:

PatricU_21-1689945761564.png

Figure 5: Add post-restore URL-rewrite hook to ACS account.

Then we upload the url-rewrite.sh script from the cloned Verda repository on our laptop to Astra Control and name the script accordingly:

PatricU_22-1689945761570.png

Figure 6: Upload hook script from cloned Verda GitHub repository.

In the Application Details view of the demo application in Astra Control, we can now add the post-restore hook under the Execution Hooks tab from the script library of Astra Control:

PatricU_23-1689945761588.png

Figure 7: Add post-restore execution hook to demo application.

In the next screen, we configure the post-restore hook with these details:

  1. Operation:
    • Select Post-restore from the drop-down list.
  2. Hook arguments (mandatory for this specific hook):
    • region A (467886448844.dkr.ecr.eu-west-1.amazonaws.com) and region B (467886448844.dkr.ecr.eu-north-1.amazonaws.com). Order does not matter.
  3. Hook name:
    • Enter a unique name for the hook.
  4. Hook filter (defines the container in which the hook script will be executed):
    • Hook filter type: Select Container Name from the dropdown list.
    • Enter alpine-astra-hook as a regular expression in Regular Expression 2 (RE2) syntax.

 

PatricU_24-1689945761593.png

Figure 8: Configuring the post-restore URL-rewrite hook.

Now we select the url-rewrite.sh script from the list of available hook scripts in our Astra Control account:

  PatricU_25-1689945761595.png

Figure 9: Add hook script.

After a final review of the hook configuration, we add the hook to the demo application:

PatricU_26-1689945761598.png

Figure 10: Final check of hook configuration.

In the details view of the execution hook, we can check that we set the container image match rules correctly and that the post-restore URL-rewrite hook will be executed in the alpine-astra-hook container:

  PatricU_27-1689945761604.png

Figure 11: Confirm container match of post-restore hook.

Test application restore

Now we can test application restores to the DR site eu-north-1 and locally to the same cluster.

Restore to DR site

To test a restore to the DR site, we add a second cluster demo2-eunorth1 in the DR location eu-north-1 to ACS and manage it:

PatricU_28-1689945761608.png

Figure 12:  Adding cluster in DR location to ACS.

 

In the Data Protection tab of the Application Details view of our demo application, we select the backup we want to use for the restore from the list of backups:

PatricU_29-1689945761613.png

Figure 13: Starting to restore the demo app from backup.

In the next screen, we select the demo2-eunorth1 cluster as the destination cluster from the dropdown list and enter demo as the destination namespace:

PatricU_30-1689945761620.png

Figure 14: Specifying the backup destination.

In the final summary screen, we confirm that the post-restore URL-rewrite hook is part of the restore process and then we start the restore:

PatricU_31-1689945761627.png

Figure 15: Checking restore settings.

Because there’s not much persistent data stored in the demo app’s PV, the restore finishes in a couple of seconds. In the Astra Control activity log, we can confirm that the post-restore URL-rewrite execution hook ran successfully in the correct container after the restore, taking 3.9 seconds:

 

 

Clone/restore from managed application 'demo' in cluster 'demo1-euwest1' to application 'demo' in cluster 'demo2-eunorth1' started.

Timestamp: 2023/07/07 13:07 UTC

 

 

 

 

Managed application 'demo' in cluster 'demo2-eunorth1' was cloned/restored from application 'demo' in cluster 'demo1-euwest1'. Duration: 24.3s

Timestamp: 2023/07/07 13:07 UTC

Execution hook 'post-restore-url-rewrite' is now running as part of the post stage of the restore operation for managed application 'demo' in cluster 'demo2-eunorth1'.  It is part of an adhoc restore operation. The hook source 'url-rewrite.sh' with checksum '8dfc77f4a5e78e5aa4451fffb37d5108' is running with the args ["467886448844.dkr.ecr.eu-west-1.amazonaws.com", "467886448844.dkr.ecr.eu-north-1.amazonaws.com"]. The hook is running on container 'alpine-astra-hook' with image '467886448844.dkr.ecr.eu-north-1.amazonaws.com/alpine:latest' on pod 'astra-hook-deployment-6979d88447-hcxz7' in namespace 'demo'.

Timestamp: 2023/07/07 13:08 UTC

 

 

 

 

Execution hook 'post-restore-url-rewrite' successfully ran as part of the post stage of the restore operation for managed application 'demo' in cluster 'demo2-eunorth1'. It was part of an adhoc restore operation. The hook source 'url-rewrite.sh' with checksum '8dfc77f4a5e78e5aa4451fffb37d5108' was run with the args ["467886448844.dkr.ecr.eu-west-1.amazonaws.com", "467886448844.dkr.ecr.eu-north-1.amazonaws.com"]. The hook ran on container 'alpine-astra-hook' with image '467886448844.dkr.ecr.eu-north-1.amazonaws.com/alpine:latest' on pod 'astra-hook-deployment-6979d88447-hcxz7' in namespace 'demo'. Duration: 3.9s

Timestamp: 2023/07/07 13:08 UTC

 

 

The demo app comes up on the DR cluster demo2-eunorth1:

 

 

~# kubectl get all,pvc -n demo
NAME                                         READY   STATUS    RESTARTS   AGE
pod/astra-hook-deployment-6979d88447-w96gp   1/1     Running   0          73s
pod/demo-deployment-6c8b5598c8-gs5kl         1/1     Running   0          48s

NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP                                                              PORT(S)        AGE
service/demo-service   LoadBalancer   10.100.180.206   a090197b6fed343c2903feb5a2861040-626015765.eu-north-1.elb.amazonaws.com   80:31523/TCP   74s

NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/astra-hook-deployment   1/1     1            1           73s
deployment.apps/demo-deployment         1/1     1            1           73s

NAME                                               DESIRED   CURRENT   READY   AGE
replicaset.apps/astra-hook-deployment-6979d88447   1         1         1       73s
replicaset.apps/demo-deployment-649db8994          0         0         0       73s
replicaset.apps/demo-deployment-6b48fbcb5d         0         0         0       73s
replicaset.apps/demo-deployment-6c8b5598c8         1         1         1       48s

NAME                              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/nginxdata   Bound    pvc-571796d0-a40c-467a-bdae-e4c21cafb291   2Gi        RWO            ebs-sc         117s

 

 

 Checking the container image of the sample application, we see that it was pulled from the nginx ECR repository on the DR site eu-north-1:

 

 

~# kubectl -n demo describe pod/demo-deployment-6c8b5598c8-gs5kl | grep Image:
    Image:          467886448844.dkr.ecr.eu-north-1.amazonaws.com/nginx:latest

 

 

Local restore

When doing a local restore, either to the same cluster demo1-euwest1 or to another cluster in the same location, the post-restore URL-rewrite hook is also executed. In the following example, we restored the sample application to the namespace demo-restore1 on the primary cluster. We see that the post-restore hook changed the image URL to the DR repository, so the image was pulled from there:

 

 

~# kubectl get all,pvc -n demo-restore1
NAME                                         READY   STATUS    RESTARTS   AGE
pod/astra-hook-deployment-6979d88447-6dcbw   1/1     Running   0          29s
pod/demo-deployment-6c8b5598c8-sl6xw         1/1     Running   0          14s

NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP                                                             PORT(S)        AGE
service/demo-service   LoadBalancer   10.100.251.170   ab54b8e9bb9eb420da6d9bfb56941da4-12997857.eu-west-1.elb.amazonaws.com   80:31116/TCP   29s

NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/astra-hook-deployment   1/1     1            1           30s
deployment.apps/demo-deployment         1/1     1            1           30s

NAME                                               DESIRED   CURRENT   READY   AGE
replicaset.apps/astra-hook-deployment-6979d88447   1         1         1       30s
replicaset.apps/demo-deployment-649db8994          0         0         0       30s
replicaset.apps/demo-deployment-6b48fbcb5d         0         0         0       30s
replicaset.apps/demo-deployment-6c8b5598c8         1         1         1       15s

NAME                              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/nginxdata   Bound    pvc-2dd977df-95d0-495e-89c5-6ed0137c2110   2Gi        RWO            ebs-sc         31s
~# kubectl -n demo-restore1 describe pod/demo-deployment-6c8b5598c8-sl6xw | grep Image:
    Image:          467886448844.dkr.ecr.eu-north-1.amazonaws.com/nginx:latest

 

 

If you don’t want local restores to pull container images from the DR registry, the post-restore URL-rewrite hook can be disabled in Astra Control before restoring, in the Execution Hooks tab of the Application Details view in the UI:

PatricU_32-1689945761631.png

Figure 16: Disabling the post-restore execution hook.

Now the post-restore hook will not be executed after a restore:

PatricU_33-1689945761637.png

Figure 17: Restoring to the primary cluster with disabled post-restore hook.

After restoring locally to namespace demo-restore2 with the post-restore hook disabled, the container images are pulled from the ECR repository on the primary site:

 

 

~# kubectl get all,pvc -n demo-restore2
NAME                                         READY   STATUS    RESTARTS   AGE
pod/astra-hook-deployment-6979d88447-x9tlp   1/1     Running   0          2m40s
pod/demo-deployment-649db8994-gdkws          1/1     Running   0          2m39s

NAME                   TYPE           CLUSTER-IP      EXTERNAL-IP                                                               PORT(S)        AGE
service/demo-service   LoadBalancer   10.100.219.96   a7a8e86b33fa944769fc725ce14e3b86-1039386188.eu-west-1.elb.amazonaws.com   80:32072/TCP   2m40s

NAME                                    READY   UP-TO-DATE   AVAILABLE   AGE
deployment.apps/astra-hook-deployment   1/1     1            1           2m40s
deployment.apps/demo-deployment         1/1     1            1           2m39s

NAME                                               DESIRED   CURRENT   READY   AGE
replicaset.apps/astra-hook-deployment-6979d88447   1         1         1       2m40s
replicaset.apps/demo-deployment-649db8994          1         1         1       2m39s
replicaset.apps/demo-deployment-6b48fbcb5d         0         0         0       2m39s

NAME                              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/nginxdata   Bound    pvc-4f003afa-d580-4416-839f-50ea9a667250   2Gi        RWO            ebs-sc         2m40s
~# kubectl -n demo-restore2 describe pod/demo-deployment-649db8994-gdkws | grep Image:
    Image:          467886448844.dkr.ecr.eu-west-1.amazonaws.com/nginx:latest

 

 

Conclusion

In certain scenarios, it’s crucial to change K8s application definitions after a restore. With its execution hooks framework, Astra Control offers custom actions that can be configured to run in conjunction with a data protection operation of a managed app.

 

Astra Control supports the following types of execution hooks, based on when they can be run:

  • Pre-snapshot
  • Post-snapshot
  • Pre-backup
  • Post-backup
  • Post-restore

The Verda GitHub project contains a collection of example execution hooks for various applications and scenarios.

 

In this blog post, we showed how to leverage execution hooks to change the registry URL of container images after an application restore to a DR site with a different repository URL, following the sample post-restore URL-rewrite hook in Verda. The same mechanism can also be used to change an ingress configuration after a restore.

Take advantage of NetApp’s continuing innovation

To see for yourself how easy it is to protect persistent Kubernetes applications with Astra Control, using either its UI or the powerful Astra Toolkit, apply for a free trial. Get started today!
