Tech ONTAP Blogs

Unlocking the power of NetApp ASA r2 systems for Kubernetes block storage

Aparna
NetApp

As organizations adopt Kubernetes at scale, simplicity has become a key design principle: users want to do more with limited time and resources. This is where NetApp® ASA r2 systems help, by simplifying the storage footprint for Kubernetes applications. In this article, we’ll explore the key features of the ASA r2 systems and demonstrate how to integrate them with your Kubernetes workloads. So, let's dive in.

 

Introducing the ASA r2 systems

 

The NetApp ASA r2 systems, which include the NetApp ASA A1K, ASA A90, ASA A70, ASA A50, ASA A30, and ASA A20 models, offer a unified hardware and software solution tailored to the specific needs of SAN-only customers. Built on the new NetApp ONTAP® design center architecture, these systems introduce a revolutionary approach to storage provisioning, making them an ideal choice for modern Kubernetes workloads.

 

A disaggregated system design

 

One of the key differentiators of the ASA r2 systems is their disaggregated architecture. Unlike traditional systems, they don’t expose the concept of aggregates to the user. Instead, these systems treat the LUN (logical unit number) as a first-class citizen, eliminating the need to wrap LUNs inside volumes. This design enables a simplified, streamlined experience for managing block storage.

 

NetApp Trident and CSI integration

 

To enable integration with ASA r2 systems, NetApp Trident software, the Container Storage Interface (CSI)–compliant storage orchestrator, supports provisioning block storage on them over the iSCSI protocol starting with the 25.02 release. The best part is that if you’re an existing Trident customer, you can transition to ASA r2 systems with minimal changes. All you need to do is provide the right credentials in the Trident backend, and everything else is determined dynamically.
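
If you’re not sure which Trident version your cluster is running, tridentctl can confirm it. Here’s a minimal sketch, assuming Trident is deployed in the trident namespace.

# Confirm that the installed Trident version is 25.02 or later
tridentctl version -n trident

# If Trident isn't installed yet, one option is to install it with tridentctl
tridentctl install -n trident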

 

Simplified provisioning process

 

Whether you’re a new Trident user or an existing one, the process of provisioning storage with Trident is straightforward. Here are the simple steps.

 

  • Step 1: Create a Trident backend. Start by creating a Trident backend with the ontap-san storage driver. You can do this either by configuring a TridentBackendConfig (TBC) custom resource using the Kubernetes-native approach or by using a custom JSON file with tridentctl, a command-line utility for managing Trident. The configuration is similar to any other ontap-san backend; the only changes are the user name and password. You can either use the cluster management LIF with administrator credentials or specify a particular storage virtual machine (SVM) with its management LIF and credentials (a cluster-scoped variant is sketched after the examples below).

 

# Kubernetes secret required for creating Trident backend from TBC
[root@scs000571921-1 demo]# cat secret.yaml 
apiVersion: v1
kind: Secret
metadata:
  name: asar2-backend-secret
type: Opaque
stringData:
  username: <username>
  password: <password>

[root@scs000571921-1 demo]# kubectl create -f secret.yaml -n trident
secret/asar2-backend-secret created

[root@scs000571921-1 demo]# kubectl get secret asar2-backend-secret -n trident
NAME                   TYPE     DATA   AGE
asar2-backend-secret   Opaque   2      89s

# Kubernetes CR TridentBackendConfig (TBC)
[root@scs000571921-1 demo]# cat trident-backend-config.yaml 
apiVersion: trident.netapp.io/v1
kind: TridentBackendConfig
metadata:
  name: asar2-san-backend-tbc
spec:
  version: 1
  backendName: asar2-san-backend
  storageDriverName: ontap-san
  managementLIF: 1.1.1.1
  svm: svm0
  credentials:
    name: asar2-backend-secret

# Or, Trident backend json
[root@scs000571921-1 demo]# cat backend.json 
{
   "version": 1,
   "storageDriverName": "ontap-san",
   "backendName": "asar2-san-backend",
   "managementLIF": "1.1.1.1",
   "svm": "svm0",
   "username": "<username>",
   "password": "<password>"
}
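
For reference, here’s a minimal sketch of a cluster-scoped variant of the same backend, assuming cluster administrator credentials and the cluster management LIF; the IP address, SVM name, and credentials are placeholders for values from your environment.

# Cluster-scoped Trident backend json (illustrative values)
{
   "version": 1,
   "storageDriverName": "ontap-san",
   "backendName": "asar2-san-backend-cluster",
   "managementLIF": "<cluster-management-IP>",
   "svm": "svm0",
   "username": "<cluster-admin-username>",
   "password": "<cluster-admin-password>"
}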

 

  • Step 2: Add the backend. Once the backend is configured, add it to Trident using either kubectl (with the TBC manifest) or tridentctl (with the JSON file). Either tool registers the newly configured backend with Trident and makes it available for use.

 

# Create Trident Backend via kubectl
[root@scs000571921-1 demo]# kubectl create -f trident-backend-config.yaml -n trident
tridentbackendconfig.trident.netapp.io/asar2-san-backend-tbc created

[root@scs000571921-1 demo]# kubectl get tbc -n trident
NAME                    BACKEND NAME        BACKEND UUID                           PHASE   STATUS
asar2-san-backend-tbc   asar2-san-backend   44ab00c8-e24f-4a02-977a-566c08f13654   Bound   Success

[root@scs000571921-1 demo]# tridentctl -n trident get b
+-------------------+----------------+--------------------------------------+--------+------------+---------+
|       NAME        | STORAGE DRIVER |                 UUID                 | STATE  | USER-STATE | VOLUMES |
+-------------------+----------------+--------------------------------------+--------+------------+---------+
| asar2-san-backend | ontap-san      | 44ab00c8-e24f-4a02-977a-566c08f13654 | online | normal     |       0 |
+-------------------+----------------+--------------------------------------+--------+------------+---------+

# Or, create Trident Backend via tridentctl

[root@scs000571921-1 demo]# tridentctl create backend -f backend.json -n trident
+-------------------+----------------+--------------------------------------+--------+------------+---------+
|       NAME        | STORAGE DRIVER |                 UUID                 | STATE  | USER-STATE | VOLUMES |
+-------------------+----------------+--------------------------------------+--------+------------+---------+
| asar2-san-backend | ontap-san      | 462d2965-68e2-492b-83d6-ee65ba79af96 | online | normal     |       0 |
+-------------------+----------------+--------------------------------------+--------+------------+---------+

 

  • Step 3: Define a storage class. Create a storage class that references the backend type of the storage driver you require. This step lets you define the characteristics of the storage you want to provision dynamically (an expanded variant is sketched after the example below).

 

[root@scs000571921-1 demo]# cat sc.yaml 
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: asar2-san-sc
parameters:
  backendType: ontap-san
  storagePools: "asar2-san-backend:.*"
provisioner: csi.trident.netapp.io

[root@scs000571921-1 demo]# kubectl create -f sc.yaml 
storageclass.storage.k8s.io/asar2-san-sc created

[root@scs000571921-1 demo]# kubectl get sc
NAME           PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
asar2-san-sc   csi.trident.netapp.io   Delete          Immediate           false                  2m
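
If you want more control over the provisioned volumes, the storage class can carry additional parameters. Here’s a sketch that assumes ext4 as the file system and enables online expansion; adjust it to your needs.

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: asar2-san-ext4-sc
parameters:
  backendType: ontap-san
  storagePools: "asar2-san-backend:.*"
  fsType: ext4                # file system created on the LUN
provisioner: csi.trident.netapp.io
allowVolumeExpansion: true    # allow PVCs of this class to be resized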

 

Now that everything is ready for dynamic provisioning of storage on the ASA r2 system, let's see an example of how to use it.

 

  • Step 4: Create a PVC. Define a PersistentVolumeClaim (PVC) that specifies the amount of storage you need and references the appropriate storage class. This step ensures that your Kubernetes application has access to the required block storage.

 

[root@scs000571921-1 demo]# cat pvc.yaml 
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: asar2-san-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: asar2-san-sc

[root@scs000571921-1 demo]# kubectl create -f pvc.yaml 
persistentvolumeclaim/asar2-san-pvc created

 

  • Step 5: Confirm PVC binding. After the PVC is created, verify that it’s successfully bound to a persistent volume (PV). This confirmation ensures that the block storage is ready for use by your applications.

 

[root@scs000571921-1 demo]# kubectl get pvc
NAME            STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
asar2-san-pvc   Bound    pvc-23b443f7-3df1-4d15-a106-4f407b8ff4cf   1Gi        RWO            asar2-san-sc   2m5s

 

  • Step 6: Use the PVC. Congratulations! You’re now ready to use the PVC in any pod of your choice. Mount the PVC in your pod's specification, and your application will have seamless access to the high-performance block storage provided by the ASA r2 systems (a quick write test is sketched after the example below).

 

[root@scs000571921-1 demo]# cat pod.yaml 
apiVersion: v1
kind: Pod
metadata:
  name: asa-r2-san-pod
spec:
  containers:
  - image: nginx:alpine
    name: nginx
    volumeMounts:
    - mountPath: /mnt/pvc
      name: local-storage
  nodeSelector:
    kubernetes.io/arch: amd64
    kubernetes.io/os: linux
  volumes:
  - name: local-storage
    persistentVolumeClaim:
      claimName: asar2-san-pvc

[root@scs000571921-1 demo]# kubectl create -f pod.yaml 
pod/asa-r2-san-pod created

[root@scs000571921-1 demo]# kubectl get po
NAME             READY   STATUS    RESTARTS   AGE
asa-r2-san-pod   1/1     Running   0          41s

[root@scs000571921-1 demo]# kubectl exec -it asa-r2-san-pod -- /bin/ash -c "mount | fgrep mnt/pvc" 
/dev/mapper/3600a098078304b2d793f587a79497556 on /mnt/pvc type ext4 (rw,seclabel,relatime,stripe=16)
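
As a quick sanity check, you can write and read back a file on the mounted path from inside the pod; the file name below is arbitrary.

# Write a small file to the PVC-backed mount and read it back
kubectl exec -it asa-r2-san-pod -- /bin/ash -c "echo hello > /mnt/pvc/hello.txt && cat /mnt/pvc/hello.txt"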

 

Troubleshooting made easy

 

If you encounter any issues, we've got you covered with some handy commands for troubleshooting. Use kubectl describe on the problematic resource to gather detailed information, and check the Trident logs with kubectl or tridentctl for more insight into the system's behavior. If backend creation itself is the problem, describing the TridentBackendConfig resource (sketched after the logs below) surfaces its phase and any error reported by Trident.

 

[root@scs000571921-1 demo]# kubectl describe pvc asar2-san-pvc
Name:          asar2-san-pvc
Namespace:     default
StorageClass:  asar2-san-sc
Status:        Bound
Volume:        pvc-23b443f7-3df1-4d15-a106-4f407b8ff4cf
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: csi.trident.netapp.io
               volume.kubernetes.io/storage-provisioner: csi.trident.netapp.io
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      1Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Used By:       <none>
Events:
  Type    Reason                 Age                    From                                                                                            Message
  ----    ------                 ----                   ----                                                                                            -------
  Normal  Provisioning           7m52s                  csi.trident.netapp.io_trident-controller-7b8bc4df65-q6gzd_a47e7fca-4039-44c1-9a3e-0ce1abf8659b  External provisioner is provisioning volume for claim "default/asar2-san-pvc"
  Normal  ExternalProvisioning   7m51s (x3 over 7m52s)  persistentvolume-controller                                                                     waiting for a volume to be created, either by external provisioner "csi.trident.netapp.io" or manually created by system administrator
  Normal  ProvisioningSuccess    7m37s                  csi.trident.netapp.io                                                                           provisioned a volume
  Normal  ProvisioningSucceeded  7m37s                  csi.trident.netapp.io_trident-controller-7b8bc4df65-q6gzd_a47e7fca-4039-44c1-9a3e-0ce1abf8659b  Successfully provisioned volume pvc-23b443f7-3df1-4d15-a106-4f407b8ff4cf

[root@scs000571921-1 demo]# kubectl -n trident get po
NAME                                  READY   STATUS    RESTARTS   AGE
trident-controller-7b8bc4df65-q6gzd   6/6     Running   0          37h
trident-node-linux-2hxgp              2/2     Running   0          37h
trident-node-linux-4n8pn              2/2     Running   0          37h

[root@scs000571921-1 demo]# kubectl logs trident-controller-7b8bc4df65-q6gzd -n trident
Defaulted container "trident-main" out of: trident-main, trident-autosupport, csi-provisioner, csi-attacher, csi-resizer, csi-snapshotter
time="2025-03-11T21:19:49Z" level=debug msg="Node updated in cache." logLayer=csi_frontend name=scs000571921-1 requestID=91d2ec94-41c3-432e-b0bf-12663b56bc97 requestSource=Kubernetes workflow="node=update"

[root@scs000571921-1 demo]# tridentctl -n trident logs
trident-controller log:
time="2025-03-11T21:19:49Z" level=debug msg="Node updated in cache." logLayer=csi_frontend name=scs000571921-1 requestID=91d2ec94-41c3-432e-b0bf-12663b56bc97 requestSource=Kubernetes workflow="node=update"
time="2025-03-11T21:19:49Z" level=debug msg="Node updated in cache." logLayer=csi_frontend name=scs000571921-2 requestID=998d09a6-b644-407c-9f41-99e4c136c097 requestSource=Kubernetes workflow="node=update"
time="2025-03-11T21:20:04Z" level=debug msg="Node updated in cache." logLayer=csi_frontend name=scs000571921-1 requestID=912fa020-21f3-4e15-8c43-1c89ed54db80 requestSource=Kubernetes workflow="node=update"

 

Conclusion

 

With the power of NetApp ASA r2 systems and the integration provided by NetApp Trident, provisioning block storage for your Kubernetes applications has never been easier. We hope this article has provided you with the knowledge and confidence to configure your Kubernetes workloads on ASA r2 systems. Happy configuring, and may your Kubernetes journey be smooth and successful.
