Tech ONTAP Blogs

Unlocking the power of NetApp ASA r2 systems for Kubernetes block storage with NVMe/TCP

hnikhil
NetApp

As organizations adopt Kubernetes at scale, simplicity has become a key design principle. Users want to do more with limited time and resources. This is where NetApp® ASA r2 systems help: by simplifying the storage footprint for Kubernetes applications.

 

NetApp Trident software, our Container Storage Interface (CSI)–compliant storage orchestrator, already supports these systems over the iSCSI protocol. Trident 25.06 extends that support to NVMe over TCP (NVMe/TCP), bringing the additional scalability and performance benefits of NVMe/TCP to ASA r2 systems.

In this article, we’ll explore the cutting-edge features of the ASA r2 systems, demonstrate how to seamlessly integrate them with your Kubernetes workloads, and show how to use them in Trident with NVMe/TCP. So, let's dive in.

 

Introducing the ASA r2 systems

 

Before we begin, let’s look at what ASA r2 systems are and, at a high level, what they offer. The NetApp ASA r2 systems, which include the NetApp ASA A1K, ASA A90, ASA A70, ASA A50, ASA A30, and ASA A20 models, offer a unified hardware and software solution tailored to the specific needs of SAN-only customers. Built on the new NetApp ONTAP® design center architecture, these systems introduce a revolutionary approach to storage provisioning, which makes them an ideal choice for modern Kubernetes workloads.

One of the key differentiators of the ASA r2 systems is their disaggregated architecture. Unlike traditional systems, they don’t expose the concept of aggregates to the user. Instead, these systems treat the NVMe namespace as a first-class citizen, eliminating the need for wrapping NVMe namespaces inside volumes. This innovative design enables a simplified, streamlined experience for managing block storage.

 

Trident and CSI integration

 

To enable integration with ASA r2 systems, Trident 25.06 now supports provisioning block storage using NVMe/TCP. The best part is that if you’re an existing Trident customer, you can transition to ASA r2 systems with minimal changes: provide the right credentials in the Trident back end, and everything else is determined dynamically.

 

Simple provisioning process

 

Whether you’re a new or an existing Trident user, the process of provisioning storage with Trident is straightforward. Here are the steps.

 

  • Step 1: Create a Trident back end. Start by creating a Trident back end with the storage driver ontap-san. You can do this either by configuring the TridentBackendConfig (TBC) custom resource definition using the Kubernetes-native approach or by using a custom JSON file with tridentctl, a command-line utility for managing Trident. The configuration is similar to any other ontap-san NVMe back end, with the only changes being the user name and password. You can either use a cluster management IP with administrator credentials or specify a specific storage virtual machine (SVM) with its management IP and credentials. For more information, refer to ONTAP SAN driver details in the Trident documentation.

    # Kubernetes secret required for creating Trident backend from TBC
    [root@scs000711920 demo]# cat secret.yaml
    apiVersion: v1
    kind: Secret
    metadata:
      name: asa-r2-nvme-secret
    type: Opaque
    stringData:
      username: <username>
      password: <password>
    
    [root@scs000711920 demo]# kubectl create -f secret.yaml -n trident
    secret/asa-r2-nvme-secret created
    
    [root@scs000711920 demo]# kubectl get secret asa-r2-nvme-secret -n trident
    NAME                 TYPE     DATA   AGE
    asa-r2-nvme-secret   Opaque   2      9s
    
    # Kubernetes CR TridentBackendConfig (TBC)
    [root@scs000711920 demo]# cat tbc.yaml 
    apiVersion: trident.netapp.io/v1
    kind: TridentBackendConfig
    metadata:
      name: asa-r2-nvme-backend-tbc
    spec:
      version: 1
      backendName: asa-r2-nvme-backend
      storageDriverName: ontap-san
      managementLIF: 1.1.1.1
      svm: svm0
      sanType: nvme
      credentials:
        name: asa-r2-nvme-secret
    
    # Or, Trident backend json
    [root@scs000711920 demo]# cat backend.json 
    {
        "version": 1,
        "storageDriverName": "ontap-san",
        "managementLIF": "1.1.1.1",
        "backendName": "asa-r2-nvme-backend",
        "svm": "svm0",
        "username": "<username>",
        "password": "<password>",
        "sanType": "nvme"
    }

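Note that NVMe/TCP also has host-side prerequisites: each Kubernetes worker node needs the NVMe/TCP kernel transport and the nvme-cli package (the exact package name varies by distribution). A quick sanity check on a node might look like the following; these are standard Linux commands, not part of Trident itself.

```shell
# Run on each Kubernetes worker node
sudo modprobe nvme_tcp     # load the NVMe/TCP transport (no-op if already loaded or built in)
lsmod | grep nvme          # confirm nvme_tcp and nvme_fabrics are present
nvme version               # Trident's NVMe support relies on nvme-cli being installed
cat /etc/nvme/hostnqn      # every node needs a unique host NQN for the NVMe subsystem
```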
     

  • Step 2: Add the back end. Once the back end is configured, you can add it to Trident using either kubectl or tridentctl. These tools provide a convenient way to add the newly configured back end to Trident and make it available for use.

    # Create Trident Backend via kubectl
    [root@scs000711920 demo]# kubectl create -f tbc.yaml -n trident
    tridentbackendconfig.trident.netapp.io/asa-r2-nvme-backend-tbc created
    
    [root@scs000711920 demo]# kubectl get tbc asa-r2-nvme-backend-tbc -n trident
    NAME                      BACKEND NAME          BACKEND UUID                           PHASE   STATUS
    asa-r2-nvme-backend-tbc   asa-r2-nvme-backend   c3ee3907-5dc8-448f-abe6-d5c7621481eb   Bound   Success
    
    [root@scs000711920 demo]# tridentctl -n trident get b
    +---------------------+----------------+--------------------------------------+--------+------------+---------+
    |        NAME         | STORAGE DRIVER |                 UUID                 | STATE  | USER-STATE | VOLUMES |
    +---------------------+----------------+--------------------------------------+--------+------------+---------+
    | asa-r2-nvme-backend | ontap-san      | 934dbf99-7fb2-4a3b-905f-2a9fb2dce7c2 | online | normal     |       0 |
    +---------------------+----------------+--------------------------------------+--------+------------+---------+
    
    # Or, create Trident Backend via tridentctl
    [root@scs000711920 demo]# tridentctl create b -f backend.json -n trident
    +---------------------+----------------+--------------------------------------+--------+------------+---------+
    |        NAME         | STORAGE DRIVER |                 UUID                 | STATE  | USER-STATE | VOLUMES |
    +---------------------+----------------+--------------------------------------+--------+------------+---------+
    | asa-r2-nvme-backend | ontap-san      | 7111337e-a27e-4cde-8707-0f66892614e2 | online | normal     |       0 |
    +---------------------+----------------+--------------------------------------+--------+------------+---------+

 

  • Step 3: Define a storage class. Create a storage class that corresponds to the type of storage driver you require. This step allows you to define the characteristics of the storage you want to dynamically provision.

     

    [root@scs000711920 demo]# cat sc.yaml 
    apiVersion: storage.k8s.io/v1
    kind: StorageClass
    metadata:
      name: asa-r2-nvme-sc
    parameters:
      backendType: ontap-san
      storagePools: "asa-r2-nvme-backend:.*"
    provisioner: csi.trident.netapp.io
    
    [root@scs000711920 demo]# kubectl create -f sc.yaml 
    storageclass.storage.k8s.io/asa-r2-nvme-sc created
    
    [root@scs000711920 demo]# kubectl get sc
    NAME             PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
    asa-r2-nvme-sc   csi.trident.netapp.io   Delete          Immediate           false                  3s

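The storage class above accepts defaults for everything else. If you expect to resize volumes later, a hypothetical variant (the name here is illustrative) can enable online expansion and pin the filesystem type:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: asa-r2-nvme-sc-expandable   # illustrative name
provisioner: csi.trident.netapp.io
parameters:
  backendType: ontap-san
  storagePools: "asa-r2-nvme-backend:.*"
  fsType: "ext4"                    # filesystem created on the NVMe namespace
allowVolumeExpansion: true          # lets you grow bound PVCs in place
```

With a class like this, editing a bound PVC’s spec.resources.requests.storage to a larger value asks Trident to expand the volume online.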
     


    Now that everything is ready for dynamic provisioning of storage on the ASA r2 system, let's see an example of how to use it.

 

  • Step 4: Create a PVC. Define a PersistentVolumeClaim (PVC) that specifies the amount of storage you need and references the appropriate storage class. This step ensures that your Kubernetes application has access to the required block storage.

     

    [root@scs000711920 demo]# cat pvc.yaml 
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: asa-r2-nvme-pvc
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
      storageClassName: asa-r2-nvme-sc
    
    [root@scs000711920 demo]# kubectl create -f pvc.yaml 
    persistentvolumeclaim/asa-r2-nvme-pvc created

 

  • Step 5: Confirm the PVC binding. After the PVC is created, verify that it’s successfully bound to a persistent volume (PV). This confirmation ensures that the block storage is ready for use by your applications.

     

    [root@scs000711920 demo]# kubectl get pvc
    NAME              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     VOLUMEATTRIBUTESCLASS   AGE
    asa-r2-nvme-pvc   Bound    pvc-e6f4d487-a3fe-45e5-a15c-e557a07b026e   1Gi        RWO            asa-r2-nvme-sc   <unset>                 7s

 

  • Step 6: Use the PVC. Congratulations! You’re now ready to use the PVC in any pod of your choice. Mount the PVC in your pod’s specification, and your application will have seamless access to the high-performance block storage provided by the ASA r2 systems using NVMe/TCP.

    [root@scs000711920 demo]# cat pod.yaml 
    apiVersion: v1
    kind: Pod
    metadata:
      name: asa-r2-nvme-pod
    spec:
      containers:
      - image: nginx:alpine
        name: nginx
        volumeMounts:
        - mountPath: /mnt/pvc
          name: local-storage
      nodeSelector:
        kubernetes.io/arch: amd64
        kubernetes.io/os: linux
      volumes:
      - name: local-storage
        persistentVolumeClaim:
          claimName: asa-r2-nvme-pvc
    
    [root@scs000711920 demo]# kubectl create -f pod.yaml 
    pod/asa-r2-nvme-pod created
    
    [root@scs000711920 demo]# kubectl get po
    NAME              READY   STATUS    RESTARTS   AGE
    asa-r2-nvme-pod   1/1     Running   0          5s
    
    [root@scs000711920 demo]# kubectl exec -it asa-r2-nvme-pod -- /bin/ash -c "mount | fgrep mnt/pvc" 
    /dev/nvme0n1 on /mnt/pvc type ext4 (rw,seclabel,relatime,stripe=256)
    

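If you’re curious about the plumbing, you can also inspect the NVMe session from the worker node that hosts the pod. A sketch, assuming nvme-cli is installed on the node (device names and output vary by environment):

```shell
# Run on the worker node where the pod is scheduled
nvme list-subsys     # shows the subsystem NQN and its tcp transport paths to the SVM
nvme list            # the namespace backing the PV appears as a /dev/nvmeXnY device
lsblk -f             # confirms the filesystem on that device and its mount point
```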
 

Troubleshooting made easy

 

If you encounter any issues, we’ve got you covered with some handy commands for troubleshooting. Use kubectl describe on the problematic resource to gather detailed information. For more insights into the system’s behavior, you can check the Trident logs by using kubectl or tridentctl.

 

[root@scs000711920 demo]# kubectl get pvc
NAME              STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS     VOLUMEATTRIBUTESCLASS   AGE
asa-r2-nvme-pvc   Bound    pvc-e6f4d487-a3fe-45e5-a15c-e557a07b026e   1Gi        RWO            asa-r2-nvme-sc   <unset>                 7s


[root@scs000711920 demo]# kubectl describe pvc asa-r2-nvme-pvc
Name:          asa-r2-nvme-pvc
Namespace:     default
StorageClass:  asa-r2-nvme-sc
Status:        Bound
Volume:        pvc-e6f4d487-a3fe-45e5-a15c-e557a07b026e
Labels:        <none>
Annotations:   pv.kubernetes.io/bind-completed: yes
               pv.kubernetes.io/bound-by-controller: yes
               volume.beta.kubernetes.io/storage-provisioner: csi.trident.netapp.io
               volume.kubernetes.io/storage-provisioner: csi.trident.netapp.io
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:      1Gi
Access Modes:  RWO
VolumeMode:    Filesystem
Used By:       <none>
Events:
  Type    Reason                 Age   From                                                                                            Message
  ----    ------                 ----  ----                                                                                            -------
  Normal  ExternalProvisioning   16s   persistentvolume-controller                                                                     Waiting for a volume to be created either by the external provisioner 'csi.trident.netapp.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
  Normal  Provisioning           16s   csi.trident.netapp.io_trident-controller-55549b957f-9xrjr_778ac595-1728-4c2d-844a-8879678a67fb  External provisioner is provisioning volume for claim "default/asa-r2-nvme-pvc"
  Normal  ProvisioningSuccess    10s   csi.trident.netapp.io                                                                           provisioned a volume
  Normal  ProvisioningSucceeded  10s   csi.trident.netapp.io_trident-controller-55549b957f-9xrjr_778ac595-1728-4c2d-844a-8879678a67fb  Successfully provisioned volume pvc-e6f4d487-a3fe-45e5-a15c-e557a07b026e

[root@scs000711920 demo]# kubectl -n trident get po
NAME                                  READY   STATUS    RESTARTS   AGE
trident-controller-55549b957f-9xrjr   6/6     Running   0          28h
trident-node-linux-5zk4g              2/2     Running   0          2d

[root@scs000711920 demo]# kubectl logs trident-controller-55549b957f-9xrjr -n trident
Defaulted container "trident-main" out of: trident-main, trident-autosupport, csi-provisioner, csi-attacher, csi-resizer, csi-snapshotter
time="2025-06-26T06:12:00Z" level=trace msg=">>>> ReconcileNodeAccess" Method=ReconcileNodeAccess Nodes="[]" Type=ASAStorageDriver requestID=f824e6af-8162-491d-91e2-a81ad8160f19 requestSource=Periodic

[root@scs000711920 demo]# tridentctl -n trident logs
trident-controller log:
time="2025-06-26T03:30:08Z" level=debug msg="Node updated in cache." logLayer=csi_frontend name=scs000757455 requestID=a15aa399-13c5-49c3-9fea-9fab19c0290b requestSource=Kubernetes workflow="node=update"

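If a back end misbehaves, the same two tools go deeper: kubectl describe on the TridentBackendConfig surfaces configuration and credential problems in its events, and tridentctl can dump the full backend object that Trident built from it:

```shell
# Inspect the TBC for misconfiguration (events appear at the bottom of the output)
kubectl describe tbc asa-r2-nvme-backend-tbc -n trident

# Dump the backend object as JSON, including what Trident discovered about the SVM
tridentctl -n trident get backend asa-r2-nvme-backend -o json
```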
 

Conclusion

 

With the power of NetApp ASA r2 systems and the integration provided by NetApp Trident, provisioning high-performance block storage for your Kubernetes applications has never been easier. With NVMe/TCP support for ASA r2 systems, Trident can deliver low-latency, scalable storage to your Kubernetes workloads with exceptional efficiency and flexibility. We hope this article has equipped you with the knowledge and confidence to configure your Kubernetes workloads on ASA r2 systems using NVMe/TCP. Happy configuring, and may your Kubernetes journey be smooth and successful.
