Tech ONTAP Blogs

Unlock Seamless iSCSI Storage Integration: A Guide to FSxN on ROSA Clusters


Banu Sundhar, NetApp

Mayur Shetty, Red Hat

 

In a previous blog, I introduced an exciting feature in the Trident 25.02 release that simplifies preparing the worker nodes of an OpenShift cluster for iSCSI workloads. This enhancement eliminates manual node preparation, streamlining the process for Kubernetes cluster worker nodes and benefiting users of Red Hat OpenShift Service on AWS (ROSA). With this feature, provisioning persistent volumes for various workloads, including virtual machines on OpenShift Virtualization running on bare-metal nodes within a ROSA cluster, becomes effortless.

 

In this blog, we will provide a comprehensive guide on provisioning FSx for NetApp ONTAP (FSxN) on AWS and utilizing it to provision storage for containers and virtual machines running on ROSA clusters. Join us as we walk you through the installation and configuration of Trident 25.02, showcasing how to create container applications and virtual machines on ROSA clusters using iSCSI volumes. Additionally, we will demonstrate that Trident supports the RWX access mode for iSCSI volumes in Block mode, enabling live migration of VMs created with iSCSI storage. Get ready to unlock seamless storage integration and enhance your ROSA cluster deployments!

 

ROSA clusters with FSxN storage

ROSA integrates seamlessly with Amazon FSx for NetApp ONTAP (FSxN), a fully managed, scalable shared storage service built on NetApp's renowned ONTAP file system. With FSxN, customers can leverage key features such as snapshots, FlexClones, cross-region replication with SnapMirror, and a highly available file server that supports seamless failover. The integration with the NetApp Trident driver—a dynamic Container Storage Interface (CSI) provisioner—facilitates the management of Kubernetes Persistent Volume Claims (PVCs) on storage disks. The driver automates on-demand provisioning of storage volumes across diverse deployment environments, making it simpler to scale and protect data for your applications. One key benefit of FSxN is that it is a true first-party AWS offering, just like EBS: customers can retire their committed spend with AWS against it and get support directly from AWS as well.

 

Solution overview

This diagram shows the ROSA cluster deployed across multiple AZs. The ROSA cluster's master and infrastructure nodes are in Red Hat's VPC, while the worker nodes are in a VPC in the customer's account. We'll create an FSxN file system within the same VPC as the worker nodes and install the Trident provisioner in the ROSA cluster, allowing all subnets of this VPC to connect to the file system.

 

Screenshot 2025-03-05 at 10.06.46 AM.png

 

Prerequisites

 

1. Verify iSCSI status on the ROSA cluster nodes (optional)

Log in to a ROSA cluster worker node to view the status of the iSCSI daemon (iscsid) and the multipathing daemon (multipathd), and the contents of the multipath.conf file. You will see that iSCSI and multipathing are not configured. Without this configuration, volumes created in ONTAP cannot be mounted by application pods through Trident.

The screenshot below shows the commands and their outputs on one worker node; repeat them on all worker nodes. This step simply illustrates that the ROSA worker nodes are not yet prepared for iSCSI workloads.

Screenshot 2025-03-05 at 10.08.10 AM.png
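For reference, the checks above can be scripted from a workstation logged in to the cluster; a minimal sketch (the node name is hypothetical — list yours with oc get nodes):

```shell
# Check iSCSI/multipath state on one worker node before installing Trident.
# NODE is a hypothetical worker node name.
NODE="ip-10-0-1-23.ec2.internal"
if command -v oc >/dev/null 2>&1; then
  oc debug node/"$NODE" -- chroot /host sh -c \
    'systemctl is-active iscsid multipathd; cat /etc/multipath.conf'
else
  echo "oc CLI not found; run this from a workstation logged in to the ROSA cluster"
fi
```

Before node preparation, both daemons typically report inactive and multipath.conf is absent.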

2. Provision FSx for NetApp ONTAP

Create a multi-AZ FSx for NetApp ONTAP in the same VPC as the ROSA cluster.

There are several ways to do this. Here, we show the creation of FSxN using a CloudFormation stack.

 

a. Clone the GitHub repository
# git clone https://github.com/aws-samples/rosa-fsx-netapp-ontap.git


b. Run the CloudFormation Stack
Run the command below by replacing the parameter values with your own values:

# cd rosa-fsx-netapp-ontap/fsx
aws cloudformation create-stack \
  --stack-name ROSA-FSXONTAP \
  --template-body file://./FSxONTAP.yaml \
  --region <region-name> \
  --parameters \
  ParameterKey=Subnet1ID,ParameterValue=[subnet1_ID] \
  ParameterKey=Subnet2ID,ParameterValue=[subnet2_ID] \
  ParameterKey=myVpc,ParameterValue=[VPC_ID] \
  ParameterKey=FSxONTAPRouteTable,ParameterValue=[routetable1_ID,routetable2_ID] \
  ParameterKey=FileSystemName,ParameterValue=ROSA-myFSxONTAP \
  ParameterKey=ThroughputCapacity,ParameterValue=1024 \
  ParameterKey=FSxAllowedCIDR,ParameterValue=[your_allowed_CIDR] \
  ParameterKey=FsxAdminPassword,ParameterValue=[Define Admin password] \
  ParameterKey=SvmAdminPassword,ParameterValue=[Define SVM password] \
  --capabilities CAPABILITY_NAMED_IAM

Where:
region-name: the region where the ROSA cluster is deployed
subnet1_ID: ID of the preferred subnet for FSxN
subnet2_ID: ID of the standby subnet for FSxN
VPC_ID: ID of the VPC where the ROSA cluster is deployed
routetable1_ID, routetable2_ID: IDs of the route tables associated with the subnets chosen above
your_allowed_CIDR: allowed CIDR range for the ingress rules of the FSx for ONTAP security groups, which control access. You can use 0.0.0.0/0 or any appropriate CIDR to allow all traffic to reach the specific ports of FSx for ONTAP.
Define Admin password: the password for logging in to FSxN as fsxadmin
Define SVM password: the password for logging in to the SVM that will be created

 

Verify that your file system and storage virtual machine (SVM) have been created using the Amazon FSx console.
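Besides the console, the same check can be done with the AWS CLI; a sketch assuming your credentials are configured (the region value is illustrative):

```shell
# List FSx for ONTAP file systems and their SVMs in the deployment region.
REGION="us-east-1"   # hypothetical; use the ROSA cluster's region
if command -v aws >/dev/null 2>&1; then
  aws fsx describe-file-systems --region "$REGION" \
    --query 'FileSystems[?FileSystemType==`ONTAP`].[FileSystemId,Lifecycle]' \
    --output table
  aws fsx describe-storage-virtual-machines --region "$REGION" \
    --query 'StorageVirtualMachines[].[Name,Lifecycle]' --output table
else
  echo "aws CLI not found"
fi
```

Both the file system and the SVM should show a Lifecycle of AVAILABLE once the stack completes.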

 

3. Install Trident CSI driver for the ROSA cluster

a. Install Trident using tridentctl with the node-prep flag. This flag is available starting with the 25.02 release.

For additional methods of installing Trident, refer to the Trident documentation.

Ensure that all Trident pods are running after the installation is successful.
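The install and the post-install check can be sketched as follows. The --node-prep=iscsi spelling is based on the Trident 25.02 release; confirm it against the Trident documentation for your version:

```shell
# Install Trident 25.02 with automatic worker-node preparation for iSCSI,
# then confirm the Trident pods are up. "trident" is the conventional namespace.
NS="trident"
if command -v tridentctl >/dev/null 2>&1; then
  tridentctl install -n "$NS" --node-prep=iscsi
  oc get pods -n "$NS"
else
  echo "tridentctl not found; download the 25.02 binary from the Trident release assets"
fi
```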

Screenshot 2025-03-05 at 10.15.30 AM.png

Screenshot 2025-03-05 at 10.16.18 AM.png

Screenshot 2025-03-05 at 10.18.31 AM.png

Now check the worker nodes for the iSCSI and multipath status. The Trident 25.02 installation with the node-prep flag should have started iscsid and multipathd. Note that multipathing is set up only for NetApp devices, as seen in the multipath.conf file.

Screenshot 2025-03-05 at 10.20.31 AM.png

4. Configure the Trident CSI backend to use FSx for NetApp ONTAP (ONTAP SAN for iSCSI)

The Trident backend configuration tells Trident how to communicate with the storage system (in this case, FSxN). To create the backend, we provide the credentials used to connect, along with the cluster management LIF and the SVM to use for storage provisioning. We use the ontap-san driver to provision storage volumes in the FSxN file system.

 

1. Create the backend object

Create the backend object using the command shown and the following yaml.

 

# cat tbc-fsx-san.yaml
apiVersion: v1
kind: Secret
metadata:
  name: tbc-fsx-san-secret
type: Opaque
stringData:
  username: fsxadmin
  password: <value provided for FsxAdminPassword as a parameter to the CloudFormation Stack>
---
apiVersion: trident.netapp.io/v1
kind: TridentBackendConfig
metadata:
  name: tbc-fsx-san
spec:
  version: 1
  storageDriverName: ontap-san
  managementLIF: <management lif of the file system in AWS>
  backendName: tbc-fsx-san
  svm: <SVM name that is created in the file system>
  defaults:
    storagePrefix: demo
    nameTemplate: "{{ .config.StoragePrefix }}_{{ .volume.Namespace }}_{{ .volume.RequestName }}"
  credentials:
    name: tbc-fsx-san-secret

# oc apply -f tbc-fsx-san.yaml

 

Note:

  1. For the Secret, you can also retrieve the password created for FSxN from AWS Secrets Manager, as shown below.

    Screenshot 2025-03-05 at 10.23.34 AM.png

    Screenshot 2025-03-05 at 10.23.09 AM.png

  2. You can get the management LIF and the SVM name from the Amazon FSx console, as shown in the screenshot below.

    Screenshot 2025-03-05 at 10.25.20 AM.png

 

2. Verify that the backend object has been created, with Phase showing Bound and Status showing Success

Screenshot 2025-03-05 at 10.26.19 AM.png
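From the command line, the same check can be sketched as follows (the namespace is assumed to be trident):

```shell
# Show the TridentBackendConfig and the backend it created; the config's Phase
# should be Bound and the backend's Status should be success.
NS="trident"
if command -v oc >/dev/null 2>&1; then
  oc get tridentbackendconfig tbc-fsx-san -n "$NS"
  oc get tridentbackend -n "$NS"
else
  echo "oc CLI not found"
fi
```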

3. Create Storage Class for iSCSI

Now that the Trident backend is configured, you can create a Kubernetes storage class that uses it. A storage class is a resource object made available to the cluster; it describes and classifies the type of storage that an application can request.

 

 

# cat sc-fsx-san.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sc-fsx-san
provisioner: csi.trident.netapp.io
parameters:
  backendType: "ontap-san"
  media: "ssd"
  provisioningType: "thin"
  fsType: ext4
  snapshots: "true"
  storagePools: "tbc-fsx-san:.*"
allowVolumeExpansion: true

# oc create -f sc-fsx-san.yaml

 

 

4. Verify storage class is created

Screenshot 2025-03-05 at 10.27.54 AM.png

5. Create a Snapshot class in Trident so that CSI snapshots can be taken

 

# cat snapshotclass.yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: trident-snapshotclass
driver: csi.trident.netapp.io
deletionPolicy: Retain

# oc create -f snapshotclass.yaml

 

Screenshot 2025-03-05 at 10.28.48 AM.png
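With the class in place, a CSI snapshot of any Trident-provisioned PVC can be requested with a VolumeSnapshot object; a minimal sketch (the PVC name is hypothetical):

```shell
# Write a VolumeSnapshot manifest referencing the snapshot class created above.
cat <<'EOF' > pvc-snapshot.yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: postgres-pvc-snap
spec:
  volumeSnapshotClassName: trident-snapshotclass
  source:
    persistentVolumeClaimName: postgres-pvc   # hypothetical PVC to snapshot
EOF
# oc create -f pvc-snapshot.yaml -n <namespace of the PVC>
```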

This completes the installation of the Trident CSI driver and its connectivity to the FSxN file system using iSCSI.

 

Using iSCSI storage for container apps on ROSA

1. Deploying a PostgreSQL application using the iSCSI storage class

a. Use the following YAML file to deploy the PostgreSQL app

 

# cat postgres-san.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:14
        env:
        - name: POSTGRES_USER
          value: "admin"
        - name: POSTGRES_PASSWORD
          value: "adminpass"
        - name: POSTGRES_DB
          value: "mydb"
        - name: PGDATA
          value: "/var/lib/postgresql/data/pgdata"
        ports:
        - containerPort: 5432
        volumeMounts:
        - name: postgres-storage
          mountPath: /var/lib/postgresql/data
      volumes:
      - name: postgres-storage
        persistentVolumeClaim:
          claimName: postgres-pvc


---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: sc-fsx-san


---
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  selector:
    app: postgres
  ports:
  - protocol: TCP
    port: 5432
    targetPort: 5432
  type: ClusterIP

# oc create namespace postgres-san
# oc create -f postgres-san.yaml -n postgres-san
deployment.apps/postgres created
persistentvolumeclaim/postgres-pvc created
service/postgres created

 

 

b. Verify that the application pod is running, and that a PVC and PV are created for the application. Note that the PVC uses the SAN storage class previously created for iSCSI.

Screenshot 2025-03-05 at 10.33.56 AM.png
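A quick way to run these checks from the CLI, assuming the postgres-san namespace used above:

```shell
# The pod should be Running; the PVC should be Bound to a PV backed by sc-fsx-san.
NS="postgres-san"
if command -v oc >/dev/null 2>&1; then
  oc get pods -n "$NS"
  oc get pvc,pv -n "$NS"
else
  echo "oc CLI not found"
fi
```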

c. Verify that iSCSI sessions are created on the node where the pod runs.

Screenshot 2025-03-05 at 10.35.17 AM.png

d. Verify that a LUN is created

Verify that a LUN is created on the volume in FSxN for this application and that the LUN is mapped. You can log in to the FSxN CLI using fsxadmin and the password you previously created.

Screenshot 2025-03-05 at 10.35.56 AM.png

Screenshot 2025-03-05 at 10.36.10 AM.png

Using iSCSI storage for VMs on OpenShift Virtualization in ROSA

1. Verify that you have bare-metal worker nodes in the cluster.

To create VMs, you need bare-metal nodes in the ROSA cluster.

Screenshot 2025-03-05 at 10.39.51 AM.png

2. Install OpenShift Virtualization using the Operator

You can install OpenShift Virtualization using the OpenShift Virtualization Operator from OperatorHub. Once it is installed and configured, a Virtualization section appears in the OpenShift console UI.

Screenshot 2025-03-05 at 10.41.21 AM.png

Screenshot 2025-03-05 at 10.41.36 AM.png

3. Deploy a VM using iSCSI storage class

Click Create VirtualMachine and select From template.

Select the Fedora VM. You can choose any OS that has a source available.

Screenshot 2025-03-05 at 10.42.27 AM.png

4. Customize the VM

Customize the VM to provide the storage class for the boot disk and create additional disks with the selected storage class.

Click on Customize VirtualMachine.

 

Screenshot 2025-03-05 at 10.43.44 AM.png

5. Click on the Disks tab and click on Edit for the root disk

Screenshot 2025-03-05 at 10.44.29 AM.png

6. Ensure you have selected sc-fsx-san as the storage class.

Select Shared Access (RWX) for Access mode and Block for Volume mode. Trident supports the RWX access mode for iSCSI storage with volume mode Block, and this combination is required on the disks' PVCs for live migration, which moves a running VM from one worker node to another.

Screenshot 2025-03-05 at 10.45.16 AM.png

Note:

  1. If you check Apply optimized StorageProfile settings, RWX access mode and Block volume mode are chosen automatically.
  2. If sc-fsx-san were set as the default storage class in the cluster, it would be picked automatically.
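The combination the wizard configures corresponds to a PVC like the following sketch (the name and size are illustrative):

```shell
# A VM disk PVC with the settings required for live migration:
# ReadWriteMany access plus Block volume mode, on the iSCSI storage class.
cat <<'EOF' > vm-disk-pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fedora-vm-rootdisk   # illustrative name
spec:
  accessModes:
    - ReadWriteMany          # RWX, required for live migration
  volumeMode: Block          # Trident supports RWX for iSCSI only in Block mode
  resources:
    requests:
      storage: 30Gi
  storageClassName: sc-fsx-san
EOF
```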

7. Add another disk

Click Add disk and select an empty disk (since this is just an example). Ensure that the sc-fsx-san storage class is chosen and that Apply optimized StorageProfile settings is checked.

Click Save, and then click Create VirtualMachine. The VM comes to a running state.

Screenshot 2025-03-05 at 10.46.45 AM.png

Screenshot 2025-03-05 at 11.00.37 AM.png

Screenshot 2025-03-05 at 10.47.20 AM.png

8. Check the VM pods and PVCs. Verify that the PVCs are created using the iSCSI storage class and the RWX access mode.

Screenshot 2025-03-05 at 10.47.48 AM.png

9. Verify that a LUN is created in each volume corresponding to the disk PVCs by logging in to the FSxN CLI.

Screenshot 2025-03-05 at 10.48.34 AM.png

 

Conclusion

In this blog, we successfully demonstrated how to integrate FSx for NetApp ONTAP as shared storage with a ROSA cluster using a hosted control plane, leveraging the NetApp Trident CSI driver for iSCSI. We showed how Trident 25.02 streamlines the preparation of worker nodes by configuring iSCSI and multipathing automatically. Our step-by-step guide detailed the configuration of the Trident backend and storage class for iSCSI, and how to use them to create containers and VMs. We emphasized that the ontap-san driver supports the RWX access mode with volume mode Block for iSCSI, making it ideal for VM disks in OpenShift Virtualization and enabling live migration of VMs.

 

For further information on Trident, please refer to the NetApp Trident documentation. Additionally, you can find more resources, including detailed guides and videos, in the Red Hat OpenShift with NetApp section under Containers in the NetApp Solutions documentation. To clean up the setup from this post, follow the instructions provided in the GitHub repository.

 

 
