Tech ONTAP Blogs
Banu Sundhar, NetApp
a. Clone the GitHub repository
# git clone https://github.com/aws-samples/rosa-fsx-netapp-ontap.git
b. Run the CloudFormation Stack
Run the following command, replacing the parameter values with your own:
# cd rosa-fsx-netapp-ontap/fsx
# aws cloudformation create-stack \
--stack-name ROSA-FSXONTAP \
--template-body file://./FSxONTAP.yaml \
--region <region-name> \
--parameters \
ParameterKey=Subnet1ID,ParameterValue=[subnet1_ID] \
ParameterKey=Subnet2ID,ParameterValue=[subnet2_ID] \
ParameterKey=myVpc,ParameterValue=[VPC_ID] \
ParameterKey=FSxONTAPRouteTable,ParameterValue=[routetable1_ID,routetable2_ID] \
ParameterKey=FileSystemName,ParameterValue=ROSA-myFSxONTAP \
ParameterKey=ThroughputCapacity,ParameterValue=1024 \
ParameterKey=FSxAllowedCIDR,ParameterValue=[your_allowed_CIDR] \
ParameterKey=FsxAdminPassword,ParameterValue=[Define Admin password] \
ParameterKey=SvmAdminPassword,ParameterValue=[Define SVM password] \
--capabilities CAPABILITY_NAMED_IAM
Where:
region-name: the same region where the ROSA cluster is deployed
subnet1_ID: ID of the preferred subnet for FSxN
subnet2_ID: ID of the standby subnet for FSxN
VPC_ID: ID of the VPC where the ROSA cluster is deployed
routetable1_ID, routetable2_ID: IDs of the route tables associated with the subnets chosen above
your_allowed_CIDR: allowed CIDR range for the ingress rules of the FSx for ONTAP security groups, used to control access. You can use 0.0.0.0/0, or any appropriate CIDR, to allow all traffic to access the specific ports of FSx for ONTAP.
Define Admin password: a password to log in to FSxN
Define SVM password: a password to log in to the SVM that will be created
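Once the stack reports CREATE_COMPLETE, you can retrieve its outputs from the CLI to collect the values you will need later, such as the file system's management LIF and the SVM name. The sketch below is illustrative; verify the exact output key names against the FSxONTAP.yaml template or the stack outputs in the AWS console:
# aws cloudformation wait stack-create-complete --stack-name ROSA-FSXONTAP --region <region-name>
# aws cloudformation describe-stacks --stack-name ROSA-FSXONTAP --region <region-name> --query "Stacks[0].Outputs"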
# cat tbc-fsx-san.yaml
apiVersion: v1
kind: Secret
metadata:
  name: tbc-fsx-san-secret
type: Opaque
stringData:
  username: fsxadmin
  password: <value provided for the FsxAdminPassword (Define Admin password) parameter to the CloudFormation stack>
---
apiVersion: trident.netapp.io/v1
kind: TridentBackendConfig
metadata:
  name: tbc-fsx-san
spec:
  version: 1
  storageDriverName: ontap-san
  managementLIF: <management LIF of the file system in AWS>
  backendName: tbc-fsx-san
  svm: <SVM name that is created in the file system>
  storagePrefix: demo
  defaults:
    nameTemplate: "{{ .config.StoragePrefix }}_{{ .volume.Namespace }}_{{ .volume.RequestName }}"
  credentials:
    name: tbc-fsx-san-secret
# oc apply -f tbc-fsx-san.yaml
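Before moving on, verify that Trident has bound the backend. Assuming Trident is installed in the trident namespace, the PHASE column should read Bound and the STATUS column should read Success:
# oc get tridentbackendconfig tbc-fsx-san -n trident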
Now that the Trident backend is configured, you can create a Kubernetes storage class that uses it. A storage class is a cluster-wide resource object that describes and classifies the type of storage an application can request.
# cat sc-fsx-san.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: sc-fsx-san
provisioner: csi.trident.netapp.io
parameters:
  backendType: "ontap-san"
  media: "ssd"
  provisioningType: "thin"
  fsType: ext4
  snapshots: "true"
  storagePools: "tbc-fsx-san:.*"
allowVolumeExpansion: true
# oc create -f sc-fsx-san.yaml
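You can confirm the storage class is registered with:
# oc get storageclass sc-fsx-san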
# cat snapshotclass.yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: trident-snapshotclass
driver: csi.trident.netapp.io
deletionPolicy: Retain
# oc create -f snapshotclass.yaml
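With the snapshot class in place, snapshotting any Trident-provisioned PVC takes a single object. As an illustration only (not a step in this walkthrough), a VolumeSnapshot of the PostgreSQL PVC created later in this post would look like this:
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: postgres-snap
  namespace: postgres-san
spec:
  volumeSnapshotClassName: trident-snapshotclass
  source:
    persistentVolumeClaimName: postgres-pvc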
This completes the installation of the Trident CSI driver and its connectivity to the FSxN file system using iSCSI.
# cat postgres-san.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: postgres
spec:
  replicas: 1
  selector:
    matchLabels:
      app: postgres
  template:
    metadata:
      labels:
        app: postgres
    spec:
      containers:
      - name: postgres
        image: postgres:14
        env:
        - name: POSTGRES_USER
          value: "admin"
        - name: POSTGRES_PASSWORD
          value: "adminpass"
        - name: POSTGRES_DB
          value: "mydb"
        - name: PGDATA
          value: "/var/lib/postgresql/data/pgdata"
        ports:
        - containerPort: 5432
        volumeMounts:
        - name: postgres-storage
          mountPath: /var/lib/postgresql/data
      volumes:
      - name: postgres-storage
        persistentVolumeClaim:
          claimName: postgres-pvc
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: postgres-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
  storageClassName: sc-fsx-san
---
apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  selector:
    app: postgres
  ports:
  - protocol: TCP
    port: 5432
    targetPort: 5432
  type: ClusterIP
# oc create namespace postgres-san
# oc create -f postgres-san.yaml -n postgres-san
deployment.apps/postgres created
persistentvolumeclaim/postgres-pvc created
service/postgres created
Verify that a PVC and a PV are created for the application. Note that the PVC uses the SAN storage class previously created for iSCSI.
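A quick CLI check (PVs are cluster scoped, so the PV shows up regardless of namespace):
# oc get pvc -n postgres-san
# oc get pv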
Verify that a LUN is created on the volume in FSxN for this application and that the LUN is mapped. You can log in to the FSxN CLI as fsxadmin with the password you previously created.
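A sketch of those checks from the ONTAP CLI; the volume and LUN names will follow the nameTemplate defined in the Trident backend, so adjust accordingly:
# ssh fsxadmin@<management LIF>
::> lun show -vserver <SVM name>
::> lun mapping show -vserver <SVM name>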
To be able to create VMs, the ROSA cluster needs bare metal worker nodes.
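If your cluster does not have bare metal nodes yet, you can add a bare metal machine pool with the rosa CLI. A minimal sketch; the pool name, replica count, and instance type below are placeholders, so pick a metal instance type available in your region:
# rosa create machinepool --cluster <cluster-name> --name metal-pool --replicas 3 --instance-type m5zn.metal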
You can install OpenShift Virtualization using the OpenShift Virtualization Operator from OperatorHub. Once it is installed and configured, a Virtualization section appears in the OpenShift console UI.
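Part of that configuration is creating a HyperConverged instance once the operator is installed. The console offers this as a guided step; the equivalent minimal CR, assuming the operator's default openshift-cnv namespace, looks like this:
apiVersion: hco.kubevirt.io/v1beta1
kind: HyperConverged
metadata:
  name: kubevirt-hyperconverged
  namespace: openshift-cnv
spec: {}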
Click on Create VirtualMachine and select From template.
Select the Fedora VM template. You can choose any OS for which a boot source is available.
Customize the VM to provide the storage class for the boot disk and create additional disks with the selected storage class.
Click on Customize VirtualMachine.
Select Shared Access (RWX) for Access mode and Block for Volume mode. Trident supports the RWX access mode for iSCSI storage when the volume mode is Block, and this setting is required on the disk PVCs so that VMs can be live migrated, that is, moved from one worker node to another.
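For reference, the disk PVC that OpenShift Virtualization creates with these settings is effectively equivalent to the following (illustrative only; the name and size are placeholders):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: fedora-vm-disk
spec:
  accessModes:
    - ReadWriteMany
  volumeMode: Block
  resources:
    requests:
      storage: 30Gi
  storageClassName: sc-fsx-san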
Click Add disk and select empty disk (since this is just an example). Ensure that the sc-fsx-san storage class is chosen and that Apply optimized StorageProfile settings is checked.
Click Save and then click Create VirtualMachine. The VM comes to a Running state.
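You can also verify the VM from the CLI and exercise the live migration that the RWX Block disks enable; the namespace and VM name below are placeholders:
# oc get vm,vmi -n <vm-namespace>
# virtctl migrate <vm-name> -n <vm-namespace>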
In this blog, we demonstrated how to integrate FSx for NetApp ONTAP as a shared storage system with a ROSA cluster using a Hosted Control Plane, leveraging the NetApp Trident CSI driver for iSCSI storage. We showed how Trident release 25.02 streamlines worker node preparation by configuring iSCSI and multipathing for ONTAP storage. Our step-by-step guide detailed the configuration of the Trident backend and storage class for iSCSI, and how to use them to create containers and VMs. We emphasized that the ontap-san driver supports the RWX access mode with Block volume mode for iSCSI, making it ideal for VM disks in OpenShift Virtualization and enabling live migration of VMs.
For further information on Trident, please refer to the NetApp Trident documentation. Additionally, you can find more resources, including detailed guides and videos, in the Red Hat OpenShift with NetApp section under Containers in the NetApp Solutions documentation. To clean up the setup from this post, follow the instructions provided in the GitHub repository.