In today's digital age, ransomware attacks are becoming increasingly sophisticated and prevalent, posing significant threats to data security. To combat these threats, NetApp has developed ONTAP Autonomous Ransomware Protection, a specialized security solution that is the only on-box capability, with no third-party integration required, designed to detect and respond to ransomware threats in file share and SAN environments.
In modern SAP landscapes, efficient user and rights management plays a central role – and this is precisely where Central User Management (CUM) comes in. Combined with NFSv4.1, CUM opens up entirely new possibilities for performance, flexibility, and simplified administration. In this blog, learn how to optimize your SAP workloads, avoid typical name resolution challenges, and fully leverage the advantages of NFSv4.1 – from faster data access to cost-effective infrastructure.
Introduction
In the dynamic world of Kubernetes, where pods scale and storage demands fluctuate, efficient resource management isn't just a best practice—it's a necessity. Enter NetApp Trident™, the open-source storage orchestrator that simplifies persistent storage for containerized applications. As a Container Storage Interface (CSI) driver, Trident handles volume provisioning, snapshots, and more across diverse backends like ONTAP, Amazon FSx for NetApp ONTAP (FSxN), Google Cloud NetApp Volumes (GCNV), and Azure NetApp Files (ANF). But even the most robust tools like Trident can falter if they're starved for CPU or memory. That's where Kubernetes resource requests and limits come into play, ensuring your storage operations run smoothly without overwhelming your cluster.
In this post, we'll break down the fundamentals of resource requests and limits and explore ways to configure them for the Trident controller and node pod containers. Whether you're deploying Trident via Helm or customizing it with the TridentOrchestrator CRD, this feature will boost performance, prevent evictions, and optimize costs. Let's dive in.
Kubernetes Resource Requests and Limits: The Basics
At its core, Kubernetes uses resource requests and limits to allocate and cap CPU and memory for containers/pods. These settings live in your pod specs under the resources field and play a dual role: guiding scheduling decisions and enforcing runtime boundaries.
Requests: Requests are guaranteed resources that Kubernetes ensures for the container or pod on a node. If a node doesn't have enough free CPU or memory to meet a pod's request, the pod remains in the Pending state. The scheduler assigns the pod only to a node that can guarantee the requested resources, ensuring predictable performance for critical workloads.
Limits: Limits are the maximum resources that can be utilized by a container/pod. For CPU, exceeding limits triggers kernel throttling (slowing down the process). For memory, it risks an Out-Of-Memory (OOM) kill, where the container gets terminated to protect the node.
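For example, when a pod's requests can't be satisfied, the scheduler records the reason in the pod's events. A quick way to check (the pod name is illustrative, and the exact message varies by cluster):
# Inspect a pod stuck in Pending
kubectl describe pod my-app-7d4b9c8f6-x2x7q
# A typical scheduler event looks like:
#   Warning  FailedScheduling  ...  0/3 nodes are available: 3 Insufficient cpu.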
Here's how they differ by resource type:
Resource | Requests Behavior                                       | Limits Behavior
---------|---------------------------------------------------------|--------------------------------------------------------
CPU      | Defines the minimum guaranteed CPU resources.            | Throttles usage—no termination, but performance dips.
Memory   | Reserves space; pods can exceed if free memory exists.   | Enforces hard caps via OOM kills when pressure builds.
Configuring them is straightforward. In a Deployment or Pod spec, here is an example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: app-container
        image: nginx:1.21
        resources:
          requests:
            memory: "64Mi"
            cpu: "250m"
          limits:
            memory: "128Mi"
            cpu: "500m"
This ensures each pod requests 0.25 CPU and 64Mi memory but won't exceed 0.5 CPU or 128Mi memory. Without limits, your container could theoretically hog the entire node—risky in shared clusters.
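Requests and limits together also determine the pod's Quality of Service (QoS) class; the example above is Burstable, because its requests are lower than its limits. A quick way to confirm (the pod name is illustrative):
kubectl get pod my-app-7d4b9c8f6-x2x7q -o jsonpath='{.status.qosClass}'
# Burstable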
Why Resource Management Matters for Trident
Trident deploys as a controller pod (one instance managing storage operations) and node pods (one per worker node for mounting volumes). These pods include sidecar containers like CSI provisioner, attacher, and resizer, which handle dynamic provisioning and attachment.
Poor resource tuning here can lead to:
Scheduling Failures: Under-requested pods might not fit on nodes, delaying storage ops.
Performance Bottlenecks: Overloaded Trident pods throttle during peak I/O, slowing app responsiveness.
Cluster Instability: Unbounded memory use could trigger evictions, disrupting storage for entire workloads.
By setting requests and limits, you guarantee Trident's core functions (e.g., volume provisioning and volume snapshots) while protecting against rogue consumption—crucial for stateful apps relying on persistent volumes.
Default Resource Configurations in Trident
Trident applies sensible default resource request values for both controller and node pod containers, while intentionally omitting default CPU and memory limits.
Trident's actual resource consumption varies significantly with the size and scale of worker nodes and the number of persistent volumes. Setting restrictive default limits could easily degrade performance or cause pod evictions in typical production deployments. As a result, Trident ships without default limits out of the box, giving operators maximum flexibility.
When running in resource-constrained or multi-tenant environments where strict containment is required, administrators can (and should) add explicit CPU and memory limits to the Trident controller and node pods to match their specific policies and workload characteristics.
Controller Pod Defaults
Container           | CPU Request | Memory Request | CPU Limit | Memory Limit
--------------------|-------------|----------------|-----------|-------------
trident-main        | 10m         | 80Mi           | None      | None
csi-provisioner     | 2m          | 20Mi           | None      | None
csi-attacher        | 2m          | 20Mi           | None      | None
csi-resizer         | 3m          | 20Mi           | None      | None
csi-snapshotter     | 2m          | 20Mi           | None      | None
trident-autosupport | 1m          | 30Mi           | None      | None
Node Pod Defaults (Linux)
Container             | CPU Request | Memory Request | CPU Limit | Memory Limit
----------------------|-------------|----------------|-----------|-------------
trident-main          | 10m         | 60Mi           | None      | None
node-driver-registrar | 1m          | 10Mi           | None      | None
Node Pod Defaults (Windows)
Container             | CPU Request | Memory Request | CPU Limit | Memory Limit
----------------------|-------------|----------------|-----------|-------------
trident-main          | 10m         | 60Mi           | None      | None
node-driver-registrar | 6m          | 40Mi           | None      | None
liveness-probe        | 2m          | 40Mi           | None      | None
Prerequisite
NetApp Trident 25.10 or later
Installing Trident with Specific Resource Requests and Limits
To configure Kubernetes resource requests and limits for individual containers in Trident controller and node pods, users can specify CPU and memory constraints in two ways:
Operator-based deployment: You can specify CPU and memory constraints directly in the TridentOrchestrator (TORC) custom resource using the `spec.resources` field. This allows declarative, GitOps-friendly configuration with granular control over each container's resource requirements.
Helm-based deployment: You can configure resources for the controller, node, and operator pod containers either through the Helm chart's values.yaml file or with the --set flag on the helm command, during installation or upgrade.
Both methods support granular, container-level configuration for all Trident containers, including trident-main, the CSI sidecars (provisioner, attacher, resizer, snapshotter, registrar), and trident-autosupport, across both Linux and Windows node platforms. This enables better resource planning and prevents resource contention. Note: Always preserve exact container names and YAML indentation to avoid parse errors.
Examples:
Helm way of installation: Refer to Deploy Trident operator using Helm for installation steps, and specify resource request and limit values in values.yaml or with the --set flag. Below is a sample values.yaml file for reference.
resources:
  controller:
    trident-main:
      requests:
        cpu: 50m
        memory: 128Mi
      limits:
        cpu: 100m
        memory: 256Mi
    # sidecars
    csi-provisioner:
      requests:
        cpu: 20m
        memory: 50Mi
      limits:
        cpu: 40m
        memory: 100Mi
    csi-attacher:
      requests:
        cpu: 20m
        memory: 25Mi
      limits:
        cpu: 80m
        memory: 70Mi
    csi-resizer:
      requests:
        cpu: 40m
        memory: 30Mi
      limits:
        cpu: 90m
        memory: 75Mi
    csi-snapshotter:
      requests:
        cpu: 30m
        memory: 22Mi
      limits:
        cpu: 85m
        memory: 68Mi
    trident-autosupport:
      requests:
        cpu: 20m
        memory: 35Mi
      limits:
        cpu: 60m
        memory: 130Mi
  node:
    linux:
      trident-main:
        requests:
          cpu: 75m
          memory: 100Mi
        limits:
          cpu: 150m
          memory: 200Mi
      # sidecars
      node-driver-registrar:
        requests:
          cpu: 20m
          memory: 15Mi
        limits:
          cpu: 55m
          memory: 35Mi
    windows:
      trident-main:
        requests:
          cpu: 8m
          memory: 45Mi
        limits:
          cpu: 180m
          memory: 140Mi
      # sidecars
      node-driver-registrar:
        requests:
          cpu: 30m
          memory: 45Mi
        limits:
          cpu: 90m
          memory: 135Mi
      liveness-probe:
        requests:
          cpu: 20m
          memory: 45Mi
        limits:
          cpu: 55m
          memory: 70Mi
  operator:
    requests:
      cpu: 20m
      memory: 60Mi
    limits:
      cpu: 40m
      memory: 120Mi
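As an alternative to editing values.yaml, the same keys can be passed on the command line with --set. A minimal sketch, assuming the standard netapp-trident/trident-operator chart and a release named trident (values are illustrative):
helm upgrade --install trident netapp-trident/trident-operator -n trident \
  --set resources.controller.trident-main.requests.cpu=50m \
  --set resources.controller.trident-main.limits.memory=256Mi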
Operator way of installation: Refer to Customize Trident operator installation to install Trident, and specify resource values in the TridentOrchestrator CR as in the example below:
apiVersion: trident.netapp.io/v1
kind: TridentOrchestrator
metadata:
  name: trident
spec:
  debug: true
  namespace: trident
  windows: true
  resources:
    controller:
      trident-main:
        requests:
          cpu: 10m
          memory: 80Mi
        limits:
          cpu: 200m
          memory: 256Mi
      # sidecars
      csi-provisioner:
        requests:
          cpu: 20m
          memory: 20Mi
        limits:
          cpu: 100m
          memory: 64Mi
      csi-attacher:
        requests:
          cpu: 50m
          memory: 20Mi
        limits:
          cpu: 100m
          memory: 64Mi
      csi-resizer:
        requests:
          cpu: 30m
          memory: 20Mi
        limits:
          cpu: 100m
          memory: 64Mi
      csi-snapshotter:
        requests:
          cpu: 25m
          memory: 20Mi
        limits:
          cpu: 100m
          memory: 64Mi
      trident-autosupport:
        requests:
          cpu: 10m
          memory: 30Mi
        limits:
          cpu: 50m
          memory: 128Mi
    node:
      linux:
        trident-main:
          requests:
            cpu: 10m
            memory: 60Mi
          limits:
            cpu: 200m
            memory: 256Mi
        # sidecars
        node-driver-registrar:
          requests:
            cpu: 10m
            memory: 10Mi
          limits:
            cpu: 50m
            memory: 32Mi
      windows:
        trident-main:
          requests:
            cpu: 60m
            memory: 40Mi
          limits:
            cpu: 200m
            memory: 128Mi
        # sidecars
        node-driver-registrar:
          requests:
            cpu: 40m
            memory: 40Mi
          limits:
            cpu: 100m
            memory: 128Mi
        liveness-probe:
          requests:
            cpu: 20m
            memory: 40Mi
          limits:
            cpu: 50m
            memory: 64Mi
Important Disclaimer: The resource values shown in the examples above are for demonstration purposes only. They are not official recommendations from NetApp. Actual resource requirements vary significantly based on cluster size, number of persistent volumes, I/O workload intensity, backend type, and whether features like snapshots or autosupport are heavily used. Always monitor your Trident pods in a test or staging environment using kubectl top, Prometheus, or similar tools, and adjust requests and limits to match your observed real-world usage.
Post-deployment, you can verify resource values with kubectl describe torc trident -n trident and check status.currentInstallationParams.resources.
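You can also confirm that the values landed on the running pods themselves. A quick sketch, assuming the default trident-controller deployment name:
kubectl get deploy trident-controller -n trident \
  -o jsonpath='{range .spec.template.spec.containers[*]}{.name}{": "}{.resources}{"\n"}{end}'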
Updating Resource Requests and Limits Post-Installation
You might need to adjust CPU and memory requests or limits for controller or node pod containers after installation. The method for updating these values depends on whether you originally installed Trident using the Trident operator (TridentOrchestrator CR) or Helm.
If Trident was installed using the Trident operator, that is, via the TridentOrchestrator (TORC) CR, you can update resource values by running kubectl edit and modifying the spec.resources section, or by using the kubectl patch command.
For installations done through Helm, use the helm upgrade command with updated CPU and memory values in your Helm chart's values file, or directly with the --set flag.
Note: Keep in mind that changing resource requests or limits will trigger a rolling restart of the Trident controller and node pods.
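For example, an operator-based installation could be patched in place with a merge patch; a minimal sketch (the values are illustrative, not recommendations):
kubectl patch torc trident --type merge -p \
  '{"spec":{"resources":{"controller":{"trident-main":{"limits":{"cpu":"300m","memory":"512Mi"}}}}}}'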
Best Practices and Common Pitfalls
Drawing on established Kubernetes practice, here's how to get it right for Trident:
Do's
Monitor and Iterate: Use tools like Prometheus or kubectl top to profile usage, then adjust values based on what you observe (see the sample command after this list).
Test in Staging: Simulate loads to validate limits without OOM kills.
Don'ts
Over-Request: This wastes capacity; Trident is lightweight, so avoid inflating beyond observed needs.
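For instance, per-container usage can be sampled before settling on values (this assumes metrics-server is installed; pod names will differ per cluster):
kubectl top pod -n trident --containers
# Compare the observed usage against your configured requests and limits, then adjust.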
Wrapping Up: Tune, Deploy, Prosper
Resource requests and limits are your Kubernetes Swiss Army knife for Trident, turning potential chaos into controlled efficiency. By understanding the basics, leveraging defaults, and customizing thoughtfully, you'll ensure seamless storage orchestration even under load.
Ready to optimize? Grab your Trident Helm chart or CRD, tweak those resources, and deploy. Your cluster—and your apps—will thank you.
For more on Kubernetes resources, check the K8s official docs. Trident setup details are in NetApp Trident's guides.
As organizations are adopting Kubernetes at scale, simplicity has become one of the key design principles. Users want to do more with their limited time and resources. This is where the NetApp ® ASA r2 systems help users: by simplifying their storage footprint for Kubernetes applications.
NetApp Trident™ software, our Container Storage Interface (CSI)–compliant storage orchestrator, already supports these systems using the iSCSI and NVMe/TCP protocols. With Trident 25.10, that support has been extended to the Fibre Channel Protocol (FCP), bringing the further scalability and performance benefits of FCP to ASA r2 systems.
In this article, we'll explore the cutting-edge features of the ASA r2 systems, demonstrate how to seamlessly integrate them with your Kubernetes workloads, and walk through using them in Trident with the FCP protocol. So, let's dive in.
Introducing the ASA r2 systems
Before we begin, let’s look at what ASA r2 systems are and, at a high level, what they offer. The NetApp ASA r2 systems, which include the NetApp ASA A1K, ASA A90, ASA A70, ASA A50, ASA A30, and ASA A20 models, offer a unified hardware and software solution tailored to the specific needs of SAN-only customers. Built on the new NetApp ONTAP ® design center architecture, these systems introduce a revolutionary approach to storage provisioning, which makes them an ideal choice for modern Kubernetes workloads.
One of the key differentiators of the ASA r2 systems is their disaggregated architecture. Unlike traditional systems, they don't expose the concept of aggregates to the user. Instead, these systems treat LUNs (logical unit numbers) as first-class citizens, eliminating the need to wrap LUNs inside volumes. This innovative design enables a simplified and streamlined experience for managing block storage.
NetApp Trident and CSI integration
To enable integration with ASA r2 systems, NetApp Trident™ software, NetApp's Container Storage Interface (CSI)–compliant storage orchestrator, now supports provisioning block storage using the FCP protocol, starting with the 25.10 release. The best part is that if you're an existing Trident customer, you can seamlessly transition to using ASA r2 systems with minimal changes. All you need to do is provide the right credentials in the Trident backend, and everything else is determined dynamically.
Prerequisites
NetApp Trident 25.10 or later.
Configure zoning on the FC switch using the WWPNs of the host and target (see the example after this list for collecting host WWPNs).
Refer to the respective switch vendor documentation for information.
Refer to the following ONTAP documentation for details:
Fibre Channel and FCoE zoning overview
Ways to configure FC & FC-NVMe SAN hosts
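For reference, here is one way to collect the host-side WWPNs needed for zoning on a Linux host, assuming FC HBAs are present (the output values are illustrative):
cat /sys/class/fc_host/host*/port_name
# 0x10000090fa1b2c3d
# 0x10000090fa1b2c3e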
Provisioning Steps:
Whether or not you’re a new Trident user, the process of provisioning storage with Trident is straightforward. Here are the simple steps.
Step 1: Create a Trident backend. Start by creating a Trident backend with the storage driver ontap-san. You can do this either by configuring the TridentBackendConfig (TBC) custom resource definition using the Kubernetes-native approach or by using a custom JSON file with tridentctl, a command-line utility for managing Trident. You can either use a cluster management IP with administrator credentials or specify a specific storage virtual machine (SVM) with its management IP and credentials. For more information, refer to ONTAP SAN driver details in the Trident documentation.
# Kubernetes secret required for creating Trident backend from TBC
[core@cp1 trident-installer]$ cat secret.yaml
apiVersion: v1
kind: Secret
metadata:
  name: asa-r2-fcp-secret
type: Opaque
stringData:
  username: <username>
  password: <password>
[core@cp1 trident-installer]$ kubectl create -f secret.yaml -n trident
secret/asa-r2-fcp-secret created
[core@cp1 trident-installer]$ kubectl get secret asa-r2-fcp-secret -n trident
NAME                TYPE     DATA   AGE
asa-r2-fcp-secret   Opaque   2      18s
# Kubernetes CR TridentBackendConfig (TBC)
[core@cp1 trident-installer]$ cat tbc.yaml
apiVersion: trident.netapp.io/v1
kind: TridentBackendConfig
metadata:
  name: asa-r2-fcp-backend-tbc
spec:
  version: 1
  backendName: asa-r2-fcp-backend
  storageDriverName: ontap-san
  managementLIF: 1.2.3.4
  svm: svm0
  sanType: fcp
  credentials:
    name: asa-r2-fcp-secret
# Or, Trident backend json
[core@cp1 trident-installer]$ cat backend.json
{
  "version": 1,
  "storageDriverName": "ontap-san",
  "managementLIF": "1.2.3.4",
  "backendName": "asa-r2-fcp-backend",
  "svm": "svm0",
  "username": "<username>",
  "password": "<password>",
  "sanType": "fcp"
}
Step 2: Add the backend. Once the backend is configured, you can add it to Trident using either kubectl or tridentctl. These tools provide a convenient way to add the newly configured backend to Trident and make it available for use.
# Create Trident Backend via kubectl
[core@cp1 trident-installer]$ kubectl create -f tbc.yaml -n trident
tridentbackendconfig.trident.netapp.io/asa-r2-fcp-backend-tbc created
[core@cp1 trident-installer]$ kubectl get tbc asa-r2-fcp-backend-tbc -n trident
NAME BACKEND NAME BACKEND UUID PHASE STATUS
asa-r2-fcp-backend-tbc asa-r2-fcp-backend 36f3227c-bdbb-4052-94f2-79123a004990 Bound Success
[core@cp1 trident-installer]$ ./tridentctl -n trident get b
+--------------------+----------------+--------------------------------------+--------+------------+---------+
| NAME | STORAGE DRIVER | UUID | STATE | USER-STATE | VOLUMES |
+--------------------+----------------+--------------------------------------+--------+------------+---------+
| asa-r2-fcp-backend | ontap-san | 36f3227c-bdbb-4052-94f2-79123a004990 | online | normal | 0 |
+--------------------+----------------+--------------------------------------+--------+------------+---------+
# Or, create Trident Backend via tridentctl
[core@cp1 trident-installer]$ tridentctl create b -f backend.json -n trident
+--------------------+----------------+--------------------------------------+--------+------------+---------+
| NAME | STORAGE DRIVER | UUID | STATE | USER-STATE | VOLUMES |
+--------------------+----------------+--------------------------------------+--------+------------+---------+
| asa-r2-fcp-backend | ontap-san | 82141337e-e35c-6dex-2017-0h2282614f7 | online | normal | 0|
+--------------------+----------------+--------------------------------------+--------+------------+---------+
Step 3: Define a storage class. Create a storage class that corresponds to the type of storage driver you require. This step allows you to define the characteristics of the storage you want to dynamically provision.
[core@cp1 trident-installer]$ cat sc.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: asa-r2-fcp-sc
parameters:
  backendType: ontap-san
  storagePools: "asa-r2-fcp-backend:.*"
provisioner: csi.trident.netapp.io
[core@cp1 trident-installer]$ kubectl create -f sc.yaml
storageclass.storage.k8s.io/asa-r2-fcp-sc created
[core@cp1 trident-installer]$ kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
asa-r2-fcp-sc   csi.trident.netapp.io   Delete          Immediate           false                  7s
Now that everything is ready for dynamic provisioning of storage on the ASA r2 system, let's see an example of how to use it.
Step 4: Create a PVC. Define a PersistentVolumeClaim (PVC) that specifies the amount of storage you need and references the appropriate storage class. This step ensures that your Kubernetes application has access to the required block storage.
[core@cp1 trident-installer]$ cat pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: asa-r2-fcp-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
  storageClassName: asa-r2-fcp-sc
[core@cp1 trident-installer]$ kubectl create -f pvc.yaml
persistentvolumeclaim/asa-r2-fcp-pvc created
Step 5: Confirm PVC binding. After the PVC is created, verify that it’s successfully bound to a persistent volume (PV). This confirmation ensures that the block storage is ready for use by your applications.
[core@cp1 trident-installer]$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
asa-r2-fcp-pvc Bound pvc-0794ae01-b0ae-4135-801e-2417437e9d7f 1Gi RWO asa-r2-fcp-sc <unset> 14s
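Optionally, you can inspect the backing persistent volume and Trident's view of it (the volume name comes from the output above):
kubectl get pv pvc-0794ae01-b0ae-4135-801e-2417437e9d7f
./tridentctl get volume -n trident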
Step 6: Use the PVC. Congratulations! You’re now ready to use the PVC in any pod of your choice. Mount the PVC in your pod's specification, and your application will have seamless access to the high-performance block storage provided by the ASA r2 systems using FCP protocol.
[core@cp1 trident-installer]$ cat pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: asa-r2-fcp-pod
spec:
  containers:
  - image: nginx:alpine
    name: nginx
    volumeMounts:
    - mountPath: /mnt/pvc
      name: local-storage
  nodeSelector:
    kubernetes.io/arch: amd64
    kubernetes.io/os: linux
  volumes:
  - name: local-storage
    persistentVolumeClaim:
      claimName: asa-r2-fcp-pvc
[core@cp1 trident-installer]$ kubectl create -f pod.yaml
pod/asa-r2-fcp-pod created
[core@cp1 trident-installer]$ kubectl get po
NAME READY STATUS RESTARTS AGE
asa-r2-fcp-pod 1/1 Running 0 8s
[core@cp1 trident-installer]$ kubectl exec -it asa-r2-fcp-pod -- /bin/ash -c "mount | fgrep mnt/pvc"
/dev/mapper/3600a09803831504c315d58684978384c on /mnt/pvc type ext4 (rw,seclabel,relatime,stripe=16)
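As a quick smoke test, you can write through the mount and list the result (purely illustrative):
kubectl exec -it asa-r2-fcp-pod -- /bin/ash -c \
  "dd if=/dev/zero of=/mnt/pvc/testfile bs=1M count=10 && ls -lh /mnt/pvc"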
Troubleshooting made easy
If you encounter any issues, we've got you covered with some handy commands for troubleshooting. Use kubectl describe on the problematic resource to gather detailed information. For more insights into the system's behavior, you can check the Trident logs by using kubectl or tridentctl.
[core@cp1 trident-installer]$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS VOLUMEATTRIBUTESCLASS AGE
asa-r2-fcp-pvc Bound pvc-0794ae01-b0ae-4135-801e-2417437e9d7f 1Gi RWO asa-r2-fcp-sc <unset> 2m55s
[core@cp1 trident-installer]$ kubectl describe pvc asa-r2-fcp-pvc
Name: asa-r2-fcp-pvc
Namespace: default
StorageClass: asa-r2-fcp-sc
Status: Bound
Volume: pvc-0794ae01-b0ae-4135-801e-2417437e9d7f
Labels: <none>
Annotations: pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
volume.beta.kubernetes.io/storage-provisioner: csi.trident.netapp.io
volume.kubernetes.io/storage-provisioner: csi.trident.netapp.io
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 1Gi
Access Modes: RWO
VolumeMode: Filesystem
Used By: asa-r2-fcp-pod
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Provisioning 3m11s csi.trident.netapp.io_trident-controller-84d7f798fc-kv8xc_9c6963aa-60f5-43a4-b25c-b8cca37b6171 External provisioner is provisioning volume for claim "default/asa-r2-fcp-pvc"
Normal ExternalProvisioning 3m11s persistentvolume-controller Waiting for a volume to be created either by the external provisioner 'csi.trident.netapp.io' or manually by the system administrator. If volume creation is delayed, please verify that the provisioner is running and correctly registered.
Normal ProvisioningSuccess 3m6s csi.trident.netapp.io provisioned a volume
Normal ProvisioningSucceeded 3m6s csi.trident.netapp.io_trident-controller-84d7f798fc-kv8xc_9c6963aa-60f5-43a4-b25c-b8cca37b6171 Successfully provisioned volume pvc-0794ae01-b0ae-4135-801e-2417437e9d7f
[core@cp1 trident-installer]$
# For more troubleshooting, check the controller logs using: kubectl logs <trident-controller> -n trident
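Trident's own logs can also be gathered with tridentctl, which collects logs across the Trident pods; a sketch, assuming the trident namespace:
./tridentctl logs -n trident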
Conclusion
With the power of NetApp ASA r2 systems and the integration provided by NetApp Trident, provisioning high-performance block storage for your Kubernetes applications has never been easier. Notably, Trident's support for the FCP protocol on ASA r2 systems lets you deliver low-latency, scalable storage to your Kubernetes workloads with exceptional efficiency and flexibility. We hope this article has equipped you with the knowledge and confidence to configure your Kubernetes workloads on ASA r2 systems, taking advantage of FCP support. Happy configuring, and may your Kubernetes journey be smooth and successful.
Businesses are looking to use advances in AI and analytics to generate new benefits for their customers, improving customer experiences and operational efficiency. To do this, they often use AWS cloud-based services such as Amazon Bedrock, Amazon SageMaker, and Amazon Athena to train new models, create data lakes, and build generative AI-based search and analytics. However, these AWS services are designed to natively integrate with data sourced from Amazon Simple Storage Service (Amazon S3) and currently cannot directly access file data.
Today, all this has changed. Amazon FSx for NetApp ONTAP now supports S3 data access to NFS and SMB file systems, enabling their seamless integration with dozens of S3-based AWS services such as Amazon Bedrock, SageMaker, Athena, AWS Glue, and many more. Customers can now connect AWS services to all their data, be it stored in file, block, or object storage, on premises or in the cloud.