Adding NetApp Trident CSI integration for Enterprise RAG
OPEA Enterprise RAG now includes NetApp Trident CSI deployment automation. Implemented for Intel Enterprise RAG, this integration supports NetApp AIPod Mini deployments backed by NetApp ONTAP storage. Built on the strong foundation of OPEA, Intel® AI for Enterprise RAG converts enterprise data into actionable insights with key features that enhance scalability, security, and user experience. The integration adds NetApp ONTAP storage as a CSI driver option in the Enterprise RAG deployment, so customers and partners can now use the Enterprise RAG repo as a “one stop shop” for deploying the NetApp AIPod Mini software stack.
A RAG-ready enterprise system in two commands
Customers can deploy the prerequisite data pipeline and a RAG-ready enterprise system in two commands (sketched below):
Run the playbook to deploy Kubernetes and NetApp Trident.
Run the playbook to deploy Intel AI for Enterprise RAG.
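As a rough illustration of that two-step flow, the sketch below drives the two playbook runs from Python. The playbook file names, inventory path, and the use of ansible-playbook itself are assumptions here; the actual commands come from the Enterprise-RAG repo documentation.

```python
# Hypothetical sketch of the two-command deployment flow. The playbook names and
# inventory path below are placeholders, not the actual files in the Enterprise-RAG repo.
import subprocess

PLAYBOOKS = [
    "deploy_k8s_and_trident.yaml",   # step 1: Kubernetes + NetApp Trident
    "deploy_enterprise_rag.yaml",    # step 2: Intel AI for Enterprise RAG
]

for playbook in PLAYBOOKS:
    # Run each step in order; check=True stops the flow if a playbook fails.
    subprocess.run(
        ["ansible-playbook", "-i", "inventory/hosts.yaml", playbook],
        check=True,
    )
```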
Validated software stack for RAG
The validated software stack includes:
Production-ready implementation of OPEA ChatQnA
OPEA (Open Platform for Enterprise AI), free and open-source software
Runs on Kubernetes
Validated with Trident (ONTAP NFS)
Ongoing programmatic data ingest from ONTAP S3
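The ongoing ingest from ONTAP S3 uses ONTAP's S3-compatible API, so any standard S3 client can feed the pipeline. A minimal sketch with boto3 follows; the endpoint, bucket name, prefix, and credentials are placeholders, not values from the Enterprise RAG deployment.

```python
# Minimal sketch of pulling documents from an ONTAP S3 bucket for RAG ingestion.
# Endpoint, bucket, prefix, and credentials are placeholders.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://ontap-s3.example.internal",  # ONTAP S3 endpoint (placeholder)
    aws_access_key_id="ONTAP_ACCESS_KEY",
    aws_secret_access_key="ONTAP_SECRET_KEY",
)

# List newly landed documents and download them for the ingest pipeline.
resp = s3.list_objects_v2(Bucket="rag-source-docs", Prefix="incoming/")
for obj in resp.get("Contents", []):
    if obj["Key"].endswith("/"):
        continue  # skip folder markers
    local_name = obj["Key"].rsplit("/", 1)[-1]
    s3.download_file("rag-source-docs", obj["Key"], local_name)
    print(f"fetched {obj['Key']} for ingestion")
```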
What is OPEA?
OPEA is an open platform project for creating composable GenAI solutions. It streamlines the implementation of enterprise-grade generative AI by integrating secure, cost-effective workflows, and it provides a detailed framework of composable building blocks for state-of-the-art generative AI systems, including LLMs, data stores, and prompt engines, along with architectural blueprints for developing retrieval-augmented generation (RAG) systems.
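To make the RAG pattern concrete, the sketch below shows the core retrieve-then-augment loop in plain Python. It is a generic illustration of the pattern OPEA's blueprints describe, not OPEA code, and the toy word-overlap scoring stands in for a real vector store and embedding model.

```python
# Generic retrieve-then-augment RAG sketch (illustrative only, not OPEA code).
def retrieve(query: str, documents: list[str], k: int = 2) -> list[str]:
    # Toy relevance score: number of words shared between query and document.
    q_words = set(query.lower().split())
    ranked = sorted(documents, key=lambda d: len(q_words & set(d.lower().split())), reverse=True)
    return ranked[:k]

def build_prompt(query: str, context: list[str]) -> str:
    # The augmented prompt is what gets sent to the LLM serving layer.
    return "Answer using only this context:\n" + "\n".join(context) + f"\n\nQuestion: {query}"

docs = [
    "Trident provisions persistent volumes on ONTAP for Kubernetes workloads.",
    "Intel AI for Enterprise RAG converts enterprise data into actionable insights.",
]
query = "How are persistent volumes provisioned?"
print(build_prompt(query, retrieve(query, docs)))
```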
Intel AI for Enterprise RAG architecture with NetApp ONTAP
Key features of the deployment automation include:
Automated Trident operator installation using Helm
ONTAP NAS backend configuration
Dynamic StorageClass creation with ReadWriteMany support
Seamless integration with existing Enterprise RAG deployment workflow
The NetApp Trident CSI deployment automation in the Intel Enterprise RAG project gives customers and partners enterprise-grade persistent storage capabilities backed by NetApp ONTAP systems.
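As an example of what that looks like from the application side, the sketch below requests a ReadWriteMany volume from the Trident-backed StorageClass using the Kubernetes Python client. The StorageClass name, namespace, and size are assumptions; the automation in the repo determines the actual names.

```python
# Sketch of a workload requesting shared (RWX) storage from the Trident-backed
# StorageClass. The class name "ontap-nas", namespace, and size are assumed values.
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() when running in-cluster

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="rag-shared-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteMany"],           # shared access across RAG pods
        storage_class_name="ontap-nas",           # Trident / ONTAP NAS class (assumed name)
        resources=client.V1ResourceRequirements(requests={"storage": "100Gi"}),
    ),
)

client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="enterprise-rag", body=pvc
)
```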
To learn more, please visit https://github.com/opea-project/Enterprise-RAG.