🚀 New in NetApp Console: Smarter Licenses, Subscriptions & Billing Preferences
We’re excited to share some big news from the NetApp Console team! 🎉 …
Discover how NetApp’s AI Data Guardrails turn governance into a living system—enabling secure, compliant, and scalable AI platforms. …
By Mohammad Hossein Hajkazemi, Bhushan Jain, and Arpan Chowdhry
Introduction
Google Cloud NetApp Volumes is a fully managed, cloud-native storage …
NetApp Console delivers HIPAA (Health Insurance Portability and Accountability Act)-compliant data intelligence without storing ePHI
NetApp Console …
As enterprises move more workloads to the cloud, the stakes for availability, data protection, and data security have never been higher. Industries like financial services, healthcare, government, retail, and global enterprises now demand cloud architectures that can withstand disruptions, eliminate downtime, and meet strict regulatory requirements, all without adding complexity.
But one of the hardest challenges in cloud architecture hasn’t changed: how to balance high availability, consistent performance, and in-region data residency requirements without creating a maze of operational overhead. Traditional approaches to zone redundancy often force teams to assemble complex architectures with duplicate volumes, custom failover scripts, or cross-zone replication that adds latency and cost. These solutions can work, but they are rarely simple, efficient, or aligned with the expectation that cloud services should “just work.”
That’s what makes Azure NetApp Files’ Elastic zone-redundant storage service level (Elastic ZRS) such a meaningful advancement. Now in public preview, Elastic ZRS is a new service (with different capabilities from the existing ANF service levels) that delivers cloud-native, multi-AZ resilience with shared QoS at the pool level, maintaining low single-digit millisecond latency and ensuring zero RPO for cross-zone high availability, without requiring complex cloud storage manipulation or compromising overall performance.
Why Customers Need a Better Option
Cloud architects consistently point to a set of recurring challenges when trying to build resilient applications within a single Azure region.
Eliminating Single Zone Vulnerabilities - Applications running in a single availability zone remain exposed if that zone experiences a disruption. Even rare incidents can cascade into major business impacts. Customers need built‑in, infrastructure‑level resilience that ensures a zonal failure doesn’t result in downtime or data loss.
Reducing the Complexity of DIY High Availability - Managing redundant infrastructure, coordinating replication, and maintaining failover scripts consumes valuable engineering time. It increases operational overhead and slows innovation. Customers want simple, built-in resilience, not another custom solution they have to maintain.
Minimizing Downtime and Its Business Impact - For many organizations, even a few minutes of downtime carries real financial risk, from missed transactions and customer churn to reputational damage. Teams need a solution designed for virtually uninterrupted operations.
Introducing Elastic ZRS: A New Approach to High Availability
Azure NetApp Files Elastic ZRS tackles these challenges head on by delivering a fully integrated, multi-AZ high availability storage experience designed for modern cloud environments.
High Resiliency - Elastic ZRS synchronously mirrors every write across multiple availability zones within the region, committing data to every zone before acknowledging the operation. This architecture delivers true zero data loss (zero RPO). If a zonal outage occurs, failover is completely automatic and transparent, requiring no changes to your applications.
Near-Zero Downtime Failover—With Zero Complexity - Because the platform handles failover natively, your storage endpoint never changes. Applications and stateful Kubernetes workloads continue running without interruption. There’s no need for manual scripts, custom failover logic, or operational overhead.
Enterprise Data Management - Elastic ZRS supports the same enterprise-grade features as the other ANF service levels: NFSv3, NFSv4.1, SMB, snapshots, cloning, encryption, and Azure Backup integration.
Optimized Throughput and Metadata Speed - By intelligently allocating QoS at the pool level and optimizing metadata performance across zones, Elastic ZRS supports metadata-intensive workloads, such as applications managing large numbers of small files.
Where Elastic ZRS Delivers Immediate Benefits
Corporate File Shares - User directories, team shares, and enterprise content remain accessible without interruption, ensuring productivity doesn’t take a hit.
Financial Services and Trading Systems - With strict requirements around uptime, compliance, and consistency, financial platforms benefit from nonstop operations and zero-loss resilience.
Regulated and Business Critical Workloads - Elastic ZRS keeps essential applications, such as compliance systems, online even if an AZ fails, with no outages or lost transactions.
Kubernetes and Containerized Applications - Stateful apps gain synchronized data protection and automated zone failover, which are critical for modern cloud-native environments.
A Simpler, More Resilient Future
Azure NetApp Files Elastic ZRS marks a major step forward in how organizations can build resilient, in-region cloud architectures. It blends Azure’s robust infrastructure with NetApp’s proven enterprise storage capabilities to provide always-on, multi-AZ resiliency that’s simple, fast, and efficient.
For leaders looking to modernize mission-critical platforms while reducing operational overhead, Elastic ZRS offers a clear path forward: unified resilience, predictable performance, and turnkey operations.
Explore more:
Blog: Enhanced storage resiliency with Azure NetApp Files Elastic zone-redundant service
Understand Azure NetApp Files Elastic zone-redundant storage service level
Quick Bytes: Azure NetApp Files Elastic zone-redundant storage service level
How-to: Azure NetApp Files Elastic zone-redundant storage service level
Automating StorageGRID with Ansible makes it easy to run common workflows quickly, consistently, and at scale. Tasks that would normally require clicking through the Grid Manager UI can be codified, repeated, and version-controlled. However, automation quickly runs into a common problem: many workflows require information that isn’t known ahead of time.
IDs for storage pools, tenants, users, ILM components, and other resources are generated per grid and differ between environments. Hard-coding these values defeats the purpose of automation and makes playbooks brittle, environment-specific, and difficult to reuse.
This is where the StorageGRID information modules—na_sg_grid_info and na_sg_org_info—become essential. These modules allow Ansible to dynamically query the Grid Manager or Tenant Manager APIs, gather the current state of the system, and expose that information for use in subsequent tasks. The grid module collects information at the Grid Manager level, while the org module focuses on tenant-level data.
Gathering this information is straightforward, but effectively using it is often less obvious. The returned data is structured JSON, and knowing how to reference the right fields is key to building flexible, reusable playbooks. The examples below walk through how to collect grid information, inspect the returned data, and extract specific values—such as storage pool IDs—that are commonly required for tasks like building ILM rules and policies.
Let’s look at a simple playbook that runs the ‘grid’ information collection.
---
- name: Data Look up
  hosts: localhost
  gather_facts: false
  vars:
    grid_user: root
    grid_password: <password>
    grid_address: https://<yourgrid>
  tasks:
    - name: generate auth token for grid module on StorageGRID
      netapp.storagegrid.na_sg_grid_login:
        hostname: "{{ grid_address }}"
        username: "{{ grid_user }}"
        password: "{{ grid_password }}"
        validate_certs: false
      register: auth

    - name: Gather GRID info
      netapp.storagegrid.na_sg_grid_info:
        api_url: "{{ grid_address }}"
        auth_token: "{{ auth.na_sa_token }}"
        validate_certs: false
      register: sg_info
If you need a reference on how to use the grid_login module, see my last post on this topic: StorageGRID Automation: A Resolution You Can Keep - NetApp Community
Using the ‘register’ line at the bottom saves all the returned information to a variable called ‘sg_info’. With this, any of the information can be accessed and used. The format would look like this:
"{{ sg_info.sg_info['grid/<subset>'] }}"
Here the outer ‘sg_info’ is the registered variable, and the inner ‘sg_info’ is a dictionary keyed by the different subsets.
Subset can be any of the following.
accounts alarms audit compliance-global config config/management config/product-version deactivated-features dns-servers domain-names ec-profiles expansion expansion/nodes expansion/sites firewall-blocked-ports firewall-external-ports firewall-privileged-ips gateway_configs grid-networks groups ha-groups health health/topology identity-source ilm-criteria ilm-grade-site ilm-grades ilm-policies ilm-pools ilm-rules license management-certificate network-topology ntp-servers recovery recovery/available-nodes regions schemes single-sign-on snmp storage-api-certificate untrusted-client-network users users/root versions vlan-interfaces
As you can see, a vast amount of information is available. Gathering all of it takes time, and much of the data will usually go unused. For this reason, the ‘gather_subset’ option exists. For example, when making ILM rules with Ansible, the ID of the storage pool is required, and this data is in the ilm-pools subset. We can gather just the ilm-pools information like this.
- name: Gather pool ID
  netapp.storagegrid.na_sg_grid_info:
    api_url: "{{ grid_address }}"
    auth_token: "{{ auth.na_sa_token }}"
    validate_certs: false
    gather_subset:
      - grid_ilm_storage_pools_info
  register: sg_pool_info
I also changed the variable name that the information is registered as, to make it easier to track.
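Several subsets can also be requested in one call. Here’s a sketch of what that could look like; the gather_subset names follow the module’s grid_<subset>_info pattern, and ‘grid_accounts_info’ is my assumed example, so check the collection documentation for the exact names:

```yaml
- name: Gather pool and account info together
  netapp.storagegrid.na_sg_grid_info:
    api_url: "{{ grid_address }}"
    auth_token: "{{ auth.na_sa_token }}"
    validate_certs: false
    gather_subset:
      - grid_ilm_storage_pools_info   # the 'ilm-pools' subset
      - grid_accounts_info            # assumed name for the 'accounts' subset
  register: sg_subset_info
```

Each requested subset then appears as its own key (e.g. 'grid/ilm-pools', 'grid/accounts') inside the registered variable.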
Getting specific information requires understanding the JSON format of the subset. On an example grid, the ilm-pools information looks like this.
"grid/ilm-pools": {
    "apiVersion": "4.2",
    "data": [
        {
            "archives": [],
            "disks": [
                {
                    "grade": null,
                    "group": 10,
                    "siteId": "bbf88845-a108-4210-8869-6c33465eccfc"
                }
            ],
            "displayName": "gdl",
            "id": "p9829506629192568060",
            "name": "gdl"
        }
    ],
    "responseTime": "2026-01-23T16:20:39.930Z",
    "status": "success",
    "status_code": 200
},
This is an easy example to use because there is only one storage pool. To reference the ID of this pool, the variable would look like this.
"{{ sg_pool_info.sg_info['grid/ilm-pools'].data[0].id }}"
This example assumes a grid with a single storage pool. In environments with multiple pools, you would filter the data list based on name or siteId.
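As a sketch of that filtering, Jinja2’s selectattr and map filters can select a pool by name rather than by position (the pool name ‘gdl’ here is taken from the sample output above):

```yaml
- name: Look up the pool ID by name instead of by position
  set_fact:
    pool_id: "{{ sg_pool_info.sg_info['grid/ilm-pools'].data
                 | selectattr('name', 'equalto', 'gdl')
                 | map(attribute='id')
                 | first }}"
```

This keeps the playbook working even if more pools are added later and the list order changes.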
Breaking that down: ‘grid/ilm-pools’ is the subset, ‘.data’ is the list of pools, and ‘[0]’ means the first entry, since indexing starts at 0 instead of 1. The ‘.id’ specifies the field that we want. Outputs can be tested with the debug module.
- name: info test
  debug:
    msg: "{{ sg_pool_info.sg_info['grid/ilm-pools'].data[0].id }}"
This will print ‘p9829506629192568060’ to the screen.
You can find the Ansible modules in either Ansible Galaxy Collections or in Ansible Automation Platform as certified modules. Be sure to check back next month for an example of building an ILM rule set and policy using these techniques.
NetApp Connector allows organizations to securely connect their enterprise data to Microsoft 365 Copilot without the need for any data migration. Data can reside in any environment, including on-premises, cloud (AWS, Azure, or GCP), Cloud ONTAP, or an MSP.
NetApp is the most secure storage on the planet. With that in mind, let's look at the available technology for encrypting NFS traffic over the wire. The options include using NFS with Kerberos, running NFS over IPsec connections, and a nascent approach using NFS over TLS.
AWS offers a wide variety of AI and machine learning (ML), analytics, and serverless compute services that are integrated with Amazon S3 storage. This poses a challenge for organizations that have the relevant data stored elsewhere.
Up until now, the only way to make file data accessible to these AWS services was to copy the data to Amazon S3 buckets. While this makes it possible to use the AWS services, such data duplication needs to be carefully planned and carried out with precision, both of which can take time, adding cost and complexity for customers.
Now there’s a gamechanger: NetApp® and AWS recently announced that Amazon FSx for NetApp ONTAP supports access to FSx for ONTAP file data as if it were in Amazon S3, transforming how businesses can leverage file data for AWS services.