As hybrid cloud strategies continue to evolve, organizations are increasingly adopting Azure Local and Hyper-V to extend Azure services into their on-premises environments, along with Azure capabilities such as Azure Virtual Machines for applications like SAP, Oracle, and SQL, Azure VMware Solution (AVS), Azure Red Hat OpenShift, Azure Virtual Desktop, and AI and ML workloads.
NetApp complements these Azure capabilities with ONTAP-powered solutions that span on-premises, hybrid, and cloud deployments. ONTAP AFF and ASA systems deliver enterprise-grade performance and resiliency for mission-critical workloads running on premises, while Azure NetApp Files provides a fully managed service for seamless integration with Azure. For organizations seeking a DIY approach to hybrid architectures, Cloud Volumes ONTAP offers flexibility and advanced data management. Together, these solutions enable easy migration, consistent enterprise capabilities, and significant cost savings, empowering businesses to scale efficiently without compromising performance or security.
As always, our focus remains on enabling customers to optimize workloads wherever they reside. Below is a brief overview of recent enhancements and capabilities designed to support hybrid and cloud deployments, helping organizations implement enterprise-grade solutions with greater efficiency and flexibility.
Microsoft Hyper-V – A strong alternative
With the recent licensing changes in the market, Microsoft Hyper-V has emerged as a compelling alternative to VMware for enterprise IT virtualization needs. ONTAP delivers enterprise-grade storage features that enhance Hyper-V environments with performance, reliability, and flexibility. Key capabilities include:
NetApp SMI-S Provider delivers integrated dynamic storage management for SAN and NAS within System Center Virtual Machine Manager (SCVMM)
ONTAP uniquely enables native copy offload between SAN and NAS, offering flexibility and efficient storage utilization, along with native space reclamation across NAS (SMB3 TRIM) and SAN (iSCSI/FCP with SCSI UNMAP)
New comprehensive backup and recovery for Hyper-V VMs ensures granular protection, restoration, and long-term retention capabilities
Leading backup partners support ONTAP snapshots and SnapMirror for optimized, array-native backup and recovery
WAC extension integration for dynamic VM mobility
ONTAP PowerShell toolkit for quick and automated provisioning
Ransomware protection using built-in, on-box ML
Lightning-fast VM mobility – the true migration offloader
Migrating between hypervisors can be a complex process, requiring careful consideration of factors such as application dependencies, migration timelines, workload criticality, and the potential impact of downtime on business operations. However, with ONTAP storage and the NetApp Shift toolkit, this process becomes significantly simpler and more efficient.
The NetApp Shift toolkit offers an intuitive, graphical user interface (GUI) that enables seamless migration of virtual machines (VMs) across different hypervisors while converting virtual disk formats. Leveraging NetApp FlexClone technology, the toolkit accelerates VM disk conversion, ensuring rapid and efficient transitions. Additionally, it automates the creation and configuration of destination VMs, reducing manual effort and complexity throughout the migration process.
The screenshot above shows high-throughput VM conversion: an 8TB VM with 6 VMDKs migrated in under 4 minutes using the Shift toolkit.
VM Mobility Made Simple: WAC Extension with NetApp Shift Toolkit (Preview)
Today’s IT landscape demands agility and a unified management experience. Migrating virtual machines to Hyper-V shouldn’t involve juggling multiple interfaces. That’s why Windows Admin Center (WAC) and the NetApp Shift Toolkit join forces to deliver a single-pane-of-glass solution that makes VM mobility simple and efficient.
With this powerful integration, VMs can be seamlessly converted to Hyper-V from the familiar WAC interface. No complex scripts, no manual conversions, just a streamlined workflow that saves time and reduces risk.
Why does this matter?
Faster migrations with minimal downtime
Simplified management through WAC’s intuitive UI
No additional capacity needed, with optimized storage leveraging NetApp’s enterprise-grade capabilities
Ready to make VM mobility simple? Explore the WAC extension with NetApp Shift Toolkit today and unlock the future of hybrid cloud.
Supercharge Azure Local with ONTAP: Scale on Demand, Without Limits (Preview)
While Azure Local offers robust virtualization and integration with Azure Arc, as infrastructure grows beyond eight nodes or demands independent storage scaling, organizations should consider advanced optimizations that improve agility, efficiency, and long-term scalability. Enterprises are also unwilling to compromise on high-performance, low-latency storage that can scale and integrate seamlessly with existing infrastructure. This is where NetApp AFF and ASA systems shine—especially when deployed with the Fibre Channel (FC) protocol.
NetApp AFF (All Flash FAS) and ASA (All SAN Array) platforms are purpose-built for high-performance workloads. They offer:
Sub-millisecond latency for mission-critical applications
Advanced data management with ONTAP, including snapshots, replication, ransomware protection and encryption
Unified protocol support including FC, iSCSI, NFS and SMB
High availability: keeping the workload running during complete site failure with synchronous replication and automated failover
Scalability: Easily scale storage independently of compute
Hybrid Cloud Ready: Integrates with Azure NetApp Files and Cloud Volumes ONTAP for cloud extension
When paired with Azure Local, NetApp AFF and ASA systems can act as external block storage arrays, providing FC-based cluster shared volumes (CSVs) for virtual machines and workloads that demand consistent performance and reliability. This allows seamless integration into existing SAN fabrics or new deployments using FC switches.
A typical deployment looks like this:
Configure Azure Local nodes with FC HBAs, drivers, and MPIO settings, and install NetApp Windows Host Utilities.
Deploy the Azure Local cluster.
Connect all cluster nodes to the ONTAP storage system via Fibre Channel and complete zoning and WWN registration on the SAN array.
Create volumes and LUNs, map them to the WWNs of all cluster nodes, and then initialize and format the LUNs as NTFS Cluster Shared Volumes (CSVs); see the illustrative example after this list.
Define Azure Arc VM storage paths pointing to these CSVs.
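The storage provisioning step in this workflow can also be scripted. Below is a minimal, illustrative Ansible sketch using the netapp.ontap collection; the cluster address, credentials, SVM, volume, LUN, and igroup names, sizes, and initiator WWPNs are all placeholders to adapt to your environment, and the same operations can equally be performed with ONTAP System Manager or the ONTAP PowerShell toolkit.

---
- name: Provision FC LUNs on ONTAP for Azure Local CSVs (illustrative sketch)
  hosts: localhost
  gather_facts: false
  vars:
    ontap_login: &ontap_login            # shared connection details (placeholders)
      hostname: cluster-mgmt.example.com
      username: admin
      password: "{{ ontap_password }}"   # supply via vault or --extra-vars
      https: true
      validate_certs: false
  tasks:
    - name: Create a volume to hold the CSV LUN
      netapp.ontap.na_ontap_volume:
        state: present
        vserver: svm_azlocal             # placeholder SVM name
        name: azlocal_csv01
        aggregate_name: aggr1
        size: 4
        size_unit: tb
        <<: *ontap_login

    - name: Create the LUN inside the volume
      netapp.ontap.na_ontap_lun:
        state: present
        vserver: svm_azlocal
        flexvol_name: azlocal_csv01
        name: csv01
        size: 3
        size_unit: tb
        ostype: hyper_v
        <<: *ontap_login

    - name: Create an FC igroup containing the WWPNs of every Azure Local node
      netapp.ontap.na_ontap_igroup:
        state: present
        vserver: svm_azlocal
        name: azlocal_nodes
        initiator_group_type: fcp
        ostype: hyper_v
        initiators:                      # placeholder WWPNs, one per cluster node
          - "10:00:00:00:c9:11:22:33"
          - "10:00:00:00:c9:11:22:34"
        <<: *ontap_login

    - name: Map the LUN to the igroup so all nodes can see it
      netapp.ontap.na_ontap_lun_map:
        state: present
        vserver: svm_azlocal
        path: /vol/azlocal_csv01/csv01
        initiator_group_name: azlocal_nodes
        <<: *ontap_login

Once mapped, the LUNs appear as disks on the Azure Local nodes, where they can be initialized, formatted as NTFS, and added as Cluster Shared Volumes exactly as described in the steps above.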
This architecture ensures that storage is decoupled from compute, enabling independent scaling and maintenance while leveraging ONTAP’s rich feature set.
Be among the first to experience the future of hybrid cloud. An exclusive preview of Azure Local integrated with NetApp external storage is now open to a limited number of customers. Organizations running or planning to deploy Azure Local and able to validate Fibre Channel connectivity can secure a spot in this limited opportunity. Contact the NetApp account team today to join this early access program!
Azure NetApp Files and Hybrid Workloads – Powering Enterprise workloads in Azure
While NetApp AFF/ASA systems provide on-premises performance, Azure NetApp Files, a fully managed service, offers a cloud-native extension for born-in-the-cloud applications and hybrid workloads. Azure NetApp Files (ANF) delivers high-performance, low-latency storage natively in Azure, enabling businesses to run mission-critical workloads with confidence. From Azure VMware Solution and Azure Virtual Desktop to SAP, Oracle databases, and AI/analytics, ANF simplifies complex storage needs while ensuring scalability and security. For instance, Azure NetApp Files supports supplemental NFS datastores for Azure VMware Solution, enabling seamless data mobility and disaster recovery.
Key highlights that spark curiosity:
Cache Volumes (Preview): Cloud-based caches keep hot data close to applications for faster throughput and reduced footprint.
Migration Assistant (Preview): A streamlined experience for effortless data migration, enabling hybrid cloud with SnapMirror replication of on-premises data to Azure.
Object API Integration: Expose enterprise data directly to OneLake, enabling instant access for Microsoft Fabric, Azure AI Foundry, Copilot Studio, and M365—without ETL or duplication.
OpenShift Virtualization Support (Preview): ANF now integrates with Azure Red Hat OpenShift, enabling enterprise-grade storage for containerized and virtualization workloads.
Rapid Cloning & Backup: GA features include short-term clones, large-volume backups, and granular file restores.
Why is this unique?
These capabilities are only possible with NetApp ONTAP, the proven enterprise storage platform behind ANF. ONTAP delivers unmatched performance, integrated data protection, and advanced features that no other Azure-native storage service can offer—making ANF the ultimate solution for hybrid cloud, containerized environments, and AI-driven workloads.
Organizations seeking a self-managed approach to storage can leverage NetApp Cloud Volumes ONTAP, a flexible DIY solution. It provides enterprise-grade capabilities such as data protection, efficiency, and mobility across hybrid and multi-cloud environments.
This hybrid model allows organizations to:
Use AFF/ASA for primary workloads on Hyper-V or Azure Local
Leverage Azure NetApp Files for born-in-the-cloud applications, enterprise databases, AI/ML use cases, disaster recovery, burst capacity, and more
Control the data no matter where it resides
Maintain consistent ONTAP capabilities across environments
To conclude, ONTAP-powered systems are ideal companions for on-premises, cloud, and hybrid deployments, with the protocol of choice for performance and reliability. Together, they enable a modern, scalable, and hybrid-ready infrastructure that meets the needs of enterprise workloads today and tomorrow.
Safe Harbor Statement: Any unreleased services or features referenced in this blog are not currently available and may not be made generally available on time or at all, as may be determined in NetApp’s sole discretion. Any such referenced services or features do not represent promises to deliver, commitments, or obligations of NetApp and may not be incorporated into any contract. Customers should make their purchase decisions based upon services and features that are currently generally available.
As enterprises move more workloads to the cloud, the stakes for availability, data protection, and data security have never been higher. Industries like financial services, healthcare, government, retail, and global enterprises now demand cloud architectures that can withstand disruptions, eliminate downtime, and meet strict regulatory requirements, all without adding complexity.
But one of the hardest challenges in cloud architecture hasn’t changed: how to balance high availability, consistent performance, and in-region data residency requirements without creating a maze of operational overhead. Traditional approaches to zone redundancy often force teams to assemble complex architectures with duplicate volumes, custom failover scripts, or cross-zone replication that adds latency and cost. These solutions can work but they are rarely simple, efficient, or aligned with the expectation that cloud services should “just work.”
That’s what makes Azure NetApp Files’ Elastic zone-redundant storage service level (Elastic ZRS) such a meaningful advancement. Now in public preview, Elastic ZRS is a new service (with different capabilities from the existing ANF service levels) that delivers cloud native, multi‑AZ resilience with shared QoS at the pool level, maintaining low single‑digit millisecond latency and ensuring zero RPO for cross‑zone high availability without requiring complex cloud storage manipulation or compromising overall performance.
Why Customers Need a Better Option
Cloud architects consistently point to a set of recurring challenges when trying to build resilient applications within a single Azure region.
Eliminating Single Zone Vulnerabilities - Applications running in a single availability zone remain exposed if that zone experiences a disruption. Even rare incidents can cascade into major business impacts. Customers need built‑in, infrastructure‑level resilience that ensures a zonal failure doesn’t result in downtime or data loss.
Reducing the Complexity of DIY High Availability - Managing redundant infrastructure, coordinating replication, and maintaining failover scripts consumes valuable engineering time. It increases operational overhead and slows innovation. Customers want simple, built‑in resilience, not another custom solution they have to maintain.
Minimizing Downtime and Its Business Impact - For many organizations, even a few minutes of downtime carry real financial risk, from missed transactions and customer churn to reputational damage. Teams need a solution designed for virtually uninterrupted operations.
Introducing Elastic ZRS: A New Approach to High Availability
Azure NetApp Files Elastic ZRS tackles these challenges head on by delivering a fully integrated, multi-AZ high availability storage experience designed for modern cloud environments.
High Resiliency - Elastic ZRS synchronously mirrors every write across multiple availability zones within the region committing data everywhere before acknowledging the operation. This architecture delivers true zero data loss (zero RPO). If a zonal outage occurs, failover is completely automatic and transparent, requiring no changes to your applications.
Near Zero Data Loss Failover—With Zero Complexity - Because the platform handles failover natively, your storage endpoint never changes. Applications and stateful Kubernetes workloads continue running without interruption. There’s no need for manual scripts, custom failover logic, or operational overhead.
Enterprise Data Management - Elastic ZRS supports the same enterprise-grade features as the other ANF service levels: NFSv3, NFSv4.1, SMB, snapshots, cloning, encryption, and Azure Backup integration.
Optimized Throughput and Metadata Speed - By intelligently allocating QoS at the pool level and optimizing metadata performance across zones, Elastic ZRS supports metadata intensive workloads such as applications managing large numbers of small files.
Where Elastic ZRS Delivers Immediate Benefits
Corporate File Shares - User directories, team shares, and enterprise content remain accessible without interruption, ensuring productivity doesn’t take a hit.
Financial Services and Trading Systems - With strict requirements around uptime, compliance, and consistency, financial platforms benefit from nonstop operations and zero‑loss resilience.
Regulated and Business Critical Workloads - Elastic ZRS ensures nonstop access to essential applications, such as compliance systems, even if an availability zone fails; data stays online with no outages or lost transactions.
Kubernetes and Containerized Applications - Stateful apps gain synchronized data protection and automated zone failover, which is critical for modern cloud-native environments.
A Simpler, More Resilient Future
Azure NetApp Files Elastic ZRS marks a major step forward in how organizations can build resilient, in-region cloud architectures. It blends Azure’s robust infrastructure with NetApp’s proven enterprise storage capabilities to provide always-on, multi-AZ resiliency that’s simple, fast, and efficient.
For leaders looking to modernize mission-critical platforms while reducing operational overhead, Elastic ZRS offers a clear path forward: unified resilience, predictable performance, and turnkey operations built for mission-critical workloads.
Explore more:
Blog: Enhanced storage resiliency with Azure NetApp Files Elastic zone-redundant service
Understand Azure NetApp Files Elastic zone-redundant storage service level
Quick Bytes: Azure NetApp Files Elastic zone-redundant storage service level
How-to: Azure NetApp Files Elastic zone-redundant storage service level
Automating StorageGRID with Ansible makes it easy to run common workflows quickly, consistently, and at scale. Tasks that would normally require clicking through the Grid Manager UI can be codified, repeated, and version-controlled. However, automation quickly runs into a common problem: many workflows require information that isn’t known ahead of time.
IDs for storage pools, tenants, users, ILM components, and other resources are generated per grid and differ between environments. Hard-coding these values defeats the purpose of automation and makes playbooks brittle, environment-specific, and difficult to reuse.
This is where the StorageGRID information modules—na_sg_grid_info and na_sg_org_info—become essential. These modules allow Ansible to dynamically query the Grid Manager or Tenant Manager APIs, gather the current state of the system, and expose that information for use in subsequent tasks. The grid module collects information at the Grid Manager level, while the org module focuses on tenant-level data.
Gathering this information is straightforward, but effectively using it is often less obvious. The returned data is structured JSON, and knowing how to reference the right fields is key to building flexible, reusable playbooks. The examples below walk through how to collect grid information, inspect the returned data, and extract specific values—such as storage pool IDs—that are commonly required for tasks like building ILM rules and policies.
Let’s look at a simple playbook that runs the ‘grid’ information collection.
---
- name: Data Look up
  hosts: localhost
  gather_facts: false
  vars:
    grid_user: root
    grid_password: <password>
    grid_address: https://<yourgrid>
  tasks:
    - name: generate auth token for grid module on StorageGRID
      netapp.storagegrid.na_sg_grid_login:
        hostname: "{{ grid_address }}"
        username: "{{ grid_user }}"
        password: "{{ grid_password }}"
        validate_certs: false
      register: auth

    - name: Gather GRID info
      netapp.storagegrid.na_sg_grid_info:
        api_url: "{{ grid_address }}"
        auth_token: "{{ auth.na_sa_token }}"
        validate_certs: false
      register: sg_info
If you need a reference on how to use the grid_login module, see my last post on this topic: StorageGRID Automation: A Resolution You Can Keep - NetApp Community
Using the ‘register’ line at the bottom saves all the returned information to a variable called ‘sg_info’. With this, any of the information can be accessed and used. The format looks like this:
"{{ sg_info.sg_info['grid/<subset>'] }}"
Here, ‘sg_info’ is a dictionary keyed by the different subsets.
Subset can be any of the following.
accounts, alarms, audit, compliance-global, config, config/management, config/product-version, deactivated-features, dns-servers, domain-names, ec-profiles, expansion, expansion/nodes, expansion/sites, firewall-blocked-ports, firewall-external-ports, firewall-privileged-ips, gateway_configs, grid-networks, groups, ha-groups, health, health/topology, identity-source, ilm-criteria, ilm-grade-site, ilm-grades, ilm-policies, ilm-pools, ilm-rules, license, management-certificate, network-topology, ntp-servers, recovery, recovery/available-nodes, regions, schemes, single-sign-on, snmp, storage-api-certificate, untrusted-client-network, users, users/root, versions, vlan-interfaces
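To confirm which subsets a particular run actually returned, you can print the keys of the registered dictionary with the debug module. The short task below is illustrative and reuses the sg_info variable registered earlier.

- name: Show which subsets were collected
  debug:
    msg: "{{ sg_info.sg_info.keys() | list }}"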
As you can see, a vast amount of information is available. Gathering all of it takes time, and most of it usually goes unused. For this reason the ‘gather_subset’ option exists. For example, when making ILM rules with Ansible, the ID of the storage pool is required, and this data is in the ilm-pools subset. We can gather just the ilm-pools information like this:
- name: Gather pool ID
  netapp.storagegrid.na_sg_grid_info:
    api_url: "{{ grid_address }}"
    auth_token: "{{ auth.na_sa_token }}"
    validate_certs: false
    gather_subset:
      - grid_ilm_storage_pools_info
  register: sg_pool_info
I also changed the name of the registered variable to ‘sg_pool_info’ to make it easier to track.
Getting at a specific value requires understanding the JSON format of the subset. On an example grid, the grid/ilm-pools information looks like this:
"grid/ilm-pools": {
"apiVersion": "4.2",
"data": [
{
"archives": [],
"disks": [
{
"grade": null,
"group": 10,
"siteId": "bbf88845-a108-4210-8869-6c33465eccfc"
}
],
"displayName": "gdl",
"id": "p9829506629192568060",
"name": "gdl"
}
],
"responseTime": "2026-01-23T16:20:39.930Z",
"status": "success",
"status_code": 200
},
This is an easy example because there is only one storage pool. To reference the ID of this pool, the variable would look like this:
"{{ sg_pool_info.sg_info['grid/ilm-pools'].data[0].id }}"
This example assumes a grid with a single storage pool. In environments with multiple pools, you would filter the data list based on name or siteId (an example follows the debug task below).
Breaking that down: 'grid/ilm-pools' selects the subset, .data selects its data list, [0] selects the first entry (lists are indexed starting from 0, not 1), and .id selects the field we want. Outputs can be tested with the debug module:
- name: info test
  debug:
    msg: "{{ sg_pool_info.sg_info['grid/ilm-pools'].data[0].id }}"
This will print ‘p9829506629192568060’ to the screen.
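As noted above, on a grid with multiple storage pools it is safer to filter the data list by name or siteId instead of relying on the [0] index. Here is a small illustrative task that does this, assuming the pool is named gdl as in the example output (the pool_id fact name is just a placeholder):

- name: Select the pool ID by name rather than by position
  set_fact:
    pool_id: "{{ (sg_pool_info.sg_info['grid/ilm-pools'].data | selectattr('name', 'equalto', 'gdl') | list | first).id }}"

The resulting pool_id variable can then be used in later tasks, such as building ILM rules, no matter how many pools the grid contains.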
You can find the Ansible modules in Ansible Galaxy Collections or in Ansible Automation Platform as certified modules. Be sure to check back next month for an example of building an ILM rule set and policy using these techniques.
NetApp Connector allows organizations to securely connect their enterprise data to Microsoft 365 Copilot, without the need for any data migration. Data can reside in any environment, including on-premises, cloud (AWS, Azure, or GCP), Cloud Volumes ONTAP, or MSP environments.
NetApp is the most secure storage on the planet. With that in mind, let's look at the available technology for encrypting NFS traffic over the wire. The options include using NFS with Kerberos, running NFS over IPsec connections, and a nascent approach using NFS over TLS.