Deep Learning (DL) is the subfield of Artificial Intelligence (AI) that focuses on creating large neural network models capable of data-driven decision-making …
In the first post of our series, we explored the AI/ML workflow through the lens of a Medallion Data Architecture. We explained our rationale to identify …
The new SSD capacity decrease capability of FSx for ONTAP Gen-2 file systems transforms high-performance storage workload management on AWS, offering …
I'm excited to kick off a new blog series called Back to Basics (B2B). The goal is to revisit fundamental concepts that often slip through the cracks …
Discover how NetApp delivers real business value across ten major industries in the second part of our blog series. While Part 1 set the stage by exploring today’s data challenges and positioning NetApp as the solution, Part 2 dives into industry-specific customer stories—from retail and healthcare to energy and telecom—showcasing why leading organizations “stop by & choose” NetApp over competitors. Each story highlights how NetApp’s ONTAP and StorageGRID platforms enable unified, secure, and scalable data management, helping businesses overcome compliance hurdles, reduce costs, accelerate innovation, and ensure reliable performance. Whether your data lives on-premises, in the cloud, or both, see how NetApp’s future-proof ecosystem empowers organizations to unlock the full value of their data and stay ahead in a rapidly evolving digital landscape.
Key Highlights: Why NetApp Stands Out in Today’s Data-Driven World
In a landscape where data is growing faster than ever and complexity is the new normal, organizations face mounting challenges: data silos, unpredictable costs, security threats, and the pressure to innovate at speed. Generic storage solutions often fall short—lacking the flexibility, integration, and protection that modern enterprises demand.
NetApp offers a smarter path forward. By combining the power of ONTAP for unified file and block storage with the scalability of StorageGRID for object storage, NetApp delivers a hybrid data platform built for today’s realities. Whether you’re looking to land new workloads, expand seamlessly across clouds, or protect your most valuable data assets, NetApp’s solutions are designed to adapt and grow with your business.
Ready to discover how NetApp can transform your data strategy? Stay tuned for real-world examples and actionable insights throughout this blog series.
Storage is the bedrock of modern enterprises, functioning as a core component across various operating systems and supporting NFS, SMB, and dual-protocol environments. As organizations accelerate their cloud migration, the perennial challenge is to secure a storage solution that delivers enterprise-grade performance, unparalleled flexibility, and cost efficiency, all without the operational burden of traditional on-premises infrastructure. For technical audiences seeking a robust, scalable, and fully managed solution, Google Cloud NetApp Volumes Flex service level is the definitive answer for file storage in the cloud.
Unprecedented flexibility and elastic scaling
Google Cloud NetApp Volumes Flex redefines scalability for file storage. Engineered for extreme elasticity, it scales seamlessly from a mere 1 GiB up to a massive 300 TiB, so whether you're starting with a small development environment or managing vast enterprise datasets, Flex can precisely accommodate your needs. That elasticity eliminates the need for complex, upfront capacity planning and overprovisioning, enabling you to right-size your storage from day one and effortlessly expand as your data footprint evolves.
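To make that elasticity concrete, here is a minimal sketch using the google-cloud-netapp Python client: it provisions a small NFS volume in a hypothetical Flex storage pool (my-flex-pool in project my-project) and later grows the same volume in place. The method and field names follow the public NetApp Volumes API, but treat them, along with the resource names, as assumptions to verify against the current client documentation.

```python
# Minimal sketch: create a small Flex volume, then grow it later in place.
# Assumes the google-cloud-netapp client library (pip install google-cloud-netapp)
# and an existing Flex-service-level storage pool named "my-flex-pool".
# Field and method names follow the public NetApp Volumes API; verify them
# against the current client documentation before use.
from google.cloud import netapp_v1
from google.protobuf import field_mask_pb2

client = netapp_v1.NetAppClient()
parent = "projects/my-project/locations/us-central1"  # hypothetical project/region

# Start small: a 100 GiB NFSv3 volume for a development environment.
volume = netapp_v1.Volume(
    share_name="dev-data",
    storage_pool=f"{parent}/storagePools/my-flex-pool",
    capacity_gib=100,
    protocols=[netapp_v1.Protocols.NFSV3],
)
operation = client.create_volume(parent=parent, volume_id="dev-data", volume=volume)
created = operation.result()  # long-running operation; blocks until the volume exists
print(f"Created {created.name} at {created.capacity_gib} GiB")

# Grow the same volume later as the dataset expands.
created.capacity_gib = 10_240  # 10 TiB
client.update_volume(
    volume=created,
    update_mask=field_mask_pb2.FieldMask(paths=["capacity_gib"]),
).result()
```

Because capacity is simply a property of the volume, the grow step is an in-place update rather than a migration, which is what allows starting at 1 GiB and scaling toward 300 TiB as needed.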
Auto-tiering: Intelligent cost optimization
A standout feature of Flex is its intelligent auto-tiering support. This capability automatically identifies and moves infrequently accessed data to a lower-cost tier, delivering exceptional value at its $0.03/GiB/month price point. With intelligent optimization, you benefit from cost-effective, high-speed file storage without requiring manual intervention, significantly reducing the total cost of ownership. Instead of expending valuable engineering resources on rearchitecting existing, functional file workloads, you can leverage Flex to optimize costs while maintaining performance, freeing your teams to focus on innovation.
Granular performance control: True flexibility
Flex offers precise control over performance, enabling independent scaling of capacity, throughput, and IOPS to align meticulously with application requirements. This unprecedented on-demand scalability fosters agility in the cloud environment. Flex provides scalable and consistent performance, achieving up to 5 GiB/s of throughput and 160K IOPS, suitable for critical batch jobs requiring burstable performance or database operations that demand sustained, high-throughput capabilities. This customizability means optimal resource utilization and avoids wasteful overprovisioning.
Built-in cloud integration and global availability
As a fully managed Google Cloud service, NetApp Volumes offers a seamless experience within the Google Cloud ecosystem. Its global availability provides the strategic advantage of deploying your applications closer to your customers or wherever your operational requirements dictate, minimizing latency and enhancing the user experience. Migrating to the cloud no longer necessitates compromises on management overhead, performance, or cost.
Developer-friendly, optimized for Kubernetes
Google Cloud's commitment to developers extends to NetApp Volumes. To facilitate rapid development and testing, the first 1 TiB of storage with the Flex service level includes a generous performance profile of 1024 IOPS and 64 MiB/s. This performance profile empowers development teams to quickly onboard and iterate on their workloads without initial cost barriers.
Crucially, the NetApp Volumes Flex service level is optimized for Google Kubernetes Engine and Red Hat OpenShift, extending the benefits of fully managed, scalable, and robust cloud storage directly to your Kubernetes and containerized workloads. With support for all access modes and Kubernetes distributions, the Flex service level offers broad compatibility for both traditional and modernized application architectures.
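As one illustration of what this looks like from the application side, the sketch below uses the Kubernetes Python client to request a shared, ReadWriteMany PersistentVolumeClaim from a StorageClass backed by NetApp Volumes Flex. The StorageClass name netapp-volumes-flex is a placeholder; the actual class and CSI driver setup come from the NetApp Volumes documentation for GKE or Red Hat OpenShift.

```python
# Minimal sketch: request a shared volume for a containerized workload through a
# StorageClass backed by NetApp Volumes Flex. The StorageClass name below is
# hypothetical; use the one defined in your cluster per the product docs.
from kubernetes import client, config

config.load_kube_config()  # or config.load_incluster_config() inside the cluster

pvc = client.V1PersistentVolumeClaim(
    metadata=client.V1ObjectMeta(name="shared-data"),
    spec=client.V1PersistentVolumeClaimSpec(
        access_modes=["ReadWriteMany"],  # shared, multi-writer access
        storage_class_name="netapp-volumes-flex",  # hypothetical StorageClass
        resources=client.V1ResourceRequirements(requests={"storage": "100Gi"}),
    ),
)
client.CoreV1Api().create_namespaced_persistent_volume_claim(
    namespace="default", body=pvc
)
```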
Robust data management, protection, and compliance
Powered by NetApp® Snapshot™ technology, NetApp Volumes offers a comprehensive suite of data management capabilities and robust data protection features. Expedited recovery options are crucial for mitigating ransomware attacks, while integrated in-region or cross-region backups provide data resilience and adherence to regulations. Enforceable retention periods further secure backups with immutable and indelible safeguards, which is crucial for meeting strict regulatory compliance requirements and for keeping tamper-proof, read-only copies of data in nonprimary locations, thereby enhancing ransomware protection.
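As a hedged illustration of the snapshot workflow, the sketch below uses the same assumed google-cloud-netapp client to take a point-in-time snapshot of a volume before a risky change; the method and field names should be verified against the current API reference.

```python
# Minimal sketch: take a point-in-time snapshot of a volume before a risky change,
# so a known-good copy exists for rapid recovery. Resource names are hypothetical.
from google.cloud import netapp_v1

client = netapp_v1.NetAppClient()
volume_name = "projects/my-project/locations/us-central1/volumes/dev-data"

operation = client.create_snapshot(
    parent=volume_name,
    snapshot=netapp_v1.Snapshot(description="pre-upgrade checkpoint"),
    snapshot_id="pre-upgrade",
)
print(operation.result().name)  # full resource name of the new snapshot
```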
Furthermore, Assured Workloads support for NetApp Volumes simplifies the complex process of meeting and monitoring regional data boundary compliance, with controls for data residency, cryptographic access, and personnel access.
Unlock the best of cloud and storage
Google Cloud NetApp Volumes Flex is a paradigm shift for enterprise storage in the cloud. It's a no-compromise solution that blends enterprise-grade performance, unmatched flexibility, intelligent cost optimization, and robust data protection, all within a fully managed service.
Are you ready to experience the power of NetApp Volumes Flex? Get started today with a hands-on lab, explore the comprehensive product documentation, or spin up a volume directly in the Google Cloud Console to test drive this transformative service for your most demanding file workloads.
Is it getting harder to find space in your datacenter? Do you have sustainability goals that seem to get farther away? Are the performance requirements for your object storage rising?
If you paused, nodded, or even said “yes” out loud to any of those questions, then this announcement may as well have been written for you.
Models don't usually fail because the code went rogue. They fail because the data moved. Schemas shift, labels drift, "latest.csv" isn't what you think it is, and auditors (compliance regulators, or even lawsuits) show up without an invite. If you've ever retrained the same code and got a different model, you've met the real culprit: unpinned and unprovenanced data.
Almost everyone talks about solutions "at inference," which is far too late to start talking about compliance-based AI architectures. This blog post starts at the beginning, where it should... from the data scientist's perspective. How can we prove that the data, the most critical component, was used to produce the model, LLM, fine-tune, and embeddings in our solution?
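As a generic illustration of the pinning idea (not necessarily the approach the post itself describes), the sketch below records a content hash of the training file alongside run metadata, so the same code plus the same hash can later be shown to correspond to the same model.

```python
# Illustrative sketch: pin the exact bytes a model was trained on by recording a
# content hash next to the training-run metadata, so "latest.csv" cannot silently
# change underneath a result. Paths and fields are hypothetical examples.
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream the file so large datasets never need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

dataset = Path("data/latest.csv")  # hypothetical training input
manifest = {
    "dataset": str(dataset),
    "sha256": sha256_of(dataset),
    "trained_at": datetime.now(timezone.utc).isoformat(),
    "code_revision": "<output of git rev-parse HEAD>",  # pin the code too
}
Path("model_manifest.json").write_text(json.dumps(manifest, indent=2))
```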