Tech ONTAP Blogs
When was the last time you deleted old files to clean up your file system? And what about the project share your team uses? Does your company have a process for deleting old files? What about the files created by the colleague who moved on years ago? Is the 10-year-old file in your share still being used, or is it just taking up disk space, and no one cares about it anymore? Do these questions sound familiar—and you don’t have good answers? Then the auto-tiering feature of Google Cloud NetApp Volumes is for you!
File systems are a common place to store unstructured data. They hold files, and users can organize them into logical hierarchies using directories with meaningful names. Apart from file name, directory name, size, creation, last modification, and last access timestamps, there isn’t much metadata on individual files. Why was the file created? What purpose does it serve? Is it still required? How long does it need to be stored? Can we delete it? Most applications are missing a user workflow to collect that information, and file systems don’t store it.
File systems tend to get filled with more and more potentially stale and outdated files. Who knows if they’re still of value or just hold last year’s canteen menu? Users don’t delete files; why should they? Storage keeps getting bigger and cheaper. It’s more expensive to search for deletable files than to just keep storing them. It’s also less risky.
But does that data need to be stored on expensive primary storage? Isn’t there a way to store that data in a more cost-efficient way while retaining transparent access?
For files stored in Google Cloud NetApp Volumes, the answer is auto-tiering. It’s a built-in feature that moves data between the primary storage—a hot tier—and a cheaper secondary storage—a cold tier. It only moves data that hasn't been read for a configurable number of days and is considered cold. As soon as the data is accessed again, the feature calls the data back to the hot tier.
Although the hot tier is higher performing, the cold tier is more cost-efficient. You want as much data as possible stored in the cold tier to optimize cost, but you want to store all your active data in the hot tier to meet your performance targets. For every volume, you can specify how many days a file needs to be unread before it’s considered cold; the default value is 31 days. Although going lower sounds tempting, it might affect production performance and even have diminishing returns on cost. Don’t be too greedy.
Behind the scenes, sophisticated technology keeps everything working just the way you need it to. For example, it detects large sequential reads when reading from the cold tier and lets them bypass the hot tier. Otherwise, a large sequential read of the full volume (think antivirus scanner or file-based full backup) would reset the temperature of all data and mark it as hot.
Auto-tiering has been available for storage pools in the Premium and Extreme service levels for many months now. In May 2025, Flex pools finally joined the party.
We just announced the availability of auto-tiering for custom-performance Flex zonal pools as a preview. Although the functionality is basically the same as with the other service levels, there are a few subtle differences to be aware of.
In Premium and Extreme, you define volume size. As data cools off, it’s moved to the cold tier. All data in the cold tier is charged at cold tier pricing; all data in the hot tier, plus any unused space in the volume, is charged at hot tier pricing. Every GiB of hot tier contributes 64 KiBps (Premium) or 128 KiBps (Extreme) to the volume’s throughput capability, whereas every GiB of cold tier contributes only 2 KiBps. As a consequence, you might have to add empty space to the volume’s hot tier to ensure sufficient throughput capability.
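The per-GiB figures above make the sizing math easy to sketch. The helper names below are illustrative, not part of any NetApp Volumes API; they simply encode the 64/128/2 KiBps rates from this post.

```python
# Throughput math for Premium/Extreme auto-tiering:
# each GiB of hot tier adds 64 KiBps (Premium) or 128 KiBps (Extreme),
# each GiB of cold tier adds 2 KiBps.
HOT_KIBPS_PER_GIB = {"premium": 64, "extreme": 128}
COLD_KIBPS_PER_GIB = 2

def volume_throughput_mibps(service_level: str, hot_gib: float, cold_gib: float) -> float:
    """Total throughput capability of a volume, in MiB/s."""
    kibps = hot_gib * HOT_KIBPS_PER_GIB[service_level] + cold_gib * COLD_KIBPS_PER_GIB
    return kibps / 1024

def min_hot_tier_gib(service_level: str, cold_gib: float, target_mibps: float) -> float:
    """Hot tier size (data plus any empty space) needed to reach a throughput target."""
    needed_kibps = target_mibps * 1024 - cold_gib * COLD_KIBPS_PER_GIB
    return max(needed_kibps / HOT_KIBPS_PER_GIB[service_level], 0.0)

# A 10 TiB Premium volume with 8 TiB cold and 2 TiB hot:
print(volume_throughput_mibps("premium", hot_gib=2048, cold_gib=8192))  # → 144.0 (MiB/s)
# Hot tier needed to reach 256 MiB/s with the same cold tier:
print(min_hot_tier_gib("premium", cold_gib=8192, target_mibps=256))     # → 3840.0 (GiB)
```

The second call shows the point made above: if your cold tier is large and your throughput target is high, the hot tier may need to be bigger than the hot data alone, so you end up paying for empty hot-tier space.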
In Flex, performance is defined at the storage pool level, and all volumes share the pool’s performance. With Flex custom performance, you can adjust the capacity and performance of a pool independently of each other.
When you use auto-tiering, you define the pool size and the hot tier size separately. The hot tier is charged at hot tier pricing, and the cold tier is charged only for the amount of data actually stored in it.
If too much data is hot, you might have to increase the hot tier size or risk running into out-of-space errors. You can do that manually, or you can enable auto-increase of the hot tier. Auto-increase is enabled or disabled per pool; when enabled, it grows the hot tier by 10% as soon as the tier becomes full.
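Because each auto-increase adds 10% of the current size, growth compounds. The short simulation below is illustrative only; the service’s actual trigger timing and rounding are implementation details, and the function name is made up for this sketch.

```python
# Sketch of the auto-increase behavior described above: whenever the amount of
# hot data exceeds the hot tier size, the tier grows by 10% of its current size.
def grow_until_fits(hot_tier_gib: float, hot_data_gib: float) -> tuple[float, int]:
    """Return the hot tier size after repeated 10% auto-increases,
    and how many increases were needed."""
    increases = 0
    while hot_data_gib > hot_tier_gib:
        hot_tier_gib *= 1.10   # each increase adds 10% of the current size
        increases += 1
    return hot_tier_gib, increases

# A 1 TiB hot tier suddenly receiving 1500 GiB of hot data (e.g., a migration):
size, n = grow_until_fits(hot_tier_gib=1024, hot_data_gib=1500)
print(f"{size:.0f} GiB after {n} auto-increases")  # → 1649 GiB after 5 auto-increases
```

This is exactly the scenario the next paragraphs address: a bulk write can ratchet the hot tier well past what normal operations need, which is why hot tier bypass exists.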
In normal operation, the amount of hot data is determined by your active working set, which is the frequently changed data that should remain in the hot tier. A properly sized hot tier is large enough to hold the active working set with some margin.
There might be instances in which an unusually high amount of data is being written to a volume within the pool. For example, when you migrate data, all the data you write to a volume is considered hot. Because all of this data is new to the volume, NetApp Volumes cannot distinguish between actual hot data and data that hasn’t been touched for years, like old files or archived data. This could cause your hot tier to become full quickly or trigger an auto-increase, making the hot tier larger than necessary for normal operations.
For cases such as migration, Flex offers hot tier bypass. Enabled per volume, hot tier bypass tells the service to write all incoming data directly to the cold tier, which avoids flooding the hot tier or triggering an unintended auto-increase. After the migration is complete, you can switch hot tier bypass off, and normal operation resumes. Your workloads might experience reduced performance on the first data access until all active data has been called back to the hot tier.
In short:
Custom-performance Flex pools are a great tool for cost saving. You can precisely size them for your capacity and performance requirements. Their capacity pricing is already highly competitive, and if you use auto-tiering for cold data storage, the solution becomes even more cost-effective. At the same time, you can tailor their performance to your exact needs—so you’re never paying for more than you actually use.
Auto-tiering for Flex zonal storage pools is now available in preview. For more details, see Manage auto-tiering in the Google Cloud NetApp Volumes documentation. To talk to our experts on how to deploy Flex zonal storage pools, contact our Google Cloud specialists.