Data ONTAP Discussions

Any future for storage tiering?

Tom Georgens was reported on The Register as saying no more storage tiering...

I am looking to purchase a storage tiering solution, so can this be true?



Re: Any future for storage tiering?

Thinking more about the issue, I can see how SATA and PAM will work going forward: 'hot' blocks in system memory, 'warm' blocks with metadata in system memory/PAM, and then cold blocks on SATA.
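The hot/warm/cold idea above can be sketched as a simple three-level read path. This is just an illustrative toy, not NetApp internals; the dict-based "tiers" and the promotion rules are assumptions for the sake of the example.

```python
def read_block(block_id, ram, pam, sata):
    """Toy three-level read path: hot blocks in system memory (ram),
    warm blocks in a flash layer (pam), cold blocks on disk (sata).
    Each tier is modelled as a plain dict mapping block_id -> data."""
    if block_id in ram:
        return ram[block_id], "hot (RAM)"
    if block_id in pam:
        data = pam[block_id]
        ram[block_id] = data        # warm block gets promoted to memory
        return data, "warm (flash)"
    data = sata[block_id]           # cold path: read from slow disk
    pam[block_id] = data            # cache it in flash for next time
    return data, "cold (SATA)"
```

Read the same block three times and it climbs the hierarchy by itself: first read comes off SATA, the second off flash, the third out of memory, with no admin moving anything.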

What do you think?

Re: Any future for storage tiering?

While I would never presume to speak for Tom, the way I read this is not that data won't live on appropriate-speed media.  If that's your idea of tiering, then cool.

What he's saying is dying is the idea of having software manually move datasets between tiers.  I don't want to have to move my database from SSD to FC/SAS to SATA and vice versa.

Rather, imagine a world where everything really lives on SATA or some equivalent cheap/deep/slower storage and Flash (whether PAM, SSD, whatever) is used as a read (and potentially write) cache to speed up the exact workload that is being used.

This is kind of like the HSM (hierarchical storage management) idea.  It was a huge buzzword in its day and largely didn't work because people wanted the HS, but no one wanted the M part.  Caching algorithms do that automatically, so why implement yet another software layer to try to figure out when to move datasets, and then go through the pain of moving them, when caches basically do the same thing for free (since just about every storage system does some caching today)?
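To make the "caches do the M part for free" argument concrete, here is a minimal LRU read cache sketch. It assumes a dict-like backing store standing in for SATA; real array caches (PAM included) are far more sophisticated, so treat this purely as an illustration of automatic promotion and eviction.

```python
from collections import OrderedDict

class BlockCache:
    """Toy LRU read cache: frequently read blocks are served from a
    small fast tier; everything else falls through to the backing
    store. No policy engine, no dataset moves -- recency alone
    decides what is 'hot'."""

    def __init__(self, capacity, backing_store):
        self.capacity = capacity
        self.backing = backing_store   # e.g. dict simulating SATA
        self.cache = OrderedDict()     # block_id -> data, LRU order

    def read(self, block_id):
        if block_id in self.cache:
            self.cache.move_to_end(block_id)  # hot: refresh recency
            return self.cache[block_id], "cache"
        data = self.backing[block_id]         # cold: slow path
        self.cache[block_id] = data           # promoted automatically
        if len(self.cache) > self.capacity:
            self.cache.popitem(last=False)    # evict least-recent
        return data, "disk"
```

The point of the sketch: a block becomes "hot" just by being read, and falls back to "cold" just by being ignored, which is exactly the dataset movement HSM needed a whole management layer to accomplish.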

So I suppose it depends on what you mean by storage tiering.  As I see it, what Tom is talking about is the idea of moving whole datasets between tiers using specialized software.  Caching can accomplish the same thing, but much more simply.

Just my take.  This is an interesting discussion...looking forward to seeing what folks think.

Re: Any future for storage tiering?

I have run into this discussion a few times with my customers. It starts with "Today I am doing storage tiering with software X, which talks to software Y, which then needs to interact with the storage devices and the hosts as well. It works great," until you have 15 moving parts to worry about: software or appliances facilitating the actual movement of data to different tiers of storage, and yet another group of policies to manage for this process. This then drives stub growth on primary storage systems, and possibly another set of software suites to manage that! Most of the time, this also turns into orphaned files that may or may not be corrupt.

In environments where complexity is being reduced, why not have one tier of low-cost disk with high-performance PAM cards, and not have to worry about having my data on FC drives today, or about policy enforcement to move the data to other tiers, etc.? I truly believe that this concept of storage tiering will end as Tom is saying, and my customers are beginning to see this as the right step forward.

Re: Any future for storage tiering?


Some interesting comments appeared there:

Personally, though, I think that e.g. Compellent tiering is really straightforward, and even if it is not technically superior to the PAM/SATA combo, its concept is just easier for an average customer to grasp.

To me the bottom line is that any sort of tiering and/or ILM has a chance of actually working only if it requires minimal (or no) intervention from the admin/user side.



Re: Any future for storage tiering?

I would tend to agree with this, but the big kicker for us is that we use MetroCluster, which is currently only supported with FC-AL drives.

It's also about more than just what kind of storage your files are sitting on: there are issues around not constantly backing up the same files over and over again, and also compliance.

Re: Any future for storage tiering?

Given our recent announcement of NetApp DataMotion for Volumes, I thought I would reopen this discussion.

Many vendors are promoting tiering solutions of one form or another. You have read in this thread about NetApp's position on intelligent caching vs. traditional tiering, and you can read further in this article.

Putting aside the tiering discussion, which is primarily focused on I/O performance, there are other reasons why you might want to move data from one set of disks to another.  As your storage capacity scales to meet data growth, you may experience hot spots on specific aggregates or volumes. The ability to non-disruptively move volumes from highly active disks to less active disks can improve overall system performance.

Additionally, you may run into a situation where you have multiple volumes on a single aggregate and, due to data growth, need to move one volume to an aggregate with fewer volumes in order to expand that particular data set.

Other reasons include hardware servicing and upgrades.

NetApp DataMotion for Volumes addresses requirements around non-disruptive operations, which include more than just performance-related activities.

Here is a great TR on using NetApp DataMotion for Volumes with enterprise applications, such as SQL.