AFF

SSD and SATA

chriskranz

Firstly, I'm a big fan of the new kit; it all looks really well planned out, and I like some of the new features and onboard goodies!

 

One question that has come up a few times for me: mixing SSD and SATA seems like it would make perfect sense, especially as a lot of other vendors are pushing this. What are the challenges going to be here? Are we going to see automated tiering or custom read-cache configuration (as in, can we assign some SSD drives to read cache, similar to what PAM offers us)?


3 REPLIES

scottgelb

SSD should bypass Flash Cache if both are on the same system, but SSD will not act as a cache for SATA or any other aggregates... The best practice I saw was not to mix SSD and SATA on the same controller (a similar debate about not mixing FC/SAS with SATA has come up on here too), most likely because CP events taking longer on SATA could slow down the SSD aggregates.

woods

Confirming what Scott Gelb wrote, data from SSDs will be excluded from Flash Cache (PAM II). Only data from rotating media will find its way into Flash Cache. That said, data from SSDs is *not* excluded from the first level of read cache (aka the WAFL buffer) in controller memory.

(For more info about intelligent caching in NetApp systems, see WP-7107: http://media.netapp.com/documents/wp-7107.pdf.)
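To make that two-level behavior a little more concrete, here is a small conceptual sketch in Python. The class and names are invented for illustration only, and this is not how Data ONTAP is actually implemented; the point is simply that every read can land in the first-level buffer cache, while only blocks that came from rotating media are ever admitted to the flash-based second level, and eviction there is an overwrite rather than a data migration.

from collections import OrderedDict

class TwoLevelReadCache:
    """Conceptual sketch of a two-level read cache (not actual ONTAP internals)."""

    def __init__(self, buffer_blocks, flash_blocks):
        self.buffer = OrderedDict()   # level 1: controller memory (WAFL buffer cache)
        self.flash = OrderedDict()    # level 2: Flash Cache (PAM II)
        self.buffer_blocks = buffer_blocks
        self.flash_blocks = flash_blocks

    def read(self, block_id, media):
        """media is 'ssd' or 'hdd' -- the kind of aggregate the block lives on."""
        if block_id in self.buffer:
            self.buffer.move_to_end(block_id)          # most recently used
            return "hit: buffer cache"
        if block_id in self.flash:
            self.flash.move_to_end(block_id)
            self._admit_to_buffer(block_id, media)
            return "hit: Flash Cache"
        # Miss: read from disk or SSD, then cache in controller memory.
        self._admit_to_buffer(block_id, media)
        return "miss: read from " + media

    def _admit_to_buffer(self, block_id, media):
        self.buffer[block_id] = media
        if len(self.buffer) > self.buffer_blocks:
            evicted, evicted_media = self.buffer.popitem(last=False)
            # Only data from rotating media is admitted to the second level;
            # blocks that already live on SSD are simply dropped.
            if evicted_media == "hdd":
                self.flash[evicted] = evicted_media
                if len(self.flash) > self.flash_blocks:
                    self.flash.popitem(last=False)     # cold block overwritten, not moved back

So SSD-resident blocks still benefit fully from the first-level cache; they are only skipped at the Flash Cache admission step, where they would gain nothing.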

Regarding auto-tiering software, the main idea is to optimize performance and cost by putting your hot data on fast media and your less-used data on lower-cost media. NetApp solves this problem in a different way, with intelligent caching (e.g. Flash Cache / PAM II), which is more effective than auto-tiering in several respects.

First, newly hot data blocks are copied into the NetApp read cache in real time.  Several hours (even days) may pass before auto-tiering software initiates the movement of hot data from the SATA tier to the SSD tier.  Later, when a block goes cold, it is simply evicted from Flash Cache (actually written over) as hotter data is cached, rather than being moved again from SSD back to SATA.

Second, the data granularity is much finer with NetApp caching than with auto-tiering software.  The smallest data chunk I am aware of in auto-tiering implementations is 512 KB, and chunks can get as large as 1 GB!  So cold data is likely to be promoted along with the hot data.  In contrast, the granularity is 4 KB when data is copied from disk into a NetApp intelligent read cache.
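To put rough numbers on that: a 512 KB chunk holds 128 of those 4 KB blocks, so promoting one hot 4 KB block can drag up to 127 cold blocks onto the SSD tier along with it; with a 1 GB chunk, a single hot block can pull roughly 262,000 cold blocks up with it.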

Third, the burden on the storage controller and the traffic in the storage subsystem are lower with intelligent caching than with auto-tiering. This translates into better system performance, because the storage controller is not burning a lot of cycles moving data back and forth between tiers.  And data that is served out of the read cache in the storage controller (as opposed to from an SSD) reduces back-end traffic, so writes and other reads that must go to disk are executed faster.

Regarding custom read cache configuration: Flash Cache is like an automatic transmission that still lets you shift the gears if you prefer. (I've got one of those in my VW Passat.)  You can use the FlexShare feature of Data ONTAP to give caching priority to some volumes and to starve others.  For more info about customizing the read cache configuration, see the Storage Nuts & Bolts blog (http://blogs.netapp.com/storage_nuts_n_bolts/2010/04/index.htm).
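For what it's worth, the FlexShare side of that is just a couple of commands on a 7-Mode system. Something like the following (volume names are made up, and please check the exact syntax and option values against the FlexShare documentation, TR-3459, or the blog post above before relying on it) raises the cache priority of one volume and tells the system to reuse another volume's cached blocks first:

  priority on
  priority set volume dbvol level=VeryHigh cache=keep
  priority set volume archvol level=Low cache=reuse
  priority show volume

The cache=keep setting asks the system to hold on to that volume's blocks in cache, while cache=reuse makes that volume's blocks the first candidates to be recycled.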

All of this raises the question of why NetApp bothered to introduce SSDs at all. The reason is that some workloads require every read from the storage system to be fast. The only way to guarantee low latency on every read is to put the volume or LUN on SSDs.  Not many workloads have this requirement.  Flash Cache does the job for the majority of workloads that would benefit from SSDs, at lower cost and with no administration.

chriskranz

Thank you woods, that is a very comprehensive explanation!!!
