ONTAP Hardware

V-Series and FAST Cache

tom_maddox

Hopefully I don't start a religious war here, but I have a question about interoperability between the V-Series filers (a V3160 in our case) and FAST Cache on the EMC Clariion CX4 series, specifically whether it's a good idea to enable FAST Cache on LUNs that are presented to the filers. More specifically, we have, by and large, two classes of data: unstructured data (files) and virtual machines. Would enabling FAST Cache for either class of data potentially improve performance?

Thanks,

Tom

19 REPLIES

isaacs

Hi Tom,

The short answer is that we don't yet fully understand the performance implications of a FAST-Cache enabled storage pool.

The long answer is that our testing to this point has been primarily to ensure interoperability with that feature.  My job this summer is to gauge the performance benefits and develop a set of recommendations for optimal performance.  We should have a Tech Report (TR) out in late summer or early fall.  I'll be sure to let you know as soon as we do.

Dan Isaacs

TME - V-Series
isaacs@netapp.com

tom_maddox

Thanks, Dan, it's good to know you guys are working on this. My understanding of how FAST Cache works (and this is as a non-specialist) is that the SAN analyzes usage of individual blocks in a FAST Cache-enabled LUN and promotes frequently-used blocks to the flash drive cache. EMC also mentions that FAST Cache works best with I/O that is non-sequential but has medium to high locality. Given what you know about WAFL, would you guess that enabling FAST Cache could be worthwhile, or would it be a waste of resources? I ask because we have immediate performance problems, and waiting 3-6 months for a recommendation is not ideal.
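
To make my mental model concrete, here is a toy sketch of the promotion idea (purely hypothetical on my part - the actual promotion logic and threshold are EMC's, and I don't know them):

    PROMOTE_THRESHOLD = 3   # hypothetical promotion threshold

    hit_counts = {}         # block address -> access count
    flash_cache = {}        # block address -> data currently held on SSD

    def read_block(addr, read_from_disk):
        if addr in flash_cache:              # hit: served from flash
            return flash_cache[addr]
        hit_counts[addr] = hit_counts.get(addr, 0) + 1
        data = read_from_disk(addr)          # miss: go to the spindles
        if hit_counts[addr] >= PROMOTE_THRESHOLD:
            flash_cache[addr] = data         # promote a frequently-used block
        return data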

Thanks again,

Tom

isaacs

My hypothesis is that it would not help that much with WAFL.  Or at least, not as much as having a FlashCache module in the V-Series would.  But I really want to test it to be sure.

Has a NetApp support case been opened for your performance issue?  Has a bottleneck been identified?

tom_maddox

The bottleneck is the back-end disks, which leads to high DRAM cache utilization and degraded performance during usage spikes. We can throw more disks at the problem, but we would prefer a less wasteful solution.

isaacs

Is there a FlashCache card already installed in the V-Series?  Is it primarily reads that are the problem?

tom_maddox

We do have FlashCache installed, which has somewhat remedied the performance issues, but we still see write cache saturation on the filers from time to time, which significantly impacts latency.

radek_kubka

My 2 cents:

EMC FAST Cache is about random read & *write* caching, whilst NetApp Flash Cache works for reads only.

NetApp Flash Pool in ONTAP 8.1.1 will be a direct equivalent of EMC FAST Cache, but it is limited to NetApp native disks.

Regards,

Radek

tom_maddox

Here is my experience so far. I turned on FAST Cache for a single-plex aggregate hosting an NFS-mounted VMware datastore. The write hit ratio is hanging out at about .125, and the read hit ratio at around .65 (both estimates are very rough, based on a quick eyeball of the performance chart), with variances from .5 up to .89 for reads and from .05 to .36 for writes. What's interesting (to me, anyway) is that the ratios are almost precisely reversed for the SP Cache: read hits are down in the same area as FAST Cache write hits, while write hits are actually even higher than FAST Cache read hits. Looking at the hits per second, FAST Cache read hits are significant, at >40, while everything else pales in comparison.

Overall, though, it seems as though roughly 80% of reads are coming out of either SP Cache or FAST Cache (mostly the latter), while almost all of the writes are hitting cache at some point. I know that conventional wisdom is that WAFL doesn't really benefit from write caching (or so I have read), but given that the LUN service time is relatively minuscule (topping out at 4 ms and generally staying between 1 and 2 ms), while the response time is generally >10x that amount, most of the data must be coming from cache, so FAST caching does seem to be boosting read performance. That's my conclusion, anyway, speaking as very much a novice when it comes to storage performance tuning.
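
As a rough sanity check on that reasoning, here is a back-of-the-envelope calculation with made-up but (I hope) plausible numbers for the cache and disk service times:

    # effective latency = hit_ratio * cache_time + (1 - hit_ratio) * disk_time
    hit_ratio = 0.80     # ~80% of reads served from SP Cache or FAST Cache
    t_cache_ms = 0.5     # assumed flash/DRAM cache service time
    t_disk_ms = 8.0      # assumed rotational-disk service time
    print(hit_ratio * t_cache_ms + (1 - hit_ratio) * t_disk_ms)   # -> 2.0 ms

A 2 ms blended figure lines up with the 1-2 ms service times I am actually seeing, which is what makes me think the cache is doing the heavy lifting.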

Anyone have any feedback, critique, etc.?

isaacs

Interesting!

With any performance test, it's important to understand the variables in play.

1.  How big was the dataset?

2.  What tools were used to drive the load?

3.  What options were set on the load generator?  (block size, # of threads)

We would also be interested in seeing Perfstat output comparing the same tests run with FAST Cache enabled vs. disabled.  This at least lets us look at how WAFL is benefiting from the faster storage.

Let us know if you need help getting Perfstat running, or need more information about it.

Thanks Tom!

tom_maddox

The overall dataset is about 250 GB of desktop virtual machines running an assortment of software. I can run perfstat against the filer for comparison. What options should I use?

isaacs

Were the desktops deduped?  How much space within ONTAP were they using?  Just making sure, since if they were deduped, you may have as little as 20 GB of data, which would fit entirely in both our cache and the array's.

For perfstat, run it with these options:  -F -S -f [hostname] -l [login] -t 1 -i 5
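
For example, a full invocation might look like this (substitute your own filer hostname and admin login; redirecting the output to a file is just my suggestion for capturing each run):

    perfstat -F -S -f myfiler -l root -t 1 -i 5 > perfstat_fastcache_on.out

Capture one run with FAST Cache enabled and one with it disabled so we can compare the two.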

What are you using to drive the load?  Is this just a subset of production desktops?  Or is the load artificial?

tom_maddox

We are deduping the desktops, with an efficiency of approximately 34%, so space consumption on disk was in the vicinity of 175 GB. This is standard desktop traffic, not an artificial load.

I will run perfstat and post the results.

tom_maddox

Where should I park the perfstat output?

isaacs

Hi Tom,

You can email it to me if it's not too big.  isaacs @ netapp.com

radek_kubka

"I know that conventional wisdom is that WAFL doesn't really benefit from write caching (or so I have read)"

Yes, that's really interesting (or puzzling), indeed.

The slide deck about NetApp Flash Pool differentiates between random writes (not benefiting from Flash Pool) & random overwrites (benefiting from Flash Pool). Whilst I think I understand the difference between the former & the latter, I always thought WAFL (write anywhere, hello?) could accelerate all types of writes.

isaacs

To a point, yes, we can accelerate any write.  From the start, NetApp has always written to cache (system memory) and logged the writes to NVRAM.  Since the writes are logged to a battery-backed set of DIMMs, we can send the ACK back to the host, bypassing the disk spindles altogether.

However, if we can't de-stage cache quickly enough (before it fills up again), then we are at the mercy of the spindles.  That is where we expect FlashPools to help.  In most cases, FlashPools will allow us to get SAS-drive performance out of SATA devices.  FlashCache currently does this for reads, but writes are still constrained by the slower SATA spindles.  FlashPools will do it for both.
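
In rough pseudocode, the idea looks something like this (a deliberately simplified sketch, not actual ONTAP internals):

    memory_cache = {}   # system memory acting as the write cache
    nvram_log = []      # battery-backed NVRAM journal

    def host_write(addr, data):
        memory_cache[addr] = data         # 1. write lands in system memory
        nvram_log.append((addr, data))    # 2. logged to NVRAM
        # 3. ACK the host now -- no spindle I/O on the write path

    def consistency_point(write_to_disk):
        # De-stage: flush dirty blocks to disk, then free the NVRAM log.
        # If this can't finish before memory fills again, the spindles
        # become the bottleneck -- which is where FlashPools should help.
        for addr, data in memory_cache.items():
            write_to_disk(addr, data)
        memory_cache.clear()
        nvram_log.clear()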

radek_kubka

Hi Daniel,

I (think) I know the theory. What baffles me, though, is the distinction between 'random writes' & 'random overwrites' - in the context of WAFL, there should be no difference between the two.

Regards,

Radek

aborzenkov

"What baffles me, though, is the distinction between 'random writes' & 'random overwrites' - in the context of WAFL, there should be no difference between the two."

Well... just a random commuter's thoughts (you have to do something on the way).

First, let's accept that Flash Pool accelerates writes. This sounds plausible, at least for some workloads (e.g., under a heavy sustained random read workload).

Now, to benefit from Flash Pool, a CP should be considered complete as soon as the data is on SSD. On the other hand, Flash Pool is just a cache, isn't it, so the data must at some point be moved to rotational disks. So by design we have some delay. If a block is rewritten after it has been saved to SSD but before it has been copied onto rotational disks, we have effectively saved at least one disk I/O.
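
In sketch form (illustrative only):

    pending = {}    # blocks already on SSD, waiting to be de-staged to HDD

    def cached_write(addr, data):
        pending[addr] = data          # a rewrite just replaces the pending copy

    def destage(write_to_hdd):
        for addr, data in pending.items():
            write_to_hdd(addr, data)  # each block hits HDD once, no matter
        pending.clear()               # how often it was rewritten meanwhile

    cached_write(100, "v1")
    cached_write(100, "v2")           # rewritten before de-stage
    destage(lambda a, d: None)        # only "v2" ever reaches rotational disk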

This is blurred by snapshots, where we obviously cannot just throw away unsaved blocks. But it is still good as a marketing argument.

radek_kubka

"If a block is rewritten after it has been saved to SSD but before it has been copied onto rotational disks, we have effectively saved at least one disk I/O."

That obviously makes sense.

So are we saying this is the only way Flash Pool improves write performance, i.e. by offloading very frequent random writes to a fairly small working set which fits into Flash Pool?
