ONTAP Discussions

Has anyone enabled 'Free Space Reallocate' and 'Read Reallocate' in ONTAP 8.1.1 ?

ASHWINPAWARTESL
10,938 Views

My understanding is that filers with good dedupe savings will develop 'fragmentation' in the data layout, which will eventually lead to:

1. slow sequential reads
2. high disk utilization, which in turn may cause latency
3. non-contiguous free space, which will affect writes.


In that case, the following options in 8.1.1 seem attractive.

filer> aggr options aggr_name free_space_realloc on
filer> vol options vol_name read_realloc space_optimized

Note: space_optimized option is synonymous with the physical reallocation method.

According to TR-3929 (Reallocate Best Practices), these are complementary technologies that help maintain an optimal layout: read reallocate optimizes the system for sequential reads on the fly, while free space reallocate optimizes for writes.


But read reallocate is a volume option that performs opportunistic reallocation of data to improve read performance. So I believe it again depends on the read/write ratio of your application, and therefore you need to be selective about enabling this option on a particular volume?


Any thoughts folks?

12 REPLIES

TOMASYAWORKS

Mmmm 182 views, no answers

Let me ask this question again: Has anyone enabled 'Free Space Reallocate' and 'Read Reallocate' in ONTAP 8.1.1 ?

I am wondering whether you are using reallocation schedules, and if so, whether you have also tried Free Space Reallocation.

Tomas

timo_puronen

As TR-3929 states, the best practice is to enable them both. However, it all comes down to sequential vs. random reads/writes. I've disabled those options on high-IO volumes holding random data, since performance went down. For random read/write volumes I've used scheduled reallocation (e.g. once a week) at off-peak hours instead. The outcome is satisfactory; regular scheduled reallocation also helped me reduce the dedupe cycle time.

System: FAS3240

ONTAP: 8.1.3P2

I hope it helps.
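For reference, a weekly scheduled physical reallocation like the one Timo describes might look roughly like this in 7-Mode (vol_name is a placeholder, and the schedule-string format should be checked against the reallocate man page for your release):

filer> reallocate on
filer> reallocate start -p /vol/vol_name
filer> reallocate schedule -s "0 23 * 6" /vol/vol_name

The -p flag requests physical reallocation, which is the variant that plays well with deduped volumes and snapshots.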

colin_graham

I have enabled these options on a recommendation from support, after running into issues with aggregate fragmentation and write performance (CP reads 4x higher than writes in the aggr).

CPU usage is higher, as there are always redirect scans going on in the background, but overall performance is better, with the controller no longer choking on bursty write workloads (although this is difficult to quantify, as we upgraded from 8.0.2 to 8.1.2 at the same time).

I just enabled read_realloc for all vols in the aggr, but I would be interested to hear more about being a bit more selective. Would large NFS VM datastores count as random or sequential?

timo_puronen

I would consider a VM datastore random. I tried enabling free_space_realloc on the aggregate and read_realloc on the VM volume, but had to turn them off because of constant performance issues. I would suggest scheduled physical reallocation for these volumes instead.

Best regards,

Timo

colin_graham

What kind of performance issues did you run into? We are running OK here (bar the higher CPU, although a 6210 has plenty to burn).

Thanks

TOMASYAWORKS

Thanks guys! This helps a lot, especially since you named your environments.

We have 3240, 3220 and 3160 NetApps. CPU < 60%.

I will start free space reallocation on our CIFS volumes and keep my eye on this one.

Tomas

ASHWINPAWARTESL

Nice to see this thread revived after being in a coma for almost 6 months. Thanks Tomas.

Thanks all for sharing your experience; it is indeed helpful. The TRs are fine on their own, but it helps a great deal to hear from live production environments, so keep sharing your experiences.

Basically, for sequential read/write loads, enabling both options makes sense, because it makes way for the contiguous free space and read optimization such workloads need. For random write workloads, however, it does not help, because there is no concept of a sequential write in WAFL (NetApp writes anywhere on the aggregate in an effort to reduce write-to-disk latency). But ONTAP does try its best to find FULL STRIPES for writes, so scheduled reallocation can help create the contiguous free space that WAFL full stripes require. I guess it will not matter while there is plenty of free space on the storage array, but if the array is approaching saturation and lots of scattered free space has accumulated over time, then scheduled reallocation will certainly help.
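Before scheduling a full pass, it's worth checking whether a volume's layout has actually degraded. A hedged sketch (vol_name is a placeholder, and option details vary by release):

filer> reallocate measure -o /vol/vol_name
filer> reallocate status -v /vol/vol_name

The measure job logs an optimization rating, so you only schedule full reallocation on volumes where fragmentation is real rather than assumed.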

It is interesting because data is written to WAFL in a fragmented manner to gain WRITE performance, which can hurt you during sequential reads, and I guess that is where read_realloc can help optimize for sequential read workloads?

This leads me to this analogy...

RANDOM/SEQUENTIAL WRITES = already optimized by the WAFL architecture (NVRAM/memory), but when disk saturation kicks in, free space reallocation can help improve write performance by avoiding back-to-back CPs.

RANDOM READ  = FLASH CACHE

SEQUENTIAL READ = read_realloc

Thanks,

-Ashwin

madden

Hi Ashwin,

Your analogy is fine except for sequential reads. For those it is more nuanced and depends on how the data was written: read realloc helps the sequential-read-after-random-write use case. Let me explain.

When data is written, WAFL coalesces the writes and sends them to disk as efficiently as possible, retaining the ordering of sequential writes within that cycle. So for a pure sequential workload (writes and subsequent reads) the data will be more or less sequential on disk. But if you have random writes (which WAFL handles very well, due to that same coalescing) and then read them back sequentially, they won't be sequential at the disk layer. Read reallocate basically fixes this up, making the physical order on disk match the logical order of the file/LUN from the client's perspective.

With the read_realloc vol option, the reallocate becomes on-demand: when the system detects that a sequential read was issued and the data wasn't sequential on disk, it writes the data out again (which makes it a sequential write, and next time a sequential read at the disk layer too). So enabling read_realloc has a cost, re-writing randomly written data to be sequential as you go, but only after the system has observed that a client actually reads the data sequentially, and with the data already in controller memory.

Compare this to a scheduled reallocate job, which walks each file/LUN from start to finish and makes the physical order on disk match the logical order. That job creates more IO (it has to issue reads instead of leveraging data already in memory), and it reorders the entire file/LUN regardless of whether it is actually read sequentially.

Maybe that makes sense in some situations, for example a DB batch job that runs on Friday doing sequential reads of data written randomly all week, where you want it to run fast the first time. Otherwise it is going to be more efficient and targeted to use the read_realloc vol option. Especially if you have a LUN with a mix of sequential and random workloads on it (think VMFS with many VMs), the read_realloc vol option is much more efficient, only re-writing data where it provides benefit. Hope that helps!
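Put as commands, the two approaches Chris contrasts look roughly like this in 7-Mode (vol_name is a placeholder):

filer> vol options vol_name read_realloc space_optimized
filer> reallocate start -f -p /vol/vol_name

The first is the on-demand path, rewriting blocks only after a sequential read of non-sequentially-stored data is observed; the second is the full-pass job, which reorders the whole file or LUN regardless of the read pattern.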

Cheers,

Chris Madden

Storage Architect, NetApp

timo_puronen

My customer experienced constant 100% CPU load. sysstat -M didn't show 100% on all cores, though, but around 80% on each core, and in the end the dedupe process didn't finish within 24h. There was no significant impact on serving user data, but the customer was not able to monitor the subsystem over SNMP. After I reallocated the LUNs/volumes, corrected some misalignment issues, and went through the dedupe/reallocation schedules, system behavior was back to normal (CPU < 50%, peaks at ~90%).

BR,

Timo

colin_graham

Thanks for the reply; that makes more sense. The controllers treat the reallocate procedures as low-priority tasks, along with dedupe/SNMP, and throttle them accordingly to maintain good service serving data.

Our CPU (as per DFM/sysstat) constantly shows 99-100%, but sysstat -m (or sysstat from PowerShell) shows the true figure, which is nearer 40-60% across 8 cores.
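For anyone comparing the numbers, the two views mentioned in this thread are roughly these (a sketch; exact columns and privilege requirements vary by release):

filer> sysstat -x 1
filer> sysstat -m 1

The headline CPU column in the first reflects the busiest resource, which is why it can sit at 99-100% while the per-processor breakdown in the second shows much lower utilization on each core.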

I believe the latest version of DFM actually does not monitor CPU at all by default, as it is no longer an accurate measure of system load on these modern multi-core systems.

timo_puronen

Yep, that's correct.

One thing I forgot to mention: when read_realloc and free_space_realloc were both enabled on volumes containing VM datastores, there were some performance issues, i.e. impact on serving user data. That's why I went the other way and chose scheduled reallocation instead.

BR,

Timo

chris_mckean

Hi Guys,

      This is a great thread. As Ashwin says, it's all well and good reading TRs, but sometimes you need to hear a real person's experience implementing these suggestions from NetApp. We currently have a 4-node 3250 cluster and are seeing performance issues. We've got both Hyper-V and VMware volumes on this cluster, and we have free_space_realloc switched on for all our aggregates.

We have read_realloc set to space_optimized for all of our volumes. Our VMware and Hyper-V volumes are all deduped, and I believe the space_optimized flavor of read realloc is supposed to work with deduped volumes. How does everyone monitor filer performance?

I've found all of the NetApp tools to be rubbish for monitoring performance. We tried OnCommand Insight Perform and it looked great until we had an issue with filer performance and the leading people at NetApp couldn't tell us what went on with the tool. It looks like it misreported latency at the time of the issue.

Cheers

Chris
