2013-06-03 04:52 AM
My understanding is that filers with good dedupe savings will develop 'fragmentation' in the data layout, which will eventually lead to:
1. slow sequential reads
2. high disk utilization, which in turn may cause latency
3. non-contiguous free space, which will affect writes
In that case, the following options in 8.1.1 seem attractive:
filer> aggr options aggr_name free_space_realloc on
filer> vol options vol_name read_realloc space_optimized
Note: the space_optimized option is synonymous with the physical reallocation method.
According to TR-3929 (Reallocate Best Practices), these are complementary technologies that help maintain optimal layout: read reallocate optimizes the system for sequential reads on the fly, while free space reallocate optimizes for writes.
But read reallocate is a volume option that performs opportunistic reallocation of data to improve read performance. So I believe it again depends on the read/write ratio of your application; you therefore need to be selective before enabling this option on a particular volume?
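For example, before enabling it on a volume, one could sample that volume's read/write mix from the counters first. This is just a sketch; the exact counter names can vary by Data ONTAP release (verify with 'stats list counters volume'), and 'myvol' is a placeholder:
filer> stats show volume:myvol:read_ops volume:myvol:write_ops
Only if reads clearly dominate would I then consider:
filer> vol options myvol read_realloc space_optimized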
Any thoughts folks?
2013-11-28 04:58 AM
Mmmm 182 views, no answers
Let me ask this question again: Has anyone enabled 'Free Space Reallocate' and 'Read Reallocate' in ONTAP 8.1.1 ?
I am also wondering whether you are using reallocation schedules, and if so, whether you combine them with Free Space Reallocation.
2013-11-29 12:15 AM
As TR-3929 states, the best practice is to enable them both. However, it all comes down to sequential vs. random reads/writes. I disabled those options on random-I/O data volumes with high IOPS because performance went down. For random read/write volumes I used scheduled reallocation instead (e.g. once a week) during off-peak hours. The outcome is satisfactory; regular scheduled reallocation also helped me reduce the dedupe cycle time.
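For reference, the scheduled physical reallocation I'm describing looks roughly like this in 7-Mode syntax ('dbvol' is a placeholder, and please check the schedule string format against the reallocate man page on your release, as I'm quoting the field order from memory):
filer> reallocate on
filer> reallocate start -p /vol/dbvol
filer> reallocate schedule -s "0 23 * 6" /vol/dbvol
The -s string is cron-like (minute, hour, day of month, day of week), so the schedule above is intended to run weekly at 23:00, off-peak.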
I hope it helps.
2013-11-29 02:06 AM
I have enabled these options, on a recommendation from support, after running into issues with aggregate fragmentation and write performance (CP reads 4x higher than writes in the aggregate).
CPU usage is higher, as there are always redirect scans going on in the background, but overall performance is better, with the controller no longer choking on bursty write workloads (although this is difficult to quantify, as we upgraded from 8.0.2 to 8.1.2 at the same time).
2013-11-29 02:52 AM
I would consider a VM datastore random. I tried enabling free_space_realloc on the aggregate and read_realloc on the VM volume, but had to turn them off because of constant performance issues. I would suggest scheduled physical reallocation for these volumes instead.
2013-11-29 03:21 AM
What kind of performance issues did you run into? We are running OK here (bar the higher CPU, although a 6210 has plenty to burn).
2013-11-29 03:26 AM
Thanks guys! This helps a lot, especially since you named your environments.
We have NetApp 3240, 3220 and 3160 filers, all at CPU < 60%.
I will start free space reallocation on our CIFS volumes and keep an eye on this one.
2013-11-29 04:08 AM
Nice to see this thread revived after being in a coma for almost 6 months. Thanks Tomas.
Thanks all for sharing your experience; it is indeed helpful. The TR is fine as far as it goes, but it helps a great deal to hear from live production environments, so keep sharing your experiences.
Basically, for sequential read/write loads, enabling both options makes sense, because they create the contiguous free space and read optimization such workloads need. For random write workloads, read reallocation does not help, because there is no concept of a sequential write in WAFL (NetApp writes anywhere on the aggregate in an effort to reduce write-to-disk latency). However, ONTAP does try its best to find FULL STRIPES for writes, so scheduled reallocation can help create the contiguous free space that WAFL full stripes require. I guess it will not matter much while there is plenty of free space on the array, but if the array is approaching saturation and lots of small free-space fragments have accumulated over time, then scheduled reallocation will certainly help.
It is interesting that, because data is written to WAFL in a fragmented manner to gain write performance, it can hurt you during sequential reads, and I guess that is where read_realloc can help optimize sequential read workloads?
This leads me to this analogy...
RANDOM/SEQUENTIAL WRITES = already optimized by the WAFL architecture (NVRAM/memory), but when disk saturation kicks in, free space reallocation can help improve write performance by avoiding back-to-back CPs.
RANDOM READS = Flash Cache
SEQUENTIAL READS = read_realloc
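And before enabling either option, I suppose one could first measure how fragmented a volume actually is ('myvol' is a placeholder, and how the reported value is scaled may depend on the release, so check the man page):
filer> reallocate measure /vol/myvol
filer> reallocate status -v /vol/myvol
As I understand it, an optimization value near 1 means the layout is close to optimal, and higher values indicate more fragmentation.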
2013-11-29 05:44 AM
My customer experienced constant 100% CPU load. sysstat -M didn't show 100% on every core, though, but around 80% per core, and in the end the dedupe process didn't finish within 24h. There was no significant performance impact on serving user data, but the customer was not able to monitor the system over SNMP. After I reallocated the LUNs/volumes, corrected some misalignment issues and reworked the dedupe/reallocation schedules, the system's behavior was back to normal (CPU < 50%, with peaks around 90%).
2013-11-29 07:54 AM
Thanks for the reply - that makes more sense. The controllers treat the reallocate procedures as low-priority tasks, along with dedupe/SNMP, and throttle them accordingly to maintain good service while serving data.
Our CPU (as reported by DFM/sysstat) constantly shows 99-100%; however, sysstat -m (or sysstat from PowerShell) shows the true figure, which is nearer 40-60% across 8 cores.
I believe the latest version of DFM actually does not monitor CPU at all by default, as it is no longer an accurate measure of system load on these modern multi-core systems.
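For anyone wanting to compare the two views side by side, both are standard 7-Mode commands (though flag behavior can differ slightly between releases):
filer> sysstat -x 1
filer> sysstat -M 1
The first shows the blended CPU figure that DFM reports; the second breaks utilization down per core/domain, which is the truer picture on these multi-core controllers.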