Active IQ Unified Manager Discussions

Yet another reallocate Q

rkaramchedu1
4,198 Views

Greetings!

Threads already referenced on this topic:

https://communities.netapp.com/message/49600#49600

https://communities.netapp.com/message/20969

http://now.netapp.com/NOW/knowledge/docs/ontap/rel80/html/ontap/cmdref/man1/na_reallocate.1.htm

I have a volume that yields the following upon a "reallocate measure":

      [afiler: wafl.reallocate.check.highAdvise:info]: Allocation check on '/vol/dbvol' is 23, hotspot 0 (threshold 4), consider running reallocate.
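(For reference, a layout check of this kind is kicked off roughly as follows on a 7-mode filer; see the na_reallocate man page linked above for the exact options, which may differ by release:)

      afiler> reallocate measure /vol/dbvol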


Looking to better understand the following concepts:

  • Allocation Check - What does an allocation check of "23" mean?
  • Threshold - Same as above. 4 out of what? Is this an absolute or a relative measure?
  • hotspot - Same as above
  • How long should an actual reallocation take (is it comparable to the time taken by the "reallocate measure" command)?
  • What's the CPU overhead, and what factors does it depend on?

Thanks.

2 REPLIES

Darkstar

The number does not mean anything specific. It's simply "the higher, the worse". 23 is pretty darn high, so I'd strongly suggest you run a realloc on that volume. I've seen customers with a value of 13 or so who, after doing a realloc, observed a 2x speedup in their DB access times.

The threshold is always 4; this is the point at which the filer suggests doing a reallocate, i.e. 1 to 3 is good, 4 and up is bad.

The hotspot value tells you whether you have hotspot disks, i.e. whether your data is unevenly distributed across the data disks in the RAID groups. 0 is okay in that regard (anything >4 is, again, a reason to start doing reallocs).

I wouldn't worry about the time it takes or the CPU; neither is really a problem (realloc runs in the background, so other processes always get priority).

-Michael
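A minimal sketch of acting on that advice on a 7-mode filer, assuming reallocation scans are enabled first and using the volume from the original post (command names per the na_reallocate man page linked above; options and output vary by release):

      afiler> reallocate on
      afiler> reallocate start /vol/dbvol
      afiler> reallocate status -v /vol/dbvol

"reallocate on" enables scans filer-wide, "reallocate start" kicks off the background optimization of the volume, and "reallocate status" reports its progress.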

aborzenkov
"The number does not mean anything specific."

Not quite. NetApp tries to lay out consecutive data sequentially in clusters (for lack of a better word). The last documented cluster size I have seen was 128K (32 NetApp blocks), but it could have been increased in recent versions. So the allocation check is basically the ratio of the target optimal cluster size to the average cluster size actually achieved. Assuming a 128K target, 23 means that the average sequential cluster is only about 1.4 blocks, i.e. the data is heavily fragmented.
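To put rough numbers on that (assuming 4K WAFL blocks and the 32-block / 128K target mentioned above): a check value of 23 works out to about 32 / 23 ≈ 1.4 contiguous blocks, i.e. roughly 5-6K of sequential data per extent instead of the 128K the filer aims for.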

"The threshold is always 4"

It is the default, but it can be overridden on each "reallocate" invocation.
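For example, something like the following would raise the threshold to 8 for a single run (the -t option is per the na_reallocate man page linked above; treat the exact syntax for your release as an assumption):

      afiler> reallocate start -t 8 /vol/dbvol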

I'd love to know how hotspots are computed; right now it is a pretty meaningless number.
