ONTAP Discussions
Hello everyone,
I want to set up a checklist of steps to ensure that space usage and performance are optimized on SnapMirrored volumes. Based on various threads on here (https://communities.netapp.com/thread/6530) and other blog posts, can anyone confirm that this is the right process?
For new volumes:
For existing volumes:
My main worry is whether to run deduplication before or after reallocation (my understanding is that dedup just "messes up" the on-disk layout). Both jobs are scheduled to run every day.
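For reference, here is roughly what I have in mind (7-Mode commands; "myvol" and the schedule are just placeholders):

    sis on /vol/myvol                     # enable deduplication on the volume
    sis config -s sun-sat@23 /vol/myvol   # run dedup every night at 23:00
    reallocate on                         # enable reallocation scans on the controller
    reallocate start -p /vol/myvol        # recurring physical reallocation scan (runs every 24h by default)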
Thank you in advance for your help!
Hi,
I am not quite sure how beneficial it is to run reallocate on newly created volumes - unless the underlying aggregate has just been expanded with a few extra disks (and even then I'm not sure it makes sense).
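If in doubt, you could always measure the layout before deciding - something along these lines (the path is just an example, and this is from memory, so do check the reallocate man page):

    reallocate measure -o /vol/myvol      # one-off layout measurement; the optimization value
                                          # goes to the system log (lower is better; the default
                                          # threshold for triggering reallocation is 4)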
With regard to dedupe & reallocate - support for reallocating deduped volumes was freshly introduced in ONTAP 8.1, and again, I'm not sure whether it does any good (https://communities.netapp.com/message/85019#85019)
Regards,
Radek
Hello Radek,
Yes, it would be that kind of scenario: disks newly added to an aggregate. I had seen your answer in that thread, and I was wondering whether the feature was too new to be relied on yet.
Regards,
It's not just that it is new - I simply don't understand how the reallocate algorithm will optimise the block layout in a deduped volume, where the same block can potentially belong to multiple files.
I was hoping someone from the NetApp engineering team would explain it to us.
The only useful information I can think of lies in the Field Portal, which only NetApp employees and resellers have access to. If I remember correctly, a user had pointed out a link.
I haven't found anything specific (yet) on Field Portal re dedupe & reallocate.
As per the other thread you mentioned, reallocate is rather poorly documented!
Actually, there is just one paragraph in TR-3929 about this:
"Starting in Data ONTAP 8.1 deduplicated data can be reallocated using physical reallocation or read_realloc space_optimized. Although data may be shared by multiple files when deduplicated, reallocate uses an intelligent algorithm to only reallocate the data the first time a shared block is encountered."
So if I read this correctly: if a block is shared between multiple files, the "first" file (however that is determined) wins & the block gets moved to optimise that one file (and possibly de-optimise the other files??)
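For reference, the two methods that paragraph mentions would be invoked roughly like this (volume name is just an example):

    reallocate start -f -p /vol/myvol                  # physical reallocation; preserves dedup block sharing
    vol options myvol read_realloc space_optimized     # reallocate blocks as they are read, without
                                                       # duplicating blocks held in snapshots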
I don’t think it de-optimises the other files, seeing as the data would already have been deduped and only one reference block is retained (please don’t cringe while reading - I’m finding this hard to explain!), because dedup frees up the duplicate blocks. In that case, the physical reallocation may not even do much/anything???
I’m going to try drawing it:
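Something like this, perhaps (a rough sketch on my part, assuming one block shared between two files after dedup; the block names are made up):

    File A: [A1][A2][ S]          S = the single shared (deduped) block
    File B: [B1][ S][B3]

    With physical reallocation, S gets moved only once - when it is first
    encountered while laying out File A - and File B just keeps pointing
    at the new location, so File B's own sequential layout may or may not
    improve.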