2010-01-30 08:47 PM
I'm a little confused here on where it's appropriate to use this.
When running the reallocation on an aggr, I've noticed that it goes through each volume and does what I believe to be the volume version of the reallocation command. Is this true? If I set a schedule to kick off a weekly reallocation on an aggr, will it first re-lay-out the free space at the aggr level and then block-optimize each volume?
Or should I be setting up schedules on a per-volume level? There are a handful of volumes that I think could use some daily optimization. Some may even work well with read_realloc.
Where I officially got confused was with the reallocate command's help output. This "note" doesn't seem to be in any of the PDF/online docs and contradicts them.
reallocate start -A [-o] [-i interval] <aggr_name>
NOTE: -A is for aggregate (freespace) reallocation.
Do NOT use -A after growing an aggregate if you wish to
optimize the layout of existing data; instead use
reallocate start -f /vol/<volname>
for each volume in the aggregate.
Looking at the volume statuses, I see "online,raid_dp,redirect,active_redirect" for each volume. This would lead me to believe that it's doing everything.
Data ONTAP 18.104.22.168 (and 7.3.3RC1 on 2 arrays soon to be deployed, about when 7.3.3 is hopefully GA)
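For reference, here's a sketch of the two scan types being discussed, per the na_reallocate(1) syntax. The aggregate and volume names are placeholders, and the schedule-string format ("minute hour day_of_month day_of_week") should be double-checked against the man page for your release:

```
# Enable reallocation on the filer
reallocate on

# Aggregate-level scan: optimizes free-space layout only
reallocate start -A aggr0

# Volume-level scan: optimizes the layout of existing data
reallocate start /vol/dbvol

# Recurring volume scan, e.g. weekly (example schedule string)
reallocate schedule -s "0 23 * 6" /vol/dbvol

# Check progress of running scans
reallocate status -v
```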
2010-01-30 08:49 PM
Quick edit here. More observations that lead me to believe that the aggr-level reallocation hits the volumes. This started after the block-level stuff was done.
State: Redirect: 0 of 3 volume(s) processed.
2010-02-03 05:53 AM
Keep in mind that reallocation of data at the aggr level will affect the volume data as well, in the sense that it moves data blocks around.
You can see that in this example taken from the NOW site.
"Defining an aggregate reallocation scan
After reallocation is enabled on your storage system, you define a reallocation scan for the aggregate on which you want to perform a reallocation scan.
Because blocks in an aggregate Snapshot copy will not be reallocated, consider deleting aggregate Snapshot copies before performing aggregate reallocation to allow the reallocation to perform better.
Volumes in an aggregate on which aggregate reallocation has started but has not successfully completed will have the active_redirect status. Read performance of such volumes may be degraded until aggregate reallocation has successfully completed. Volumes in an aggregate that has previously undergone aggregate reallocation have the redirect status. For more information, see the na_vol(1) man page."
You can get more details from this page.
Therefore, when you do an aggr reallocate, volumes will be processed as well.
Keep in mind that a volume is a virtual slice of a physical aggr, much like a qtree is a slice of a volume.
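You can watch for the states the NOW excerpt describes from the CLI. A rough sketch (the volume name is a placeholder; see na_vol(1) for the exact meaning of the flags):

```
# "active_redirect" in the options field = aggregate reallocation in progress
# "redirect" = the aggregate has completed a reallocation at some point
vol status dbvol

# Shows per-scan progress, including the aggregate redirect phase
reallocate status -v
```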
2010-02-03 07:15 AM
It seems that in every doc I read on NOW, the information is a bit different.
From your link:
You can define only one reallocation scan per file, LUN, volume, or aggregate. You can, however, define reallocation scans for both the aggregate (to optimize free space layout) and the volumes in the same aggregate (to optimize data layout).
So, it looks like I need to do both here.
What about the read_realloc volume option? I understand it to be more of a "how the app uses the data" kind of optimization, with the penalty of a little CPU and I/O overhead on reads (rather than writes). Have you had much success measuring a gain from this option? Would running a volume-level reallocate undo changes made by read_realloc if both were executed on the same volume?
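Unlike the scans above, read_realloc is a per-volume option that reallocates blocks opportunistically after long sequential reads. A sketch of toggling it (volume name is a placeholder; confirm the space_optimized value exists on your release via na_vol(1)):

```
# Reallocate blocks after long sequential reads
vol options dbvol read_realloc on

# Variant intended to avoid growing snapshot space usage
vol options dbvol read_realloc space_optimized

# Disable
vol options dbvol read_realloc off
```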
Also, one last question:
We use flex vols for a few Oracle databases. These flex vols are tied out a few levels from the main production DB. Should we be running the reallocate commands on flex vols? Sometimes we can go 6 months with plenty of data changes between refreshes of the snapshotted DBs.
2010-02-03 07:43 AM
I am trying to follow this thread, as reallocation seems to be quite an interesting subject.
It seems this is like a skeleton in the cupboard - experience from the field proves it is needed, yet it isn't very 'politically correct' to openly admit that WAFL can actually suffer from fragmentation.
I always took a stance that when dealing with technology it's better to be honest & know the issues up front to mitigate possible negative impact.
A solid TR document gathering together all relevant info around fragmentation & reallocation is a must in my opinion (someone at least seems to share this view: http://communities.netapp.com/message/20969#20969)
2010-02-03 08:00 AM
Well, I'm going to kick it off on a few LUNs. I'm not sure why we need to do it on a LUN if you just include the volume. Does it skip LUNs in a volume? This kind of "why" question never seems to get answered.
I'm going to take some of this data to my SE today. I'll report back.
2010-02-03 08:06 AM
After 10 years of working on NetApp gear, I have to agree with this statement.
>>It seems every doc I read on NOW, the information is a bit different.
That is a consistent problem I have.
I find the best information is ascertained by testing and personal experience.
My work with flex vols is very limited (two years) and I am currently testing reallocate on these types of volumes.
In my previous three years I performed WAFL reallocate on traditional volumes, which would be similar to current aggrs, and I found some (around 20%) improvement in backup and read performance by doing this.
Keep in mind, WAFL reallocate is a defrag type of operation, and when you run this type of command against a volume, you risk data loss - unlikely, but possible.
2010-02-03 08:07 AM
I'm not sure why we need to do it on a lun if you just include the volume. Does it skip luns in a volume?
Nope, I don't think LUNs get skipped - you can either reallocate all blocks within the volume (including those in LUNs), or just the blocks within a given LUN.
Also - physical reallocate (-p) is worth considering, as it shouldn't cause your existing snapshots to grow.
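A sketch of both points - targeting a single LUN by path, and using -p for physical reallocation. Paths are placeholders; check na_reallocate(1) for the exact semantics of -p on your release:

```
# Scan just one LUN inside a volume
reallocate start /vol/dbvol/lun0

# Physical reallocation of the whole volume: moves blocks without
# rewriting those held in snapshots, so snapshot space shouldn't balloon
reallocate start -p /vol/dbvol
```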
2010-02-03 08:17 AM
So do -f and -p perform the same function with regard to a volume/LUN/file?
The problem with testing here is time. I'd like to understand what I'm testing before I test against the real stuff. We are maybe 2 weeks out from getting our DBs onto the new storage. The nice thing about Oracle 11.x is the replay feature, so I can get consistent DB runs on the same data over and over and tune the filer.