2011-09-20 07:42 AM
Hi all - pardon the noob question.
so if I wanted to add another SAS disk shelf to our 6280, what else would I need to do? From what I understand, since we have premier support, a NetApp tech would come in and add the disk shelf, but after that what would I need to perform to actually have the ability to write data to it?
My initial thoughts were:
Assign the disks to the controllers, half to controller A and the rest to controller B, so it would be balanced? What else do I need to do, or in a nutshell is that it? Of course I wish to do this non-disruptively.
p.s. all the disk shelves are the same (24 x 600GB SAS drives per shelf)
Hints, tips, and worthwhile advice appreciated.
thank you kindly in advance.
2011-09-21 06:28 AM
You will need to either create a new aggr with the disks, or add them to the existing ones.
aggr add aggr0 12 <-- add twelve disks to aggr0
aggr add aggr0 -d 2b.00.0 2b.00.1 2b.00.2 ... <-- add specific disks to aggr0. Some places work very hard to maintain exacting disk layouts and would use this. However, the layout gets destroyed after the first disk failure, because ONTAP rebuilds onto an available spare and does not move the data back to the original drive bay once the failed drive is replaced.
aggr create aggr10 12 <-- create a new aggr called aggr10, with the default raidgroup size.
aggr create aggr10 -d 2b.00.0 2b.00.1 2b.00.2 ... <-- same as above, but creating an aggr with specific disks.
There are many thoughts on raidgroup size, and 16 is generally considered optimal, but unless you expect to add more disks soon, it really wouldn't matter. And you can change it at any time with
aggr options 'aggr_name' raidsize #
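As a quick aside on the arithmetic behind raidsize: RAID-DP reserves two disks per raid group for parity, so the usable data-disk count follows directly from the raid group size. A back-of-the-envelope sketch in Python (my own illustration, not a NetApp tool):

```python
# Sketch: data vs. parity disks in a RAID-DP raid group of a given size.
# My own illustration of the arithmetic, not a NetApp utility.

RAID_DP_PARITY = 2  # RAID-DP uses one parity disk plus one double-parity disk

def data_disks(raidsize):
    """Data disks in a full RAID-DP raid group of `raidsize` total disks."""
    return raidsize - RAID_DP_PARITY

print(data_disks(16))  # a raidsize-16 group gives 14 data disks
```

So the commonly cited raidsize of 16 yields 14 data spindles per full raid group.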
2011-09-21 06:39 AM
I should have mentioned that this would be added to an existing aggr. Let's use aggr2 for example, so the syntax would simply be:
aggr add aggr2 12 <-- add twelve disks to aggr2
aggr add aggr2 -d 2b.00.0 2b.00.1 2b.00.2
Is it in essence that simplistic, or am I making it harder than it is? ☺
Thanks for replying.
2011-09-21 07:41 AM
It really is that simple... but can I ask you to provide a "sysconfig -r"? You don't want to end up with a raid group smaller than 9 nominally, or 6 at a bare minimum, for performance reasons.
if you want to see what it is going to do, you can preview with:
aggr add aggr2 -n 12 and it will tell you what 12 disks it is going to add.
If you want more control over layout:
If you want a whole new raidgroup, using those twelve disks:
aggr add aggr2 -n -g new 12 <--- this allocates all 12 disks to a new raid group under aggr2, and previews what disks will be assigned to that raidgroup.
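To make the preview behavior above concrete, here is a toy model of disk placement (my own approximation for illustration; the actual placement logic is ONTAP's, and the "-n" preview is the authoritative answer): added disks top up existing raid groups to raidsize first, and any leftovers start a new raid group.

```python
# Toy model of how disks added to an aggr get placed into raid groups:
# fill existing raid groups up to raidsize, then spill into new groups.
# A hypothetical approximation for illustration, not ONTAP's code.

def place_disks(current_rgs, raidsize, new_disks):
    """Return the list of raid group sizes after adding `new_disks` disks."""
    rgs = list(current_rgs)
    for i in range(len(rgs)):                 # top up existing raid groups
        take = min(raidsize - rgs[i], new_disks)
        rgs[i] += take
        new_disks -= take
    while new_disks > 0:                      # leftovers start new raid groups
        take = min(raidsize, new_disks)
        rgs.append(take)
        new_disks -= take
    return rgs

# Three full raidsize-16 groups plus 12 new disks -> a new 12-disk group:
print(place_disks([16, 16, 16], 16, 12))  # [16, 16, 16, 12]
```

With raidsize raised to 24, the same model shows three 20-disk groups absorbing 12 disks as 4 each, which matches the leveling discussed later in the thread.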
2011-09-21 07:51 AM
Attached you will see the results for sysconfig –r
We are still building out our infrastructure. We currently have our virtual environment (NFS) on aggr1, along with a few physical servers with LUNs. aggr2 is for a big SQL project.
2011-09-21 07:56 AM
I also wanted to clarify: we haven't purchased the additional disk shelf yet (in process). So right now there is 25TB allocated to aggr2, but that will grow, which is why the added shelf or shelves are required. aggr1 is also 25TB.
2011-09-21 08:22 AM
Ok, you have big aggrs with big raid groups, so you will want to make sure that the disks are added to the existing raid groups; you don't want three raid groups with 20 disks and then one with 10!
I would check "aggr options aggr2" and look at "raidsize"; you are probably at 20, maybe 24. If you are at 24, then the new 12 disks should assign 4 disks to each existing raidgroup and level you out. HOWEVER, you will want to run a reallocation ("reallocate start") on the volume or volumes in that aggregate to even out the write layout. Otherwise you could end up with disk contention, with too many ops running on specific drives.
If your raidsize is 20, you are going to want to set it to 24 when you get the new shelf. Typically you would want to create a new raid group, but in this case you would add 12 disks, give two over to parity, and run with 10; that means this raid group will be overworked compared to the others.
Moving forward you are going to have big questions to ask. The max raidsize on a 3160 running 8.0.1 is 28, which means if you expand these again you only have one more bump before you reach your limit. Also, as you move forward, new raid groups are always going to be smaller; do you use mixed-size rg's, or do you create a new, smaller aggr to work with? I would talk with your NetApp rep or partner to discuss the best options for the future. Personally I would move toward another aggr, move some things around, and plan to make that one larger. Of course your needs could vary greatly from mine, and that may not work for you.
Here is the current 8.0 documentation about raid group sizing if you aren't familiar with trade-off/benefit statements.
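To quantify the imbalance described above when raidsize stays at 20, a small sketch using the RAID-DP two-parity-disk rule (my own arithmetic for illustration, not NetApp output):

```python
# If raidsize stays at 20, the 12 new disks end up as their own raid
# group: 10 data + 2 parity, versus 18 data disks in each existing
# group. My own arithmetic for illustration.

RAID_DP_PARITY = 2

def data_disks(rg_size):
    """Data disks in a RAID-DP raid group of rg_size total disks."""
    return rg_size - RAID_DP_PARITY

existing = [data_disks(20)] * 3   # the three existing raid groups
new_rg = data_disks(12)           # the lone new raid group
print(existing, new_rg)           # [18, 18, 18] 10
```

That 10-data-disk group carrying the same proportional write load as 18-disk groups is the "overworked" case the post warns about.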
2011-09-21 08:27 AM
Here is the result of the aggr2 options:
aggr options aggr2
nosnap=off, raidtype=raid_dp, raidsize=20, ignore_inconsistent=off,
snapmirrored=off, resyncsnaptime=60, fs_size_fixed=off,
snapshot_autodelete=on, lost_write_protect=on, ha_policy=cfo
so am I in trouble ? ☺
2011-09-21 08:37 AM
Not at all, when the shelf comes in, and you are ready to extend the aggr, run "aggr options aggr2 raidsize 24"
This will set the new raid group size to 24 and allow you to add all 12 disks to the existing aggr. Your pain point will come the next time you need to expand. You are all set for this one, and even for another 12 disks to that aggr.
So per the 8.0.1 limits:
you can have 150 raid groups per aggr, so you are good there.
you can have 28 FC/SAS disks per raid group, so you are good there, even if you add another 12 disks to the existing raid groups.
However, at that point you would have raid groups with 28 disks, and you could not achieve raid group parity with a shelf expansion. If you stop at 24 disks per raid group, you can buy a new shelf, add all 24 disks to a new raid group under aggr2, and ensure raid group parity, meaning you do not have three raid groups striping over 28 disks and one striping over 24. This is not extremely bad; however, if you buy a shelf, create a new raid group of 12 disks, and put it in an aggr with 28-disk raid groups, you have a performance penalty because you can't stripe evenly. I am not a NetApp engineer and as such cannot tell you how much of a penalty you would incur, so you need someone more able than myself to answer that moving forward.
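The limit checks above are easy to sanity-check mechanically. A small sketch using the figures quoted in this post (150 raid groups per aggr, 28 FC/SAS disks per raid group; the figures are the post's, the check is my own illustration):

```python
# Check a proposed aggr layout against the 8.0.1 limits quoted above:
# at most 150 raid groups per aggr, at most 28 FC/SAS disks per raid
# group. Figures come from the post; this checker is my own sketch.

MAX_RGS_PER_AGGR = 150
MAX_DISKS_PER_RG = 28

def within_limits(rg_sizes):
    """True if a list of raid group sizes fits within both limits."""
    return (len(rg_sizes) <= MAX_RGS_PER_AGGR
            and all(n <= MAX_DISKS_PER_RG for n in rg_sizes))

print(within_limits([24, 24, 24]))      # True: the plan discussed above
print(within_limits([28, 28, 28, 29]))  # False: one group over the 28 cap
```

Stopping at 24 disks per group, as suggested, keeps every future 24-disk shelf addable as one whole matching raid group while staying under the cap.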