ONTAP Discussions

2 controllers, 4 shelves each, 24 SAS drives per shelf: how many aggregates should I have?

netappmagic
15,213 Views

Should I have one big aggregate on each controller containing all 4x[23-24] drives with multiple raid groups, or multiple aggregates?

It is a 2x FAS3240 HA pair, 64-bit aggregates, ONTAP 8.2.x, used for NAS shares. Three drives, one from each of three shelves, have already been allocated to aggr0.

The drives are 600 GB raw each.

Thanks for your advice.

1 ACCEPTED SOLUTION

resqme914
15,160 Views

Assuming 7-mode...

I would have one big aggregate on each controller, which means I would expand aggr0 to become 4 raid groups of rg size 23, raid-dp.  Reasons:

1.  I hate wasting 3 drives for just vol0, especially when the drives are large-capacity (e.g. 600GB or 900GB SAS).

2.  We use rg size 23 because we tend to add storage by shelves, and it's a lot simpler to grow the aggregates by entire shelves.  Plus the disk drives keep getting bigger and bigger, so it's just really easier to deal with entire shelves at a time.

3.  We like larger aggregates and fewer of them.  Lots of spindles for performance.  We like to have one SAS aggregate and one SATA aggregate.  You don't have to deal with an aggregate running out of space and having to migrate a volume to another aggregate, etc.

4.  Fewer wasted disk drives.

We had NetApp consultants here for three months and this was one of their recommendations.  It took me a lot of work merging aggregates (one filer pair went from about 20 aggregates or so, down to 4) and we're really happy we did this.
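
For reference, a rough sketch of that expansion in 7-mode commands (disk counts assume 4 shelves of 24 per controller and an existing 3-disk aggr0, so they are only illustrative; check your own spare count first):

> aggr options aggr0 raidsize 23
> aggr add aggr0 89               (89 new disks plus the existing 3 gives 4 raid groups of 23, leaving 4 spares)
> aggr status -r aggr0            (verify the new raid group layout)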


41 REPLIES

netappmagic
7,163 Views

What about the best practice that raid group size should be kept between 12 and 20? If we go with 23, wouldn't that exceed the maximum?

resqme914
7,163 Views

May I ask what TR gave you this "best practice"?  Especially after 64-bit aggregates became available, some of these guidelines have become obsolete.

In general, the choice of rg size is a compromise among cost, capacity, reconstruction time, and performance considerations.  We tend to add extra shelves of disks every year, and we found it very convenient to use 23 (except for SATA where max is 20) as the RG size.  I also personally hate wasting disks, so I lean towards the larger RG size.  We've had NetApp consultants onsite for three months, concentrating on "best practices" and performance... they did not complain about our rg size choice, although we did have some lively discussion on SATA RG size.

netappmagic
7,163 Views

http://www.netapp.com/us/media/tr-3437.pdf

Pages 11-12 of this doc more or less say the best range would be 12-20; see also https://communities.netapp.com/message/102130

But I feel the same way, that using the whole shelf for a raid group makes sense. I have also read other threads that suggested using the whole shelf.

resqme914
7,163 Views

The TR does not specifically state that raid group size 12-20 is best practice.  It just states that reconstruction times for rg sizes of 12-20 increase by as little as 6%.

My recommendation, based on experience, is still 23.  Your only other choice realistically is 15, which will give you 3 raid groups and 3 spares (total of 39 data disks).  I'd pick 23 over 15.
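
If you were creating a fresh data aggregate rather than expanding aggr0, the rg size choice just goes into the create command. A hedged sketch for a 48-disk controller (the aggregate name and spare counts here are only examples):

> aggr create aggr1 -t raid_dp -r 23 46     (2 raid groups of 23, leaving 2 spares)
> aggr create aggr1 -t raid_dp -r 15 45     (3 raid groups of 15, leaving 3 spares)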

netappmagic
7,163 Views

Hi resqme914,

I have 2 filers here, both with 2 shelves of 24 disks each. Based on our conversation, I feel it makes sense to create one large aggregate "aggr0" and keep the root volume vol0 in the same aggr. Based on this idea, on each filer I have already created aggr0 with 2 raid-dp raid groups of 23 disks each, plus 2 spares.

Now people here don't like this idea at all and insist on a separate root aggr, citing "best practice" and "performance is better", and to them those concerns outweigh the capacity. I may have to separate them. I don't have a strong argument to defend the idea because I lack ONTAP documents to support it.

Here is my question to you: based on what I have already done, what can I do to separate them? I am thinking I could use these 2 spares, but then what type of ONTAP raid group can have only 2 disks? If I can create a raid group from the 2 spare disks, I can then create an aggr, say aggr1, use the steps you provided above to move vol0 to aggr1, destroy the current aggr0, create a new aggr, say aggr2, with 3 disks in raid-dp, and then move vol0 to aggr2.

Does that sound right? And what raid type does ONTAP support with only 2 disks?  I have two other filers with 4 spares, so if this idea works, I could use a similar approach there and separate them more easily.

Sorry to keep going on this thread, but I really appreciate all your messages.

DHESSARROW
7,163 Views

The reason for separating the root aggr is not performance; it is so that if one data aggr goes offline, the impact on the other aggrs is reduced. However, if you are going to have one large data aggr anyway, there is really no advantage to a dedicated root aggr.

Six extra spindles equate not only to GB gained but also to IOPS gained.

If you lose the debate, converting back is easy IF YOU DON'T HAVE ANY DATA IN aggr0.

You can make a raid4 aggr_root, then run through the "root vol move" steps above.

If you need more spare disks, you could just change the raid type of aggr0 to raid4:

aggr options aggr0 raidtype raid4
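
For the record, the "root vol move" sequence referenced here is roughly the following in 7-mode (the volume names and size are placeholders, and ndmpd must be enabled for the copy):

> aggr create aggr_root -t raid4 2        (2-disk raid4 aggr from the spares)
> vol create vol0new aggr_root 160g       (size it like the existing vol0)
> ndmpcopy /vol/vol0 /vol/vol0new         (copy the root volume contents)
> vol options vol0new root                (mark the new volume as root)
> reboot                                  (the filer comes back up on vol0new)

After the reboot, re-create the CIFS shares/NFS exports that pointed at the old vol0 and clean it up.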

netappmagic
7,074 Views

Hi David and resqme914,

I have two filers, filer A and filer B. We have just started using them, so the data is not significant. I need to restore aggr0 to exactly what it was: a 3-disk raid-dp root aggr.

Filer A now has only one large aggr0: 2 raid-dp raid groups of 23 disks each, plus a total of 2 spares. This filer is for DR and has only one DR volume so far, which can be removed and the SnapMirror relationship re-created later, I think.

I will do the following:

- create raid4 aggr_root on 2 spares.

- run through "root vol move" steps above, and reboot

- destroy aggr0, which will release all of its disks.

- use 3 disks to create a raid-dp aggr0, then run the "root vol move" steps above again to move the root vol from aggr_root to this aggr0, and reboot

- destroy aggr_root

- create a separate aggr1 with 2 raid-dp raid groups of 21 disks each

- re-create the SnapMirror relationship for that DR volume from the primary.

Filer B has 2 aggrs now, aggr0 and aggr1, plus 4 spares. aggr1 has data, but aggr0 has no real data yet other than the root volume. aggr0 has 80 disks: 4 raid-dp raid groups of 20 disks. Since I am not going to touch aggr1, it should remain intact throughout the whole process, right?

The following would be steps:

- create a raid-dp aggr_root using 3 of the 4 spares

- run through the "root vol move" steps above and reboot

- destroy the aggr0

- create a separate aggr "aggr2" with a total of 76 disks, 4 raid-dp raid groups of 19 disks, leaving 5 spares

Please verify these steps and correct me if anything is wrong. Thank you!
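
For what it's worth, the bigger pieces as 7-mode commands might look like this (the SnapMirror source/destination names are placeholders, not the real hostnames or volumes):

> aggr create aggr1 -t raid_dp -r 21 42     (filer A: 2 raid groups of 21 disks)
> aggr create aggr2 -t raid_dp -r 19 76     (filer B: 4 raid groups of 19 disks)
filerA>  snapmirror initialize -S primary_filer:srcvol filerA:dstvol   (run on the DR destination to re-create the mirror)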

resqme914
7,074 Views

I'm at Disneyworld so I have to be brief...  For filer A, just change the aggr0 raid type to raid4 and that should free up some disks so you can create a raid-dp root aggregate.  A bit of a shortcut from what you had planned.  Good luck.

netappmagic
7,074 Views

Thank you, and enjoy your vacation!

netappmagic
7,162 Views

Hi David,

Regarding the debate, I told them that we could not only save space but also increase performance due to more spindles. However, in their opinion the best practice is always to separate them, and they don't care about the space since it is "cheap", in their words. I guess I am losing the debate, because they need a written NetApp document clearly stating that a large aggr is an acceptable option or performs better, and that is what we lack. They are just not convinced without docs.

netappmagic
7,072 Views

Hi David and resqme914,

Another issue: I am getting the following error when trying to change aggr0 from raid-dp to raid4:

> aggr options aggr0 raidtype raid4

aggr options: Can't revert a raid_dp aggregate to raid4 as it results
in 22 disks in the raid group, which exceeds the maximum
raid group size of 14 disks for a raid4 aggregate.

I have to free up some disks first in order to move the root volume to a new raid4 aggr, because I just found that I have only one spare left; the other spare failed, and I cannot build a raid4 aggr from a single disk.

Currently, aggr0 has 2 raid groups with 23 disks each.

Any alternatives?

resqme914
6,728 Views

Hmmm... ok, sorry I missed that.  Are filers A and B an HA pair?  Or are they completely separate, standalone filers?  If they're an HA pair, you can move a spare disk from filer B to filer A so you can create a raid-dp root aggr and not have to do it the long way around.

netappmagic
6,729 Views

Hi resqme914, Good to see you back.

Filers A and B each have their own HA partner. The problem is filer A, since it has only one spare.

Now, let's say A's HA partner is AP, and AP has 2 spares. Are you saying I can move 2 spares from AP to A and build the raid-dp root aggr from them? How do I move those 2 to filer A? Would that affect performance or be odd, since the 3 disks in the root aggr would be disks that originally belonged to two different filers?

resqme914
6,729 Views

You said you have two spares on filer A (and I'm assuming you have two spares on filer AP).  Borrow one spare from filer AP and move it to filer A so you can create your new raid-dp root aggr on filer A.  Once that's done, you said you'd delete aggr0 on filer A and that will free up a lot of disks.  So then you can reassign a disk from filer A to filer AP to give back the disk you borrowed (albeit it won't be the same exact disk).

To move a disk from filer AP to filer A:

filerAP>  disk assign -s unowned <disk-name>   (e.g. 0a.00.14)

then on filer A...

filerA>  disk assign <disk-name>

You can use these same commands in reverse to give a disk back to filer AP.
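
One hedged note, since option behavior varies a little by release: removing ownership from a disk that is already assigned usually needs the -f flag, and disk show lets you confirm the change on both heads:

filerAP>  disk assign 0a.00.14 -s unowned -f    (force is typically required because the disk already has an owner)
filerAP>  disk show -v                          (confirm the disk now shows as unowned)
filerA>   disk assign 0a.00.14                  (take ownership on filer A)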

netappmagic
6,729 Views

Yes, both filer A and AP should have 2 spares; however, filer A now has only one, since the other one failed. Your steps should still be the same, except that instead of borrowing 1 spare from AP, I'd have to borrow 2.

My question is whether these borrowed spares need to go back to AP, from a performance or future management perspective. Or are they just logically owned by AP, so it doesn't matter which filer owns or uses them? If so, can I return any disks once I free up a lot of them?

A lot of questions here, Thanks!

resqme914
6,729 Views

Two things... you should get the failed disk replaced, and I think ONTAP will be very unhappy if you borrow two disks and leave filer AP with no spare disks.  If I recall correctly, it might even panic after 24 hours (not sure though).

Since you have modern hardware and filer A and AP are an HA pair, the disks are software-owned.  You can move the disks from one filer to another using the commands I gave you.  You should return the borrowed spare disks right away after you free up the disks.

Also, I would strongly recommend that you take some NetApp training.  Look into the NCDA bootcamp.  You will learn a lot.

netappmagic
6,729 Views

Hi resqme914,

I have gotten 2 spares back on filer A, so I just wanted to confirm the steps with you one more time. I'd prefer to keep disk ownership as is, so I'm not going to reassign disks from one filer to the other.

- create raid4 aggr_root on 2 spares.

- run through "root vol move" steps above, and reboot

- destroy aggr0, which will release all of its disks.

- run aggr options aggr_root raidtype raid_dp

- rename aggr_root to aggr0
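
Spelled out as commands, that sequence might look roughly like this (the names are the ones used above; note the old vol0 has to be destroyed before aggr0 can be):

> aggr create aggr_root -t raid4 2
  (run the "root vol move" steps into aggr_root, then reboot)
> vol offline vol0                             (the old root volume must be offlined and destroyed first)
> vol destroy vol0
> aggr offline aggr0
> aggr destroy aggr0
> aggr options aggr_root raidtype raid_dp      (a spare is pulled in and the dparity disk reconstructs)
> aggr rename aggr_root aggr0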

resqme914
6,729 Views

Looks fine at a high level.

When you create the raid4 aggregate on filer A, the filer will complain that it has no spares left, so get the work done quickly.

When you change the new aggr to raid type raid_dp, the console will say a spare drive is missing and it will automatically add a spare disk and reconstruct.

Just wanted you to be aware of those.

netappmagic
6,729 Views

Thank you for your patience again!

A question on your comment below: what is the painful part in the following steps?

Pain in the neck steps…

  1. Recreate new CIFS shares (C$, ETC$, HOME) using same characteristics as old shares
  2. Delete old CIFS shares
  3. Delete old root volume and aggregate once satisfied with results.
  4. Rename aggr1 to aggr0 (if desired).

Are these the characteristics you are talking about?

> cifs shares

Name         Mount Point                       Description

----         -----------                       -----------

ETC$         /etc                              Remote Administration

                        BUILTIN\Administrators / Full Control

HOME         /vol/vol0/home                    Default Share

                        everyone / Full Control

C$           /                                 Remote Administration

                        BUILTIN\Administrators / Full Control

resqme914
6,414 Views

Since your filers aren't in production yet, there probably aren't a lot of shares and exports, so it won't be much of a pain for you.  Yes, I am talking about re-creating CIFS shares and NFS exports.  Print them out and then re-create them after moving the root volume.
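
As a hedged example, re-creating one of the shares from your listing would look something like this (adjust the path if the new root volume ends up with a different name):

filerA>  cifs shares -add HOME /vol/vol0/home -comment "Default Share"
filerA>  cifs access HOME everyone "Full Control"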

netappmagic
6,414 Views

I came back just to thank you one more time, resqme914! All done!
