ONTAP Discussions

Best practice on creating aggr?

peter1965

Hello all, I was curious: what is the best practice for creating aggregates? I know this seems like a straightforward and perhaps vague question.

But I know aggr0 shouldn't be touched. Do most folks create one huge aggr after the fact and carve out the volumes? Or do you, say, create an aggr for NFS and an aggr for iSCSI?

5 REPLIES

chrisatnav

I'd bet that most people don't deviate too far from the standard rules: aggregates must use disks of the same type, speed, and size, until you hit the aggregate size limit.

But you could have aggregates that are isolated to single shelves, multiple aggregates with different aggregate snapshot schedules, a mix of mirrored and non-mirrored aggregates depending on your data protection policy, or separate aggregates just because you need to prevent two datasets from commingling on the same disks.
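As a rough 7-Mode sketch (the aggregate name, disk type, and counts are just placeholders for your environment), that standard homogeneous layout plus a per-aggregate snapshot schedule tweak might look like:

    aggr create aggr1 -t raid_dp -r 16 -T SAS 32@600
    snap sched -A aggr1 0 2 6
    snap reserve -A aggr1 5

The first command builds a RAID-DP aggregate from 32 matching 600 GB SAS disks with a raid group size of 16; the other two set the aggregate-level snapshot schedule and reserve, which is one of the knobs that can legitimately differ from one aggregate to the next.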

shaunjurr

Hi,

How you set up your aggregates is largely a result of knowing your data.  What sort of access patterns (user, application, database, VMware) will there be?  How will the data be backed up?  What sort of growth is expected?

You will get the most from the system with a good balance of all of these across all of your aggregates.  Storage challenges are really starting to gravitate towards I/O rather than GB.  Disk sizes are increasing, but the I/O each disk can produce is largely staying the same.  This might work pretty well for unstructured user data (except for direct backups from primary storage), but it can be problematic for I/O-intensive applications. Larger disks also make it harder to put together enough spindles for sufficient I/O before the maximum aggregate size is reached. 64-bit aggregates are only part of the answer, as they also require more system resources to manage. Using cache memory to short-cut disk access for the most frequently used data is basically the reasoning behind the development of PAM modules.
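If you do go the 64-bit route to get past the 16 TB limit of 32-bit aggregates, it is just a flag at creation time. A hypothetical 7-Mode example (the name and disk count are placeholders):

    aggr create aggr_sas600 -B 64 -t raid_dp -r 16 -T SAS 46@600

The -B 64 option creates the aggregate in the 64-bit format so it can grow beyond the 32-bit size limit, at the cost of somewhat higher memory overhead on the controller.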

Grouping data with different access patterns on larger aggregates and using FlexShare prioritization is basically how it is supposed to work best.
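FlexShare is driven by the priority command set in 7-Mode. A minimal sketch, assuming a volume named vol_sql that should win out over a volume named vol_home (both names are made up):

    priority on
    priority set volume vol_sql level=High
    priority set volume vol_home level=Low
    priority show volume

These relative priorities only come into play when the controller is actually under contention; with idle disks and CPU, every volume gets what it asks for regardless of its priority level.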

There are probably no perfect setups unless you have a crystal ball or an unlimited budget.  In the real world, the ability to react when unexpected performance problems show up is essential.  Monitor performance with your own tools or with NetApp's tools.  Knowing how raid group sizes, disk types, SnapMirror, deduplication, reallocation, backups, and rogue applications can affect performance is useful.  A healthy background in Ethernet and TCP/IP, as well as Fibre Channel, will come in handy as well.
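For quick, built-in monitoring on a 7-Mode system, the usual starting points from the console are something like:

    sysstat -x 1
    priv set advanced
    statit -b
    statit -e
    priv set admin

sysstat -x prints a line per second covering CPU, IOPS, throughput, and disk utilization; statit -b starts a sampling interval and statit -e (run after letting a representative workload pass) dumps per-disk and per-raid-group statistics, which is handy for spotting hot disks or an unbalanced aggregate. Note that statit lives under advanced privileges.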

Best practices can be very academic compared to the real-world constraints of time and money.  Making it all work is, more often than not, a matter of experience and personal motivation.

peter1965

Thanks for the reply. Keep in mind I am a SAN noob and a NetApp noob as well.

We have two 6280 controllers with 24 trays, each filled with 24 600 GB drives, and several PAM modules (1 TB).

I suppose if someone wanted 50 TB of space for SQL databases, do I just create an aggr with 50-ish TB of space with RAID-DP and let them have at it? Or do I create a much larger one for SQL-only use, and then when someone needs CIFS, create an aggr called cifs, for example, and create the volumes in that?

Pardon the ignorance, but I am just trying to learn and make certain I do things right. Anyhow, I am sure half the people on here are doing the same.

shaunjurr

Hi,

As I previously wrote, the best case is to mix different access-pattern types and prioritize according to SLAs and customer expectations.

As you have already burned up a few million on the 6280s, a little more money for some basic training might also be an idea.

Putting a 50 TB SQL database anywhere would scare me, but I would highly suggest that you read all of the NetApp best practice papers that you can on SQL before doing anything that large.  A traditional backup of a database that size would be prohibitive.  Setups using SMSQL and SnapMirror/SnapVault could get you a lot farther.  You should understand a good deal about mount-point disks on Windows and allocating a number of LUNs for such a job.  I often split log and database LUNs between the two heads to get maximum I/O from my investment.
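As a hypothetical illustration of that log/database split (the volume, aggregate, and LUN names are invented and the sizes are placeholders; the real layout should come from the SQL Server best practice papers and your DBAs):

    On controller A:
        vol create vol_sqldb aggr_sas_a 8t
        lun create -s 6t -t windows_2008 /vol/vol_sqldb/sqldb.lun

    On controller B:
        vol create vol_sqllog aggr_sas_b 1t
        lun create -s 500g -t windows_2008 /vol/vol_sqllog/sqllog.lun

Putting the database LUNs on one head and the log LUNs on the other means both controllers contribute spindles and CPU to the workload, and each LUN can then be presented to the SQL host as an NTFS mount-point disk instead of burning a drive letter.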

You have the "ferrari" from NetApp, but you can easily make it as slow as a tractor if you don't know the system.

peter1965

Thanks. Yes, training is planned; I am just asking. I should have been clearer: 50 TB is for SQL databases (plural), not a single 50 TB database.

Yes, we have the Ferrari and plan to use it for CIFS (lots of SnapMirroring), then offload that to a V6210. We also plan to use it for virtualization and, as mentioned, some SQL, and who knows what later down the line.
