Accepted Solution

SMSQL volume & LUN sizing question?

Hi Guys,

I've read the admin guide and lots of forums posts about this, but I am still a little confused.

I am using SnapManager for SQL 5 with a SQL Server 2005 server. A few weeks ago I suffered a disaster because we (not me!) had configured both the databases & logs on the same LUN. Somehow the disk got corrupted and killed both our logs and databases (and guess what? - the backup had failed the night before due to incorrect volume sizing!).

So now we are about to recreate all the LUNs & volumes and configure SMSQL correctly, with daily backups and hopefully transaction log backups every 30 minutes.

My problem is that the Exchange SM guide was great at explaining the different supported configurations and the formulas for calculating LUN & volume sizes - but the SQL guide is not. Therefore I need some help here.

I plan on configuring it following the "multiple databases on 2 LUNs configuration" outlined in the admin guide:

- 1 vol & LUN for our databases.

- 1 vol & LUN for our log files.

- 1 vol & LUN for snapinfo

(Note: this seems to be the main difference between SME and SMSQL - SME recommends putting your logs & snapinfo on the same LUN, whereas for SQL you separate them....).


- I have 5 databases, in total approx 8 GB of data.

- Total log file size for all databases is approx 4 GB (I need to shrink some log files...).

- Database growth and log file growth are relatively small. Our largest database is only 65% utilised and has not grown in file size in the last 2 weeks.

- The log file for our largest database stays around 1 GB in size (unless we miss a backup for a week, in which case we've seen it grow to around 10 GB).

Now, based on this information, can anyone tell me how I should go about sizing my LUNs and volumes, other than "volume is 2x LUN size plus snapshot space"??

Any suggestions or help would be much appreciated!



Re: SMSQL volume & LUN sizing question?

I don't really have a lot of insight with regard to sizing, but you can obviously grow your volumes (and your LUNs through SnapDrive) on the fly. For some stuff we use autogrow on the volumes where we can't predict the snapshot sizes.



Re: SMSQL volume & LUN sizing question?

Hi Marc,

I might just expand a little on the features Richard mentioned, because they are excellent.

Regarding your LUN sizes, it sounds like you've pretty much got them nailed and planned for the growth of your DBs/logs. As Richard said, you can grow LUNs on the fly using SnapDrive. Because of this feature I usually try not to overcommit too much space to a LUN, as you can only grow it, not shrink it - plan the LUN size as well as you can, then grow it if required with SnapDrive. (Unless you're using Windows 2008, in which case you can also shrink LUNs with SnapDrive.)

The volume size is where you can start to apply formulas for data size. You mentioned people have been pushing the old (LUN size x 2) + 20% - that is from the olden days, back when people rode bikes with a big front wheel and a little back wheel. Yes, it's the safest way, but who wants 120% of unused space 'just in case'? Actually, lots of people still want that - it's up to you how you want to trade off space vs your admin time.

Apply some local knowledge around how much you think the rate of change (ROC) will be, how many snapshots you wish to hold onto, etc., so it may look something like this:

- 12GB LUN to host the 8GB of DBs you currently have

- rate of change across your DBs is 5% per day

- you want to hold onto 2 weeks' worth (10 business days) of snapshots (10 x ROC)

It will look something like this:

- 12,000 MB x 5% = 600 MB daily ROC

- 600 MB x 10 daily snapshots = 6,000 MB

- Volume size = 12,000 MB + 6,000 MB = 18,000 MB, or 18 GB
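If you want to replay that arithmetic with your own figures, here is a minimal sketch of the same calculation. The function name is made up for illustration, and the example values (12 GB LUN, 5% ROC, 10 snapshots) are just the ones from this thread, not an official NetApp formula:

```python
# Sketch of the sizing rule above: volume = LUN size plus
# (daily rate of change x number of snapshots retained).

def volume_size_mb(lun_mb: int, daily_roc_pct: float, snapshots: int) -> int:
    """Return a suggested volume size in MB (hypothetical helper)."""
    daily_roc_mb = lun_mb * daily_roc_pct / 100    # change produced per day
    snapshot_space_mb = daily_roc_mb * snapshots   # space held by retained snaps
    return int(lun_mb + snapshot_space_mb)

size = volume_size_mb(lun_mb=12000, daily_roc_pct=5, snapshots=10)
print(size)  # 18000, i.e. the 18 GB volume from the example
```

Swap in your own measured ROC once you have a few days of snapshot sizes to look at.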

Create your 18GB volume and amend the fractional_reserve as required, but make sure you use the volume options auto-grow and/or snapshot auto-delete.

This means that if your ROC goes mental and the volume is about to run out of space, volume auto-grow will automagically increase the volume by the increment specified - you can even set a ceiling on how big the volume is allowed to grow. Example of the command:

vol autosize vol_name [-m size] [-i size] on

-m size is the maximum size to which the volume will grow. Specify a size in k (KB), m (MB), g (GB) or t (TB).

-i size is the increment by which the volume's size increases. Specify a size in k (KB), m (MB), g (GB) or t (TB).

You can also use snapshot auto-delete: when a volume starts to run out of space, it will automatically delete either the oldest or newest snapshots. If you use both of these options you should get the best use of space from your filer. Just be careful that you limit volumes to certain sizes and keep an eye on your aggregate(s) - don't write cheques your aggregate can't cash!!
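Put together, the two options might look something like this. This is only a sketch assuming Data ONTAP 7-mode syntax - sql_db_vol and the sizes are placeholders you would swap for your own, so check the command references on your ONTAP version before running anything:

```shell
# Let the volume grow in 1 GB steps up to a 24 GB ceiling (placeholder values)
vol autosize sql_db_vol -m 24g -i 1g on

# Enable snapshot autodelete on the same volume, dropping oldest snapshots first
snap autodelete sql_db_vol on
snap autodelete sql_db_vol delete_order oldest_first
```

The ceiling (-m) is what stops a runaway volume from eating the whole aggregate.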

The beauty of these options is that even if you don't have a clue about ROC, you can chuck your LUN in a volume only 20% larger and let it work itself out and grow as required.

This method may not be for everyone, but have a play around with those settings and see if it works for you.


Re: SMSQL volume & LUN sizing question?


I think I need to change my NetApp consultants!! Yeah, I had the feeling that the "x2 + 20%" thing wasn't the best way for us to manage our space - and I was getting frustrated at the amount of reserved space I had on my aggregates & couldn't use. Imagine, we've been following this rule for our Exchange & SQL servers, so although I haven't got TBs of data, I am using TBs of capacity to store it! I see your point that it's "easier to manage" this way, but I'd rather spend some more time on data management and make full use of my filer, rather than filling it up because we've wrongly sized all of our volumes!

Ross, thank you so much for that great answer. If I could award double points I would. I've been bashing my head on this for ages now, trying to work out volume sizes based on our LUNs and snapshots. So it's much appreciated!!

I think it would be fair to assume that this method could also apply to our Exchange server LUNs/vols? As long as we are careful & ensure we've enabled auto-grow and auto-delete of old snapshots, I guess there's no danger that our systems ever crash due to there not being enough space (as long as we have space on the aggregate)?

What I really like is that, as you point out, we don't always know the ROC for our LUNs - so allowing the volume to auto-grow and letting it work out its own size fits us much better.

Big front wheels indeed ;-)

Cheers Ross,


Re: SMSQL volume & LUN sizing question?

LOL - don't know about Genius, but thanks!

Sure, you can apply it to datasets where you see fit.  Some people won't apply it on systems such as Exchange, only because Exchange is usually seen as so important that they just won't go there.  I would start off with less important datasets and move up from there.

Yes, you're correct: if auto-grow/snapshot auto-delete are enabled and you always have space available in the aggregate, you won't crash.  The only thing to watch out for is that if you have multiple volumes using this technique, you need to make sure to put ceilings on the volumes so they can only auto-grow to a certain size, just in case something crazy happens - like someone running an Exchange defrag. The next snapshot will be HUUUGE due to all the blocks moving around, and if it happens again your volume will grow like mental. In scenarios like that you're better off letting snapshot auto-delete kick in, or just letting a snapshot fail and alert.

So you can see that if you don't have a ceiling and all your volumes continue to auto-grow, you'll soon run out of aggregate space and then BANG!

Are you using Operations Manager?  You can use that to alert you when volumes reach certain levels, auto-grow, auto-delete, etc., which helps you keep an eye on things.



Re: SMSQL volume & LUN sizing question?

Hi again Ross,

Sorry for the late reply, but I seem to have trouble accessing these forums from my computer (Firefox & IE7) - every time I tried to log in it would freeze!! lol

Never mind.

To answer your question, no, I am not currently using Operations Manager. If I had known we'd have so many volumes & that the default ONTAP tools offer no such reporting/alerts, then I definitely would have purchased Ops Manager from the beginning. I think now, though, we are going to see if we can set up alerting on volume sizes etc. through our Orion software (SNMP).

You also made an interesting point regarding disk defrags - we had an issue because of this last month. We suffered a corrupt LUN (Windows couldn't read it) & my database admin ran a Windows disk check on it - which I assume touches a lot of disk blocks - and suddenly our volume reported full and we had to increase it by 50% or something crazy. As the amount of data on the LUN hadn't increased, I couldn't understand why the volume was suddenly full - I guess it was for the same reasons.

I have to say it's all very interesting - the ONTAP software makes NetApp systems "easy to manage" on a very basic level, but doing the more advanced things requires a hell of a lot of research and learning (the SMSQL/Exchange guides are 500+ pages long, and I still have to come onto the forums to get a decent explanation of how to configure my server!).

Your help is much appreciated, and interestingly, following your advice I've been able to size my Exchange/SQL volumes more realistically (this 120% overhead really wasn't necessary for our snapshot policy, it seems).

Cheers Ross,


Re: SMSQL volume & LUN sizing question?

I agree with the sizing comments above: collect the ROC, leverage autogrow, and size appropriately.

I would only make one comment regarding snapshot autodelete.  With a SnapManager product, you do not want to use the filer's snap autodelete functionality, as it will cause the SnapManager product to lose sync with the snapshots in its catalog.

Newer SnapManager versions (SME 5.x and SMSQL 5.x) have implemented their own version of snap autodelete.  They are a bit different in implementation, as they MUST have a fractional reservation set above 0.  They work with a setting that monitors usage of the fractional reserve and will start deleting snapshots once a set amount of frac reserve has been consumed.  Keep in mind that fractional reserve space is last-resort space, used when the volume has no other space for the LUN to write into.  A 25% fractional reserve means that if 100 GB has been written in the LUN, then 25 GB of space will be reserved from the volume for the LUN to write into.  You can see this with a df -r command.

The SnapManager "Backup Autodelete Trigger" works off a percentage of fractional reserve usage.  In the case above, if you set the Backup Autodelete Trigger to 10% of the fractional reservation, then when the volume is completely full and the LUN needs to overwrite existing blocks, it will begin writing into frac reserve space - and when 2.5 GB has been written (10% of 25 GB), SnapManager will begin deleting older snapshots to free up space.
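The 2.5 GB figure falls out of two multiplications, which a tiny sketch makes explicit. The function name is hypothetical and the numbers are just the ones from the example above:

```python
# Worked version of the fractional-reserve example: a 100 GB LUN with a
# 25% fractional reserve sets aside 25 GB of overwrite reserve; a 10%
# Backup Autodelete Trigger fires after 2.5 GB of that reserve is used.

def autodelete_threshold_gb(lun_gb: float, frac_reserve_pct: float,
                            trigger_pct: float) -> float:
    """GB of overwrite-reserve usage at which backup deletion would start."""
    reserve_gb = lun_gb * frac_reserve_pct / 100   # overwrite reserve size
    return reserve_gb * trigger_pct / 100          # trigger point within it

print(autodelete_threshold_gb(100, 25, 10))  # 2.5
```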

From the SnapManager for SQL 5.0 Admin guide:

Page 470:

If the Data ONTAP Snapshot copy autodelete policy is enabled for a volume that stores SnapManager backup set components, either disable the policy or configure it so that it does not delete the SnapManager backup set components.

Page 475:

Backup Autodelete Trigger (%): A SnapManager fractional space reservation policy setting for the storage system volume that contains the LUN. The percentage of overwrite reserve utilization that triggers automatic deletion of SQL Server backup sets.

Snapshot Autodelete: For the storage system volume that contains this LUN, the state of the Data ONTAP Snapshot copy autodeletion feature: enabled or disabled. If this LUN stores SQL Server data files and is contained in a storage system volume for which the Data ONTAP Snapshot copy autodeletion feature is enabled, disable this feature on that volume or ensure that it is configured so that it will not delete SnapManager backup set components.


Re: SMSQL volume & LUN sizing question?

Hi Don,

Thanks for adding to this post - it's very relevant input, and you're right. We are using the latest version of SnapManager, and I have seen these "FR policy settings" in the program & wasn't sure how they worked with the ONTAP auto-delete policies etc.

You guys have been a great help & I will put this new found knowledge into action soon!


A very grateful Marc