2009-09-27 09:14 PM - edited 2015-12-18 01:39 AM
When we start talking about 2x + delta on the data LUNs and then sizing of the snapinfo LUN for point in time restores, space starts adding up really quickly.
Are there any best practices around thin provisioning with SME/SMOSS? (whether it be vol auto-grow/snap auto-delete, fractional reserve, etc.)
Most of my concerns around thin provisioning revolve around the potential complexity in environments where there's not a dedicated storage administrator (and often not even a partially dedicated storage administrator).
Practical experience/what's worked well for you in the field in smaller shops (i.e., places where the channel sells) would be very helpful.
2009-09-28 06:05 PM
You are correct, using the 2x+delta method with 100% fractional reserve will definitely cause the amount of space needed to add up very quickly. I'll answer this question with respect to Exchange, but note that the methodology can be used for MOSS or SQL Server as well.
We have done a lot of work recently on sizing and the 2x+delta methodology, especially for Exchange. Volume sizing breaks down into two parts: the database volume and the transaction log volume. In both cases, the total LUN sizes required are the basis for calculating volume space requirements.
NetApp recommends first using the Microsoft sizing spreadsheet for Exchange to determine the appropriate amount of space needed for each LUN. Once we have the basic LUN capacity requirements, we can size each volume using the x+delta method, which tracks actual capacity needs much more closely than 2x+delta.
Transaction Log Volume Sizing
Accurate sizing for transaction log volumes depends on the size of the transaction log LUN, the amount of log data generated per day, and how long snapshot copies are retained.
Transaction log volume size can be calculated using the following formula:
Transaction Log Volume Size = Total transaction log LUN size + (daily snapshot copy space * (Online Backup Retention Duration + Fault Tolerance Window))
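The formula above can be sketched as a small helper. This is a minimal illustration, not a NetApp tool: the function name, parameter names, and the example numbers are all made up for clarity.

```python
def tlog_volume_size_gb(tlog_lun_gb, daily_snapshot_gb,
                        retention_days, fault_tolerance_days):
    """Transaction log volume = log LUN size plus the snapshot space
    held for the backup retention period and fault-tolerance window."""
    return tlog_lun_gb + daily_snapshot_gb * (retention_days + fault_tolerance_days)

# Example: 100 GB log LUN, 5 GB of log snapshot churn per day,
# 7 days of online backups retained, 3-day fault-tolerance window.
print(tlog_volume_size_gb(100, 5, 7, 3))  # 150
```

So in this hypothetical case a 100 GB log LUN calls for a 150 GB volume rather than the 200+ GB that 2x+delta would suggest.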
Database Volume Sizing
Provided an accurate change rate is known, database volume size depends on two key variables: the combined size of the database LUNs sharing the volume and the daily database change rate. It can be calculated using the following formula:
Database Volume Size = (Sum of the database LUN sizes that will share the database volume) + ((Fault Tolerance Window + Online Backup Retention Duration) * database daily change rate)
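The same formula in code form, again as an illustrative sketch with invented names and numbers rather than an official calculator:

```python
def db_volume_size_gb(db_lun_sizes_gb, daily_change_gb,
                      retention_days, fault_tolerance_days):
    """Database volume = sum of the database LUNs sharing the volume,
    plus daily change held for the retention and fault-tolerance periods."""
    return sum(db_lun_sizes_gb) + (fault_tolerance_days + retention_days) * daily_change_gb

# Example: two 500 GB database LUNs, 20 GB/day change rate,
# 7 days of backups retained, 3-day fault-tolerance window.
print(db_volume_size_gb([500, 500], 20, 7, 3))  # 1200
```

Compare that 1.2 TB against the roughly 2 TB+ that a straight 2x+delta calculation would produce for the same LUNs.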
Now that we've covered basic volume sizing, let's tackle the question of fractional reserve. Historically this has been set at 100%, which essentially means you wind up with a lot of headroom in the volume. The preferred method is to set fractional reserve to 0% and use snapshot autodelete.
So what exactly does autodelete do? When a low-space threshold is triggered on the volume, autodelete removes snapshots to reclaim space until the target free-space percentage is reached. I often get the question: "Why is autodelete recommended? Why can't I just use autogrow to grow my volume when the low-space threshold is hit?" You can use autogrow, but to guarantee space, autodelete must be used as well. The reason is that there are cases when autogrow can no longer grow a volume, such as a low-space condition in the aggregate. That's why the recommendation is to use autodelete. I will reply more in depth on the recommended settings for fractional reserve of 0% in a later post.
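The ordering described above can be illustrated with a toy model: autogrow runs first, but it is limited by free space in the aggregate, so snapshot autodelete is the backstop that actually guarantees space. This is purely a conceptual sketch of that decision sequence; all names and numbers are invented, and real ONTAP behavior is governed by the volume's autosize and autodelete settings.

```python
def reclaim_space(vol_free_gb, aggr_free_gb, snapshots_gb,
                  grow_step_gb=10, target_free_gb=20):
    """Model the low-space response: grow the volume while the aggregate
    can supply space, then delete oldest snapshots until the target is met."""
    # Step 1: autogrow, limited by free space in the aggregate.
    while vol_free_gb < target_free_gb and aggr_free_gb >= grow_step_gb:
        aggr_free_gb -= grow_step_gb
        vol_free_gb += grow_step_gb
    # Step 2: autodelete, reclaiming oldest snapshots first.
    while vol_free_gb < target_free_gb and snapshots_gb:
        vol_free_gb += snapshots_gb.pop(0)
    return vol_free_gb

# Aggregate nearly full (5 GB free), so autogrow alone cannot help;
# deleting the two oldest snapshots reclaims enough to pass the 20 GB target.
print(reclaim_space(2, 5, [12, 9, 15]))  # 23
```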
Thin provisioning in an Exchange or other application environment is completely fine. The trick to avoiding the capacity problems that could take the application offline is monitoring. If you are going to thin provision storage, you need to monitor volume capacity religiously. In environments where there is no dedicated storage admin or volume space is not monitored, I typically recommend against thin provisioning.
I hope this helps!
2009-09-29 08:53 PM
Good Evening Brad and Andrew - To Thin Provision or Not To Thin Provision, that is the million dollar question in the reseller space these days...
A lot of my decision really comes down to if I think the customer can handle it. If they are new to NetApp or maybe there won't be a dedicated storage admin, I will typically set them up with 2x + delta.
If they are more advanced and/or they are more willing to spend the time, then we will look into thin provisioning.
The kicker here has been dedupe. Not everyone wants to thin provision, but everyone wants to dedupe. How are you going to realize the savings from deduping a LUN if you don't thin provision? That is a whole other topic for another day.
2009-09-30 09:24 AM
You are absolutely correct, much of the thin provisioning question comes down to whether or not the customer can handle it. Like I said in my earlier post, the key to thin provisioning isn't the setup or the technology itself but monitoring. If the customer monitors space for their applications as well as the ongoing Snapshot change rate, then thin provisioning can be a great option. On the flip side of that coin, if the customer is the type that does not monitor, then thin provisioning is probably not the best option.
Also, let's not get thin provisioning and 2x+delta confused, as they are two different things. Thin provisioning is presenting more storage to the server than is actually physically present on the backing storage. When the environment approaches the point where additional capacity is required, more storage must be added.
2x+delta is just a method of calculating the total volume space needed, and it still builds in a bunch of extra space. Now, if the customer has no real idea of the sizing inputs... no mailbox limits, no sense of the average mailbox size, no idea how many snapshots they want to keep online... then 2x+delta is probably the answer for them. But using that methodology for a customer who does know that information just adds extra capacity to the volume.
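A quick back-of-the-envelope comparison makes the difference concrete. This assumes the common reading of 2x+delta as twice the LUN size plus the snapshot delta for the retention period; all figures are illustrative, not from the thread.

```python
db_lun_gb = 500            # database LUN size
daily_change_gb = 10       # measured daily change rate
retention_days = 7         # online backups retained
fault_tolerance_days = 3   # extra window for a failed backup run

# 2x+delta: double the LUN plus snapshot delta over the retention period.
two_x_delta = 2 * db_lun_gb + daily_change_gb * retention_days

# Change-rate-based sizing (the x+delta approach discussed above).
change_based = db_lun_gb + (retention_days + fault_tolerance_days) * daily_change_gb

print(two_x_delta, change_based)  # 1070 600
```

When the change rate is genuinely known, the measured approach here provisions a bit over half the space of 2x+delta for the same protection.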
The next item on the list is dedupe. I would completely agree, it's a popular item and everyone wants to dedupe their data. The only thing I would caution is that certain application data types lend themselves to dedupe better than others. For instance, dedupe of SharePoint data will usually yield better returns than dedupe of Exchange data.
2013-07-03 01:56 PM
I know this thread is quite old now, but the topics are still very relevant. I wanted to share an example of approximating an Exchange environment. The Exchange admin did not know his change rates, so we followed the advice found in this article to come up with some approximations.
I attached a screenshot of this client's Exchange environment for the past 60 days, produced using the method from the article above. With the average messages sent and received per day as well as the average size of those messages, we can estimate daily, weekly, and monthly change rates, which is just what you need to size the environment.
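The approximation described above can be sketched roughly like this. The function and its inputs are hypothetical (the original message statistics are only in the attached screenshot), and treating total message traffic as the change rate is a deliberate simplification:

```python
def change_rate_gb_per_day(mailboxes, msgs_per_mailbox_per_day, avg_msg_kb):
    """Approximate daily change rate from per-mailbox message traffic,
    assuming each sent/received message contributes its full size."""
    return mailboxes * msgs_per_mailbox_per_day * avg_msg_kb / (1024 * 1024)

# Illustrative inputs: 500 mailboxes, 100 messages/mailbox/day, 75 KB average.
daily = change_rate_gb_per_day(500, 100, 75)
print(round(daily, 2), round(daily * 7, 2), round(daily * 30, 2))
```

The daily figure feeds straight into the database volume sizing formula earlier in the thread, and the weekly/monthly multiples give a sanity check on snapshot retention planning.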
I hope this helps a bit.