LUN Reserves

I'd like some opinions on the below, TIA:

Now, every storage vendor has limitations, so let's keep this all in perspective, shall we? Everything is relative.

When you create a LUN on NetApp storage, you have the option to enable space reservations on the LUN. This is of course to ensure writes to the LUN succeed should the volume ever fill up due to snapshots. The general consensus is to set your volume size to 2.2-2.5 times the size of your desired LUN. So let's say you wish to create a LUN 100GB in size. To be on the safe side you would need to create a volume that is 250GB in size, and only 100GB of that is actually usable space to the host.
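As a rough sketch of the sizing arithmetic above (the 2.2-2.5x figure is the rule of thumb from this post, not an exact NetApp formula, and the function name is mine):

```python
def volume_size_for_lun(lun_gb, factor=2.5):
    """Rule-of-thumb volume sizing for a fully space-reserved LUN:
    roughly 2x the LUN (the LUN plus 100% fractional reserve) plus
    room for snapshot data, ~2.2-2.5x in total."""
    return lun_gb * factor

lun_gb = 100
vol_gb = volume_size_for_lun(lun_gb)   # 250 GB volume needed
overhead_gb = vol_gb - lun_gb          # 150 GB the host never sees
```

That 150GB of "lost" space is exactly the trade-off discussed in the rest of this thread.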


*     *     *
Now here is a moment where we get to keep everything relative. Ask 5 different people what “usable space” is and you will get 5 different answers. You are “using” the space: you have space-efficient, near-instantaneous backup and restore points. One could also say that because of this feature, the space is “useful” to the host as well. But just for this case, let's say that it is not.
*     *     *
So I have now ensured writes to my LUN, BUT I now have 150GB of space I cannot use. So what is the solution? Disable space reservations on the LUN. In consequence, what have I lost? I can still take snapshots, but if my snap reserve fills up, writes will fail and the LUN will go offline. We can combat this by enabling snap autodelete. Well, that's fine and dandy, but let's consider a worst-case scenario for a moment. Say I have a host connected to this non-space-reserved LUN. The host gets a virus that changes 100% of the blocks in my LUN. I am unaware of this virus, and it has been going on for a week while I was on vacation. The snapshot schedule comes around and takes a snap. Oh, we had a 100% change, not enough room for a snapshot, let's drop one. That one happens to be the last non-virus-infected snapshot. I get back from vacation and discover my host is hosed. So, like any regular NetApp admin, I go and create a LUN clone, or use FlexClone to clone the LUNs from the snapshots to decide which one I want. I discover NO GOOD SNAPSHOT!!! Not good.

Now let's say we have the same scenario, but this time I do not have snap autodelete turned on. The writes just fail and the LUN goes offline. Everyone in the office hates me, but at least I am able to get back to a working state.

Another option could be to autogrow the volume. However, now you have something you need to manage much more closely. If using dedupe, you have to ensure at least 3% free space in the aggregate.
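A trivial sketch of that headroom check (the 3% threshold comes straight from the sentence above; the function and its parameters are mine for illustration, not an ONTAP interface):

```python
def dedupe_headroom_ok(aggr_total_gb, aggr_used_gb, min_free_pct=3.0):
    """Return True if the aggregate keeps at least min_free_pct free,
    per the 'at least 3% free with dedupe' guidance above."""
    free_pct = 100.0 * (aggr_total_gb - aggr_used_gb) / aggr_total_gb
    return free_pct >= min_free_pct

# A 1TB aggregate with 950GB used still has 5% free: OK.
# The same aggregate at 980GB used is down to 2% free: not OK.
```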

Which do you choose?

Another thing to think about is that this scenario of keeping snapshots, choosing to keep volumes online, or using snap autodelete is not just true of LUNs. As a matter of fact, it's practically a law of physics. The same scenario applies to CIFS and NFS as well. To truly protect your data in an online fashion, such as with snapshots, you must reserve 100% of the space somewhere in the volume.

P.S. I am in no way saying that NetApp storage is bad. Far from it, actually. I LOVE NETAPP!!! If you could guarantee me that they will never go out of business or do ANYTHING unethical, I would get their logo tattooed on me, Polynesian style! Some other storage vendors don't even let you disable space reservations for LUNs. To me, NetApp is the most flexible and agile storage vendor out there. What I want are strategies and best practices: ways to tell my customer, “I know you are not happy about the space reserves, but this is why you need to do it, or these are the risks you take if you don't, and this is how the NetApp experts recommend you do it.”

Re: LUN Reserves

Hi Garry,

Many thanks for bringing that topic (back) to the table.

I do not have any firm recommendations - what I do have are doubts, concerns & just some thoughts.

With regards to auto-grow vs. snap autodelete: in my opinion the former should be set to kick in first, then the latter. In your "nasty" virus-spread scenario, the only answer would be that in a large environment you would have many LUNs in many volumes, with some (decent) spare capacity at the aggregate level. Hence, statistically, you should be fine if, say, the LUNs within just one (or a few) volumes go wild.

My big concern though is about the reallocate command, which in some environments is run quite regularly - I tried to kick off a discussion about the potential impact of this, but without any luck so far:

http://communities.netapp.com/thread/4431?tstart=0

Regards,

Radek

Re: LUN Reserves

I'm in the process of writing up a really detailed answer to this kind of question for the storage efficiency blog. But let me ask you a few questions.

1. Do you absolutely, positively need to keep your snapshots, no matter what happens?

       If the answer is "YES", then the existing 100% fractional reserve is the way to go.

2. Are you happy to lose a snapshot under exceptional circumstances, provided the main LUN never goes offline, no matter what happens?

      If the answer is yes, then keep your LUNs space-guaranteed, drop the fractional reserve to some lower number (possibly zero), and use autogrow and autodelete to manage the space used by the snapshots.

3. Are you really confident of your LUN space usage, and do you have good ways of monitoring utilisation and good purchasing practices that allow you to buy your storage as you need it?

     If the answer is yes, then you can move to a completely reserveless thin provisioned environment.

For most people, thin provisioning the snapshot reserves makes the most sense. Snapshots are very cool, but if you're running SnapVault, or some form of tape backup, then losing one is hardly the end of the world. Autogrow and autodelete in the most recent versions of ONTAP make this safe and relatively easy to do. For more information please read TR-3483 and TR-3563, especially section 6 of TR-3483, entitled "EFFICIENT PROVISIONING WITH SNAPSHOT COPIES".
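The three questions above boil down to a simple decision tree. As a purely illustrative sketch (the strategy strings are my paraphrases of the recommendations above, not ONTAP settings):

```python
def reserve_strategy(must_keep_snapshots, lun_must_stay_online,
                     confident_in_monitoring):
    """Map the three questions above to a provisioning approach."""
    if must_keep_snapshots:
        return "100% fractional reserve"
    if lun_must_stay_online:
        return ("space-guaranteed LUN, lower/zero fractional reserve, "
                "autogrow + snap autodelete")
    if confident_in_monitoring:
        return "fully thin provisioned, no reserves"
    # The post's default for most people:
    return "thin provision the snapshot reserve"
```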

I'll cover this again in greater "step by step" detail on the storage efficiency blog in the next few weeks.

Regards

John Martin

Re: LUN Reserves

Hi John,

Many thanks for jumping into this discussion.

So, here comes a pretty straightforward question: what's your take on the reallocate command and its potential impact on snapshot usage, and consequently on fractional reserve?

Regards,
Radek

Re: LUN Reserves

Hi,

Add deduplication to the mix. Reallocate and deduplication are contradictory by nature: you can't have both on at the same time.

The fact is that ONTAP offers so MANY features that some of them will clash. However, I'd rather be in a position where I get to choose what I want to turn on, and where, than not have the option to choose.

Eric

Re: LUN Reserves

It's a good question; I'll do a little more thinking on this and get back to you. Having said that, the ultimate answer is that it depends on which version of ONTAP: with 7.3, reallocate doesn't consume snapshot space for volumes/aggregates created after 7.2. With pre-7.2 volumes, some planning is required.

Regards

John Martin

Consulting Systems Engineer

NetApp Australia Pty. Ltd.

+61 2 9779 5653 Direct

+61 412 313 064 Mobile

John.Martin@netapp.com

www.netapp.com

Re: LUN Reserves

Hi John,

Here is from the Dedupe TR:

Deduplication and Read Allocation

For workloads that perform a mixture of random writes, and large and multiple sequential reads, read reallocation improves the file layout and the sequential read performance.

When you enable read reallocation, Data ONTAP analyses the parts of the file that are read sequentially. If the associated blocks are not already largely contiguous, Data ONTAP updates the file layout by rewriting those blocks to another location on disk. The rewrite improves the file layout, thus improving the sequential read performance. However, read reallocation might result in more storage use if Snapshot copies are used. It might also result in a higher load on the storage system. If you want to enable read reallocation but storage space is a concern, you can enable read reallocation on FlexVol volumes using the space_optimized option. The space_optimized option conserves space but can slow read performance through the Snapshot copies. Therefore, if fast read performance through Snapshot copies is a high priority to you, do not use space_optimized.

A read reallocation scan does not rearrange blocks on disk that are shared between files by deduplication on deduplicated volumes. Since read reallocation does not predictably improve the file layout and the sequential read performance when used on deduplicated volumes, performing read reallocation on deduplicated volumes is not supported. Instead, for files to benefit from read reallocation, they should be stored on volumes that are not enabled for deduplication.

Eric: With regards to which ONTAP version one is running, I believe that is something you have to keep in mind only if you have dedupe on, and depending on how full you run your aggregate - not necessarily depending on whether you run reallocate or not:

FROM TR AGAIN:

When deduplication runs for the first time on a flexible volume with existing data, it scans the blocks in the flexible volume and creates a fingerprint database, which contains a sorted list of all fingerprints for used blocks in the flexible volume.

The total storage used by the deduplication metadata files is approximately 1% to 6% of the total data in the volume. Total data = used space + saved space, as reported when using df -s (that is, the size of the data before it is deduplicated). So for 1TB of total data, the metadata overhead would be approximately 10GB to 60GB.

In Data ONTAP 7.2.X, all the deduplication metadata resides in the flexible volume. The deduplication metadata uses 1% to 6% of the logical data size in the volume.

Starting with Data ONTAP 7.3.0, part of the metadata resides in the volume and part of it resides in the aggregate outside the volume.

During an upgrade from Data ONTAP 7.2 to 7.3, the fingerprint and change log files will be moved from the flexible volume to the aggregate level during the next deduplication process following the upgrade.

If you’re running Data ONTAP 7.2.X, leave about 6% extra space inside the volume on which you plan to run deduplication.

If you’re running Data ONTAP 7.3, leave about 2% extra space inside the volume on which you plan to run deduplication, and around 4% extra space outside the volume in the aggregate, for each volume running deduplication.
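Putting the TR figures above into a quick sketch (the percentages are the approximate numbers quoted; the function and its names are mine, for illustration only):

```python
def dedupe_metadata_gb(total_data_gb, ontap_73=True):
    """Approximate dedupe metadata overhead per the TR figures above:
    7.3:   ~2% inside the volume + ~4% in the aggregate (per volume);
    7.2.x: ~6%, all inside the volume.
    Returns (in_volume_gb, in_aggregate_gb)."""
    if ontap_73:
        return 0.02 * total_data_gb, 0.04 * total_data_gb
    return 0.06 * total_data_gb, 0.0

in_vol, in_aggr = dedupe_metadata_gb(1000)  # 1 TB of total data on 7.3
# roughly 20 GB inside the volume plus 40 GB in the aggregate
```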

Sorry, this is a bit off-topic, but it's important to know.


Re: LUN Reserves

Interesting reading.

This relates to something I've been thinking about regarding deduped volumes and reallocates.  (This thread is also related: http://communities.netapp.com/message/12721 )

Given all these factors, what about reallocate when spindles are added?

I have been contemplating adding disks to an existing aggregate that's full of heavily deduplicated data (VMFS datastores) and reallocate seems to be recommended to spread the data across the new disks.

Thoughts about this case?

Thanks,

Dan

Re: LUN Reserves

Daniel,

The post from eric barlier quoting sections of the Dedup DIG pretty much answers the questions you've asked, but I'll add my 2 cents worth.

Reallocation does one of two things:

1. Defragments "free space", which makes subsequent write operations more effective. This is performed by aggregate-level reallocation, or realloc -A. It works just fine in dedupe environments and can sometimes improve write performance for the array; however, given that you're about to add a big chunk of new, unfragmented free space, there isn't much point.

2. Locates logical blocks in sequential order on the physical disk. This makes sequential reads more effective and is done via a volume- or file-level reallocate.

Now, when you add new disks and run a file- or volume-level reallocation, the easiest and most logical thing to do is to write the files to the new pristine free space you've just added until it's all used up, after which the write allocator starts using what are hopefully some new largish areas of unfragmented free space that have been freed up by moving files/blocks off the old disks. This has the advantage that those spindles can now be used for reads, so you get a boost not only to sequential read performance but to random reads too. YAY!

As a result, a good rule of thumb is that when you add new disks, it's not an entirely bad idea to run a reallocate.

HOWEVER - in the case of a heavily deduplicated volume, none of the blocks which have been shared will get moved. I personally think the "not supported" tag is a little heavy-handed, as there has been lots of testing done to ensure that nothing bad happens. Unfortunately, in many cases nothing "good" happens either, or at least potentially not a lot of "good". The non-shared blocks will probably get distributed (or at least they ought to) to the new disks, so there will be some benefit.

For example, if a file/LUN has large contiguous shared blocks followed by large contiguous non-shared blocks or vice versa, we may see layout improvement after running vol/file reallocate. But if the file has shared blocks interspersed, it may not help.

My feeling is that if you're not seeing a performance issue, then allow the aggregate to rebalance itself naturally over time. If you are seeing a performance issue, open a support call to make sure the problem is addressed holistically. If you're running 7.3.1 or above, consider turning on read reallocation too, as this will speed the process along nicely.

Regards

John Martin

Consulting Systems Engineer

NetApp Australia Pty. Ltd.

+61 2 9779 5653 Direct

+61 412 313 064 Mobile

John.Martin@netapp.com

www.netapp.com

Re: LUN Reserves

Wow, I can't believe my eyes - it was so quiet for so long & now it looks like the discussion (eventually) is taking off!

Coming back to what I raised as a concern around reallocate & fractional reserve:

In a nutshell, to me the key problem is that reallocating a heavily fragmented volume (which typically would be the most obvious candidate for this operation) can massively inflate the space taken by existing snapshots; hence FR set to 0%, with the safety net of vol auto-size and snap autodelete, may not be good enough.

As already said I tried to encourage people to comment on that in my thread http://communities.netapp.com/thread/4431?tstart=0

My own findings are two fold though:

- physical reallocation (-p option) should be fine to run as (allegedly) it doesn't cause snapshots to grow

- when I tried to run a 'normal' reallocation (without the -p option, but with the -f option), it refused to run and returned a nice error, "cannot be started on volumes with existing snapshots" (or something very similar)

Does anyone know whether it has changed recently, i.e. non-physical reallocation won't start if a volume has snapshots?

Regards,
Radek