ONTAP Discussions

What Database is causing my snapshots to be so large?

israelmmi

I know you can get general database trending and size tools from SQL Server, but I was wondering if there is any way from within SMSQL to see the size of each database that was backed up. For example, we suddenly get a snapshot backup which is over 30GB in size and are wondering what is causing that to happen.

Thanks!

Reuv

16 REPLIES

fjohn

You could look at the portion of space on a LUN that's occupied.   There's a sample script using the Data ONTAP PowerShell Toolkit here: http://communities.netapp.com/docs/DOC-6349
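
For instance, something along these lines (an untested sketch: "filer01" is a placeholder for your controller, and it assumes the toolkit's Get-NaLun and Get-NaLunOccupiedSize cmdlets behave as I remember them):

Import-Module DataONTAP

# Connect to the storage controller (replace the name and supply your own credentials)
Connect-NaController filer01 -Credential (Get-Credential)

# For each LUN, show its path and the space actually occupied by written blocks
# in the active filesystem; the occupied size is printed as returned by the cmdlet
foreach ($lun in Get-NaLun) {
    "{0,-60} {1}" -f $lun.Path, (Get-NaLunOccupiedSize -Path $lun.Path)
}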

J

radek_kubka

Hi,

This is fine provided each database has its own LUN. If multiple databases share a LUN, there is no way (as far as I know) to tell from the filer side which one caused a snapshot to grow.

Typical suspects though are database maintenance tasks (like defragmentation).

Regards,
Radek

israelmmi

Thinking about this problem again, I thought I would monitor the growth of my data files on a daily basis and compare that metric to the size of my snapshots. But looking at my data, I see that my data files don't grow all the time; rather, the data within the files changes (like a log file overwriting previous data). The snapshot will keep track of those changed blocks, all the while the data file itself stays the same size. Is that correct?

thomas_glodde

Yes, that's correct. A database may stay at, e.g., a 100GB file size, but within that file it will delete and/or overwrite blocks, and those changed blocks become your snapshot change rate.

thomas_glodde

In 9 out of 10 cases, it's the Microsoft SharePoint database causing abnormal change rates.

fjohn

When I create a volume, it's a logical construct and internally only consumes space for a small amount of metadata.  If I set the volume guarantee to "volume", then the full space of the volume is "reserved" from the space in the aggregate.  Likewise, if I have a LUN in a volume, even though the LUN may be 500GB, if only 12GB contains blocks that were written to, then it's actually only using 12GB.  It appears to take 500GB from the volume because the space for the LUN is reserved by default.  It's pretty easy to see this in action with System Manager: highlight the LUN, select "Edit", remove the checkbox for "Space Reserved" and click Apply.  You can edit it again and set the LUN back to space reserved without harming anything.

Now, as far as monitoring the space used inside the LUN, that process probably won't work very well.  You can, however, easily write a script using the Data ONTAP PowerShell Toolkit to monitor the occupied size of the LUN in the active filesystem using the Get-NaLunOccupiedSize cmdlet.   In fact, you can use the toolkit to learn quite a bit about what's going on inside your LUNs.  Clear-NaLunStatistics and Get-NaLunStatistics in particular would give you a very good idea of what write activity is happening on each LUN (there is a rough sketch after the toolkit link below).

You can find the toolkit here:  http://communities.netapp.com/community/interfaces_and_tools/data_ontap_powershell_toolkit
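
For the write-activity angle, a rough sketch (again untested; "filer01" is a placeholder, and it assumes the statistics cmdlets accept a -Path parameter like the other LUN cmdlets):

Import-Module DataONTAP
Connect-NaController filer01 -Credential (Get-Credential)

# Zero the per-LUN counters just before the window you want to measure
Get-NaLun | ForEach-Object { Clear-NaLunStatistics -Path $_.Path }

# ...after the SMSQL backup (or whatever interval you care about) has run,
# read the counters back to see which LUN took the most write activity
Get-NaLun | ForEach-Object { Get-NaLunStatistics -Path $_.Path } | Format-Table -AutoSize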

J

BrendonHiggins

I think you are going to struggle to get SMSQL to tell you what size each database is.  If you look in the backup reports created by SMSQL under c:\program files\SMSQL_SnapInfo\Report\Backup\ {may be somewhere else}, you can see the information SMSQL knows about as part of the job.  I could not see anything about database sizes there.

SMSQL allows you to run pre and post scripts, and these could be used to get the database size either from SQL (http://www.sqlteam.com/forums/topic.asp?TOPIC_ID=103587) or by asking the Windows file system on the host, something like this: http://www.tech-archive.net/Archive/Scripting/microsoft.public.scripting.vbscript/2009-03/msg00039.html.

The links are just the first Google hits I found and I have not used them, but you can see the idea (there is a rough PowerShell sketch below as well).  However, if you are using scripts, why bother running them from SMSQL...
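
For the file-system route, a rough PowerShell sketch could look like this (the paths are placeholders for wherever your MDF/NDF/LDF files live on the host):

# Show the current size of each SQL data and log file, biggest first;
# compare the output day over day to spot which files are growing
$DataPaths = "S:\SQLData", "T:\SQLLogs"   # placeholders

Get-ChildItem -Path $DataPaths -Recurse -Include *.mdf, *.ndf, *.ldf |
    Sort-Object Length -Descending |
    Format-Table Name, @{ Label = "Size (MB)"; Expression = { "{0:N2}" -f ($_.Length / 1MB) } } -AutoSize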

Hope it helps

Bren

PS

As SQL requires LUNs, the filer will only see the data as a single LUN and how much space it uses in total, so you must work from the host side.  If you want to see how much change is happening in the databases, you could get the size of the transaction logs (TLs) from the SnapInfo drive.  This will give a good approximation for each database (as long as you are not using the simple recovery model).

israelmmi

Unfortunately not all of our databases are running in simple mode, and the log files generated will not tell you about the data LUN growing. So we are kind of stuck then, not having a clue about who/what is causing these large spikes in snapshot growth.

radek_kubka

Hi,

As per my original post, the most likely scenario is that one (or more) of your databases / hosts runs a regular maintenance task, like defragmentation. This will not cause the data to grow, but it will inflate existing snapshots.

Can't you simply ask your DBAs which one of them runs DB / OS defrag on a regular basis?

Regards,
Radek

israelmmi

Truth is I forgot about that post . . . 

But on second thought, since this is a Windows 2008 R2 server, there is the built-in defrag task which runs every Wed at 1AM. I guess that would explain growth when I run SMSQL on Wednesday evening, but not on the other days. Other than that, we don't run defrag here.

But then again, editing my comment after looking more carefully: I see that the default defrag task's schedule is disabled, so that isn't the cause. And what is really bothering me is that, for example, last Thursday I had a snapshot created by SMSQL which was 41GB, and I have no idea what caused it.

Now I know that you could say the same thing for every product, whether it's just a plain old file share where users copied/changed/deleted a bunch of data, or an Exchange database. I'm just saying that it would be nice (if it's even possible) to be able to explain such exceptional snapshot growth.

radek_kubka

"this is a Windows 2008 R2 server, there is the built-in defrag task which runs every Wed at 1AM. I guess that would explain growth when I run SMSQL on Wednesday evening, but not on the other days."

Well, we are almost there, but not exactly. I mean running a defrag can (& usually does) inflate existing snapshot(s), but shouldn't impact future snapshots.

bora

Is there any re-indexing of the SQL databases occurring?  Re-indexing SQL databases will cause the next snapshot taken of the databases to be much larger than it normally would be.
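
If you are not sure whether an index maintenance job ran, one rough way to check (assuming the SQL Server PowerShell cmdlet Invoke-Sqlcmd is available and "SQLSERVER01" is your instance name) is to pull the recent SQL Agent job history and line it up against the dates of the oversized snapshots:

# Recent SQL Agent job completions, newest first
$query = @"
SELECT j.name, h.run_date, h.run_time, h.run_status
FROM msdb.dbo.sysjobhistory AS h
JOIN msdb.dbo.sysjobs AS j ON j.job_id = h.job_id
WHERE h.step_id = 0  -- job outcome rows only
ORDER BY h.run_date DESC, h.run_time DESC
"@

Invoke-Sqlcmd -ServerInstance "SQLSERVER01" -Database msdb -Query $query | Format-Table -AutoSize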

Cheers,

Bora

israelmmi

We only reindex once a month or so.

israelmmi

Brendon,

Thinking about your suggestion again, do you know if there is a log saved (except for the SMSQL one) which lists the databases backed up and the size of the Tx log backed up (for a Full Recovery Model DB)? I know I could just search the folders and sort by size, but that is a little laborious (doing the same thing each time) and it would be much nicer if we could automate it.

BrendonHiggins

It is not something I have played with yet, but if you need to know, I would recommend looking at these web pages.

MS VSS

http://technet.microsoft.com/en-us/library/cc785914%28WS.10%29.aspx

MS SQL Backup via VSS

http://msdn.microsoft.com/en-us/library/cc966520.aspx

Gets deep very fast...

Hope it helps

Bren

israelmmi

Deep . . .  I was already lost at the first word . . .

I wrote this PowerShell code, which may do what I need. Right now we only back up a few select DBs with SMSQL, although we plan to back up many more shortly. Again, this solution will only work in a particular scenario (Tx logs only, and only if the Full Recovery Model is used).

# Select all the backups which were taken since yesterday
$Date = [DateTime]::Now.AddDays(-1)
$Path = "SnapInfo directory where the log backups are stored"

# Only include files, not directories
$Files = Get-ChildItem -Recurse -Force $Path |
    Where-Object { $_.LastWriteTime -gt $Date -and -not $_.PSIsContainer } |
    Sort-Object Length -Descending

# Write header output
""
"{0,-50} {1,8}" -f "File Name","Size (MB)"
"{0,-50} {1,8}" -f "---------","---------"

# Write the file name and size in MB
foreach ($File in $Files) {
    "{0,-50} {1,8}" -f $File.Name, ("{0:N2}" -f ($File.Length / 1MB))
}
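
If totals are more useful than individual files, the same $Files list can be grouped by folder (a rough add-on; whether one folder maps to one database depends on how your SnapInfo directory is laid out):

# Sum the backup file sizes per folder, in MB
$Files | Group-Object DirectoryName | ForEach-Object {
    "{0}  {1:N2} MB" -f $_.Name, (($_.Group | Measure-Object Length -Sum).Sum / 1MB)
}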

To sum up what we have learned so far - a snapshot can be large because:

  • Database file (MDF, NDF) growing
    • I can catch this by monitoring the physical file size
  • Transaction Log Growing
    • Full Recovery Mode:
      • Check the size of the Tx log stored in SnapInfo, or
      • Check the physical size of the Tx Log file
    • Simple Mode:
      • Check the physical size of the Tx Log file
      • If this is after a Full Backup and your Tx Log is now overwriting existing data without changing the file size, you're flat outa luck!
  • Rebuild Indexes or Defragmentation
    • You just need to know when these tasks run and expect the worst
  • DeDup
    • This wasn't suggested, but in the past we have found that all DeDup really does is create a BIG FAT snapshot (since it modifies all the blocks)

Anyone have any other ideas?

Thanks!
