Data Backup and Recovery

How to set transaction log backup retention (SMSQL/SnapInfo)

F_SCHUBERT
10,590 Views

Hello Community.

 

We have a problem with the SnapInfo LUN of our SMSQL server. It continuously grows, so we are constantly running out of space and have to increase the size of the LUN periodically.

Our general retention is 28 days and we run 6 backups a day. However, the SnapInfo directory still holds backups going back to the very beginning of our backups.

The snapshot retention of the SQL RDMs is working fine.

 

Question:

Where should we set the retention?

 

Environment:

SharePoint 2010 on MS Windows 2008 R2 SP1

SMSQL 7.0.1

SDW 7.0.3

SMSP 8.1P1

 

Any help or ideas are much appreciated 🙂

Frank


10 REPLIES

deepuj
10,524 Views

Hi,

 

Do you see snapshots older than 28 days in your SnapInfo? 

 

 

Thanks

If this post resolved your issue, help others by selecting ACCEPT AS SOLUTION or adding a KUDO.

dmauro
10,518 Views

Hello Frank,

Assuming you have set up a retention rule of max 28 days for your backup management groups and you still see snapshots older than 28 days:

In general, what tends to break SnapManager applications is the volume option 'autodelete', which deletes snapshots in ONTAP without notifying SnapManager; this in turn leaves the SnapManager backup set incomplete. When SMSQL then looks for older snapshots to delete and cannot find one of the snapshots in the backup set, it exits in order to preserve the potentially needed remains of the backup set (think of accidental deletions when in fact you still need either the database snapshot or the log snapshot).

So what I would do is check that option and disable it if it is enabled (if you are worried about space, enable autogrow rather than autodelete). Autogrow is safer, but you need more space in the aggregate.
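
For reference, assuming a 7-Mode controller, those checks look roughly like the lines below. The volume name sql_snapinfo_vol is only a placeholder for your own SQL/SnapInfo volume; verify the options against your ONTAP version before changing anything.

filer> snap autodelete sql_snapinfo_vol show    # show current autodelete settings (state should be "off")
filer> snap autodelete sql_snapinfo_vol off     # disable snapshot autodelete on the volume
filer> vol autosize sql_snapinfo_vol            # check whether volume autogrow is enabled
filer> vol autosize sql_snapinfo_vol on         # enable autogrow instead (needs free space in the aggregate)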

 

A cleanup in SMSQL is then needed, using the Delete Backup wizard as shown in the attachment.

 

If this does not help, I would log a case with support.

Domenico Di Mauro

F_SCHUBERT
10,348 Views

Thanks for your help.

 

Indeed, we delete snapshots via a PowerShell script when we run out of space, but the snapshots themselves do not really hurt. The problem is the transaction logs inside the index LUN.

We last ran "delete backups older than 28 days" (224 snapshots) in December, and again today, Feb 20th.

The oldest backup was from Sep 9th, which is pretty strange.

We also run SMSP 8.1P1.

We will open a case.

tom_dewit
10,342 Views

We now have 3 customers on SMSP 8.1P1 with this same SnapInfo retention problem. Cases are open with NetApp support, but until now there has been no solution except cleaning up manually.

 

Tom

 

F_SCHUBERT
10,183 Views

Can you describe how to clean up manually, since these are transaction logs I suppose? Or do you just mean deleting snapshots?

tom_dewit
10,181 Views

Manual cleanup can be done on the SnapInfo drive on the SQL server. Delete all the metadata directories for backups older than the retention (watch out for different retention policies like daily and weekly). Also delete all the transaction log backups older than your retention settings for every database. Next, check whether there are snapshots of the SnapInfo LUN that are older than your retention and delete those as well.
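
To give an idea of the file-level part, a rough PowerShell sketch is below. The SnapInfo path and the 28-day window are placeholders for your own environment; review the -WhatIf output and your retention policies before actually deleting anything.

$snapInfoRoot = "S:\SMSQL_SnapInfo"       # placeholder: path to your SnapInfo directory
$cutoff = (Get-Date).AddDays(-28)         # placeholder: your retention window

# Find files under SnapInfo older than the cutoff and show what would be removed.
Get-ChildItem -Path $snapInfoRoot -Recurse |
    Where-Object { -not $_.PSIsContainer -and $_.LastWriteTime -lt $cutoff } |
    Remove-Item -WhatIf                   # drop -WhatIf only after reviewing the list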

 

This can be a lot of work that has to be repeated every few weeks. A better solution may be to schedule an extra SMSQL job that makes a normal SQL backup (with the retention set to the same values as in SMSP), because this job will then clean up the transaction logs and snapshots that SMSP isn't cleaning up. Yes, this is an extra backup of your SQL databases, but since there are no extra transaction logs to back up, it will be a fairly fast backup if you schedule it a few hours after your SMSP backup. We only use this job for maintenance cleanup; a rough sketch follows below.
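
As an illustration only: an SMSQL job is ultimately a new-backup call in the SMSQL PowerShell snap-in, so the extra cleanup job might look something like the line below. The server name, database list, and parameter names here are assumptions recalled from the SMSQL cmdlet help, not verified syntax; check get-help new-backup in your SMSQL version, or let the SMSQL scheduling wizard generate the command for you.

# Hypothetical example - 'SQLSRV01' and the database list are placeholders, and the
# parameter set varies by SMSQL version (verify with: get-help new-backup -detailed)
new-backup -svr 'SQLSRV01' -d 'SQLSRV01', '1', 'SharePoint_Config' `
    -mgmt standard -RetainBackupDays 28 -lb -trlog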

 

In the meantime there is still no progress on the support cases. The cases are now assigned to development. If you have the same issue, please create a NetApp support case to give this bug more visibility.

 

Grtz,

Tom

 

 

F_SCHUBERT
10,058 Views

I think I found the cause of the problem.

 

SMSP or SMSQL checks DFM for backups. Our DFM had not noticed the deletion of the snapshots on the secondary, so it kept all the transaction logs.

After I cleaned up DFM manually with something like "dfm backup delete ..." until it was back in sync with the actual snapshots, the next DRP job deleted all the transaction logs as it was supposed to.
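
For anyone hitting the same thing, a sketch of the kind of cleanup I did is below. This only mirrors my recollection of the command; the exact subcommands and backup names depend on your DFM/Protection Manager version, so check the CLI help and be careful what you delete.

# On the DFM server: list the backups DFM still knows about and compare them
# against the snapshots that actually exist on the secondary.
dfm backup list

# Delete the stale entries until DFM matches reality again
# (the backup name/ID below is a placeholder).
dfm backup delete <stale_backup_id>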

 

A nice side effect was that the retention job duration dropped from 1:30 h to 5 minutes, and the backups also run faster now.

tom_dewit
8,277 Views

https://kb.netapp.com/support/index?page=content&id=3012310&locale=en_US&access=s
tom_dewit
10,400 Views

Hi Frank,

 

I'd recommend opening a case with NetApp Support.

 

We have the same issue at two customers already. The retention is set correctly, but SMSP/SMSQL doesn't delete the SnapInfo directories, so SnapInfo keeps growing indefinitely.

 

In our case the problems started after upgrading SMSP from 8.0 to 8.1 (P1).

 

We already have two cases open, and extra cases help NetApp Support understand that this is a general issue.

 

Grtz,

Tom

 

 

JennyC
10,384 Views

Hi Tom,

 

SMSQL is responsible for maintaining the proper retention of the SnapInfo folder. You said you noticed the issue when customers upgraded from SMSP 8.0 to 8.1P1. SMSP generates the PowerShell command that SMSQL then runs, so that would be something to check in the new-backup log of the job.

 

Opening a NetApp support case is the right idea as logs will need to be analyzed.

 

-Jenny
