As there's a new SharePoint application where the users are producing a lot of data (they're scanning about three gigs of documents into the database every day), the SnapInfo directory is growing endlessly.
We keep daily snapshots for 60 days, and we also get about three gigs of transaction log backups (*.trb files) in the SnapInfo directory each day.
The database uses 40 GB (plus ca. 12 GB in snapshots), its logs 3 GB (plus ca. 85 GB in snapshots), and the SnapInfo directory 90 GB (plus ca. 30 GB in snapshots). This database has been in production for 30 days, so we have to expect that the space used by SnapInfo plus the snapshots will double before the first snapshots are deleted. Is this normal behaviour? Is there something we can optimize? ~220 GB to back up a 40 GB database seems quite inefficient for snapshot technology, or am I wrong?
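To put those figures together, here is a rough back-of-the-envelope sketch in Python. The grouping of which numbers count as "backup overhead" is my own interpretation of the ~220 GB estimate, not something stated explicitly in the thread:

```python
# Rough sizing sketch based on the figures above (all values in GB).
db_live = 40             # Content DB itself
db_snapshots = 12        # snapshot copies of the DB
log_live = 3             # transaction log
log_snapshots = 85       # snapshot copies of the log
snapinfo_live = 90       # SnapInfo directory (TRB files)
snapinfo_snapshots = 30  # snapshot copies of the SnapInfo directory

# Everything beyond the live 40 GB database is backup-related overhead.
backup_overhead = db_snapshots + log_snapshots + snapinfo_live + snapinfo_snapshots
print(backup_overhead)  # 217 -- roughly the ~220 GB quoted above

# After 30 days in production with a 60-day retention, the snapshot-related
# space can be expected to roughly double before the oldest snapshots expire.
projected = backup_overhead * 2
print(projected)  # 434
```

This matches the concern in the post: the overhead comes almost entirely from the TRB files in SnapInfo and the snapshots of the high-change-rate log and SnapInfo volumes, not from the database itself.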
The SnapInfo folder of SMSQL stores the TRB files, as you've noted. In your case, since the TRB files are large and the change rate is high, the snapshots of the SnapInfo folder are consequently consuming a fair amount of space. This is expected behavior, since snapshot size is directly proportional to change rate, and you are essentially adding new content to the TLOG file every day. What I suggest is that you keep only one nightly backup of the Content DB (taken via SMMOSS) and delete older backups until the document upload phase is completed. After that you can fall back to your 60-day retention period, since at that point the change rate would be much lower. I would be happy to take a further look at your environment and suggest more strategies for space optimization.
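To illustrate why retention dominates here, the steady-state SnapInfo size is roughly the daily TRB volume times the retention period. This is a simplified model (it ignores the snapshot copies of the SnapInfo volume itself), using the ~3 GB/day figure from the thread:

```python
# Simplified model: SnapInfo holds one day's TRB files for each retained day.
def snapinfo_steady_state(daily_trb_gb, retention_days):
    """Approximate steady-state SnapInfo size in GB."""
    return daily_trb_gb * retention_days

print(snapinfo_steady_state(3, 60))  # 180 GB at the full 60-day retention
print(snapinfo_steady_state(3, 1))   # 3 GB with a single nightly backup
```

Note that 3 GB/day over the 30 days the database has been live also matches the 90 GB SnapInfo directory reported above, which supports the change-rate explanation.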
Thank you for the response. You gave me some useful information for our future planning...
First of all, that's just one SharePoint project; sadly, our executive board is planning to migrate much more to SharePoint. It's hard to say how much data we're talking about, because we're centralising a lot of systems at the moment. Some people here talk about 40 TB in the end. I'm not sure they know what that means for storage and backup.
Back to the space used in SnapInfo. Limiting the backup to one nightly snapshot isn't a solution for us, as the scanning process won't stop in the coming years and we have to guarantee restores going back two months.
As far as I know, transaction log backups are needed for up-to-the-minute restores, but not for restores with the SMMOSS GUI. It's important for us to be able to restore up to the minute (e.g. on LUN errors, etc.), but we don't need that as much as the ability to restore single items with the SMMOSS GUI.
Is there a way to configure some SMMOSS backups without the up-to-the-minute option?
Unfortunately, there is no way to configure SMMOSS backups without the up-to-the-minute option; it is built into the command that is sent to SMSQL. However, if you are using SMMOSS 2.0, you could look at archiving older backups and thus keep fewer online backups.
If you delete the TLOGs, then an SMMOSS restore will fail, because the restore operation expects the TLOG backup to be present.
SMMOSS 2.0 can restore at the item level from archived backups (snapshots) too. Please refer to the "Restore from alternate storage location" section on page 92 of the SMMOSS 2.0 Installation and Administration Guide.
As a result of your post we recently upgraded to SMMOSS 2.0.
But since then, we've had a problem: SnapManager generates a huge local index on the SQL server. Each job generates its own directory in the VaultClient\jobs directory on the server and writes about 130 MB of data. This behaviour is strange, as I have configured a CIFS share for this under "Device Manager" in the SnapManager web interface.
What am I doing wrong?
Here you see the content of the directories, all generated by the same job:
We need to get rid of the data stored on the SharePoint SQL server, as this server is a thin-provisioned VM backed up via snapshots, where the huge overwrite rate generates large snapshots. A full system drive does not help the stability of the virtual machine.