ONTAP Discussions

Theory and practice, some statistics: A-SIS gain after FULL backups?

romainvigouroux

Hi all,

We plan to implement backup to disk via BackupExec.

Can anyone tell me about the potential gains if we use dedupe on the volume used to store 2 FULL backups?

Would 2 FULL backups mean roughly half saved, 20%, or something more or less? (My own rough arithmetic is below, after the list.)

I know this question is not easy to answer because it depends on the type of data backed up, but maybe someone could share their experience for, let's say, these 2 types of data:

- Microsoft Exchange full backup

- Office Data such as word or excel files..
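
To put numbers on what I am hoping for, here is my own naive back-of-the-envelope in Python. It is purely an assumption that only a few percent of blocks change between the two fulls and that dedupe removes every remaining duplicate block, which may well be optimistic:

# Rough estimate only: assumes the 2nd full differs from the 1st by a given
# fraction of blocks and that dedupe removes every remaining duplicate block.
def two_full_savings(full_size_gb, change_rate):
    written = 2 * full_size_gb                 # logical data written
    stored = full_size_gb * (1 + change_rate)  # unique blocks actually kept
    return 1 - stored / written                # fraction saved

for rate in (0.02, 0.05, 0.10):
    print(f"{rate:.0%} change between fulls -> ~{two_full_savings(100, rate):.1%} saved")
# 2% -> ~49.0%, 5% -> ~47.5%, 10% -> ~45.0%
# i.e. two fulls approach, but never quite reach, a 50% saving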

Thanks a lot

Romain

6 REPLIES

romainvigouroux

Hello all,

I think I found some interesting information via the online calculator at http://www.dedupecalc.com/.

However, if anyone could post some real-world stats from their experience, it would be a bonus.

Cheers

Romain

localizedinsanity

Having just done this (and still tuning it), I can say that 2 full backups will not dedupe completely. It took 3 full backups to start seeing dedupe savings with BackupExec, and I still have not figured out why, either in the lab or at the customer's site. That said, full backups after the 2nd one dedupe according to the normal rules of change rate. I'll be back at the site Tuesday and will post the actual df -sh output so you can see what we are getting on OS backups, Exchange, and file shares.

romainvigouroux

Hi Steve,

thanks a lot for your posts..

Cheers

Romain

localizedinsanity

As promised, here is the real-world experience using BackupExec with 1- and 3-month retention times. You may need to reformat it to read it easily.

Daily change rates are not included because of high variability. Each listed volume protects a unique set of data. For the volume setup I used 1TB thin-provisioned volumes so that space could be best utilized by the aggregate. We have about 14TB of volumes provisioned on 10TB of disk and let the system manage its own usage. There are other volumes that don't use dedupe because the single-pass full turned out to be too large to fit 3 copies before dedupe kicked in. I don't anticipate we'll hit the roughly 19TB deduplicated-data limit per volume while keeping only 1-3 months of backups, but it's something to check before committing to a larger environment.

df -sh
Filesystem used saved %saved
/vol/os_partitions/ 795GB 1762GB 69% 270GB of data - weekly full + daily diff (retain 1 month)
/vol/exchange_data/ 254GB 1828GB 88% 63GB of data - daily full with hourly diff (retain 1 month) + monthly full (retain 3 months)
/vol/ad_grt_data/ 49GB 301GB 86% LUN for Granular Recovery (CIFS doesn't support NTFS sparse files) - 13 GB of data - daily full (retain 1 month)
/vol/ccomsrv_data/ 459GB 490GB 52% 200GB of data - weekly full with daily diff (retain 1 month)
/vol/data_drives_monthly/ 517GB 436GB 46% 230GB of data - monthly full with weekly diff (retain 3 months)
/vol/data_drives_weekly/ 86GB 106GB 55% 42GB of data - weekly full with daily diff (retain 1 month)

df -h
Filesystem total used avail capacity
/vol/os_partitions/ 1024GB 795GB 228GB 78%
/vol/exchange_data/ 1024GB 254GB 769GB 25%
/vol/ad_grt_data/ 1024GB 49GB 974GB 5%
/vol/ccomsrv_data/ 1024GB 459GB 564GB 45%
/vol/data_drives_monthly/ 1024GB 517GB 506GB 51%
/vol/data_drives_weekly/ 1024GB 86GB 937GB 8%
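
One note on reading the df -sh output above: %saved works out to saved / (used + saved), i.e. the share of the logical data that dedupe removed. A quick check in Python against the first two volumes:

# The %saved column of df -sh is saved as a fraction of (used + saved).
def pct_saved(used_gb, saved_gb):
    return round(100 * saved_gb / (used_gb + saved_gb))

print(pct_saved(795, 1762))   # os_partitions  -> 69
print(pct_saved(254, 1828))   # exchange_data  -> 88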

romainvigouroux

Hi Steven,

Thanks for your feedback and the valuable information.

That sounds really good!

Cheers

Romain

nsl_com_sg

Dear all,

I am new to NetApp as well as to the forum, and English is not my native language, so please excuse any spelling errors.

We just bought a FAS2050 with a 1.95TB volume created as the B2D backup destination for our Backup Exec 12.5. A thin-provisioned 16TB LUN was created on this volume and mounted to the BE server via iSCSI. The LUN was created in the following way:

1) LUN Space Reservation value = Off

2) Volume Guarantee = volume

3) Snap Reserve = 0%

The backup file types are Exchange, MSSQL and files (MS Office, PDF and AutoCAD). We did an estimation with the current full backup size and daily file change, and it should have been able to achieve 30 days of retention. However, the volume is full within 14 days. I believe there may be ways to improve this with best practices or recommendations.
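
For reference, our estimate was roughly of the following shape (the numbers below are placeholders, not our actual sizes, and it assumes one full per day). If dedupe finds only a small fraction of duplicates, for example because the streams are already compressed, the retention collapses, which may be what is happening to us:

# Placeholder sizing sketch only -- none of these numbers are from our system.
# Assumes one full backup per day; each extra full should only cost the
# changed blocks once dedupe has removed the duplicates.
def retention_days(volume_gb, full_gb, per_extra_full_gb):
    return 1 + int((volume_gb - full_gb) / per_extra_full_gb)

vol, full = 1950, 400                          # usable volume and full size (GB)
print(retention_days(vol, full, full * 0.05))  # dedupe works (5% change)    -> 78
print(retention_days(vol, full, full * 0.90))  # dedupe finds little
                                               # (e.g. compressed streams)   -> 5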

1. Can someone who is in the same environment advise me on best practices or recommendations (volume configuration and Backup Exec)?

2. Before deduplication kicked in, Backup Exec software compression helped with some space savings on our storage. Should we turn it off now for the best dedupe results?

3. Is there any configuration we should take note of for the maximum backup-to-disk file size? E.g. is 200GB better than 50GB?

thanks
