Data Backup and Recovery

Using XDP SnapMirror to replace a tape backup solution in an HFC environment

River

Current environment: HFC (high-file-count NAS environment), 10-node cluster running ONTAP 9.1, NDMP, dozens of LTO tape drives.

What we did before:

1. Hourly SnapMirror update from the production cluster to the DR cluster (DP SnapMirror).

2. Monthly full NDMP backup of every volume; incremental NDMP backup every 3~5 days.

3. Snapshot copy taken on each source volume every 12 hours, keeping 4 copies.

 

What we do now:

1. Keep a 1-hour RPO (hourly XDP SnapMirror updates).

2. Keep 30 daily Snapshot copies at the DR cluster as the daily backup.

3. Snapshot copy taken on each source volume every 12 hours, keeping 4 copies.

4. Only full NDMP backups to tape drives.

 

My procedure is as follows:

1. Create a new SnapMirror policy named "Vault_Daily" of type "mirror-vault".
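For example, something like this (a minimal sketch; svm1 matches the vserver used in the add-rule command below):

snapmirror policy create -vserver svm1 -policy Vault_Daily -type mirror-vault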

 

2. Add a rule to policy "Vault_Daily" so the destination takes a local Snapshot copy daily:

snapmirror policy add-rule -vserver svm1 -policy Vault_Daily -schedule daily  -snapmirror-label N_Daily -keep 30

[screenshot: xdp1.jpg]

PS: This rule creates the daily Snapshot copies ONLY locally at the destination cluster (they are not updated and transferred from the source).

 

Then we can see the destination volume has daily Snapshot copies named N_Daily...

[screenshot: xdp2.jpg]
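You can also list them from the CLI with something like this (volume name hypothetical):

volume snapshot show -vserver svm1 -volume volumeXX -snapshot N_Daily*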

 

3. For new volumes, create an XDP SnapMirror relationship using the policy "Vault_Daily", then initialize it.
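A minimal sketch (the source SVM svm1_src and the hourly schedule are assumptions; adjust the paths to your environment):

snapmirror create -source-path svm1_src:volumeXX -destination-path svm1:volumeXX -type XDP -policy Vault_Daily -schedule hourly

snapmirror initialize -destination-path svm1:volumeXX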

For existing DP-relationship volumes, you can follow the link below to convert the DP SnapMirror relationship to XDP.

http://docs.netapp.com/ontap-9/index.jsp?topic=%2Fcom.netapp.doc.pow-dap%2FGUID-495CB590-1566-4BE4-B212-B4B9A0224AC9.html
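The gist of the conversion (a sketch using the same hypothetical paths as above; see the linked procedure for the authoritative steps) is to break and delete the DP relationship while keeping the common Snapshot copies, then recreate it as XDP and resync:

snapmirror quiesce -destination-path svm1:volumeXX

snapmirror break -destination-path svm1:volumeXX

snapmirror delete -destination-path svm1:volumeXX

snapmirror release -destination-path svm1:volumeXX -relationship-info-only true   (run on the source cluster)

snapmirror create -source-path svm1_src:volumeXX -destination-path svm1:volumeXX -type XDP -policy Vault_Daily -schedule hourly

snapmirror resync -destination-path svm1:volumeXX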

PS: Make sure every destination volume is thin provisioned and Snapshot autodelete is disabled.
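For example (hypothetical names again):

volume modify -vserver svm1 -volume volumeXX -space-guarantee none

volume snapshot autodelete modify -vserver svm1 -volume volumeXX -enabled false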

 

4. Enable volume efficiency on the destination volumes for space savings, so more Snapshot copies can be kept:

volume efficiency modify -vserver svm1 -volume volumeXX -compression true -inline-compression true -data-compaction true

 

5. Start a volume efficiency scan:

volume efficiency start -vserver svm1 -volume volumeXX -dedupe true -compression true -compaction true -scan-old-data true -b true

 

note1: -scan-old-data is needed to get the full saving rate; -b true compresses data that is already held in Snapshot copies.

note2: (Limitation) The maximum number of concurrent efficiency jobs is 8 per node, so running the "-scan-old-data" efficiency jobs for all destination volumes took me several weeks.

PS: In my experience, scan-old-data processes about 2 TB~3 TB of data every 24 hours. That means that for a 10 TB volume, volume efficiency start with scan-old-data may take about 5 days.

note3: On an XDP destination volume with volume efficiency enabled, the efficiency job cannot be scheduled; it starts automatically after every SnapMirror update.

note4: While a scan-old-data efficiency job is running, every Snapshot copy created is huge and shows no savings, so keep monitoring the destination volume's space usage during scan-old-data. You need to wait for those "giant snapshots" to be rotated out before the space they hold is released.
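To keep an eye on this, something like the following works (hypothetical names; the Snapshot listing shows the size of each copy):

volume show -vserver svm1 -volume volumeXX -fields used, percent-used

volume snapshot show -vserver svm1 -volume volumeXX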

 

 

Then, a month later, I was really amazed.

The total size of the source volumes is around 365,388 GB, but the total physical space used on the destination cluster is only 301,660 GB (including the 1-hour RPO copies and the 30 daily Snapshot copies), roughly 17% less.

I got a great saving rate, and it is really amazing that the destination volumes, with 30 daily Snapshot copies, take less space than the source volumes.


[screenshot: xdp3.jpg]
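If you want to check the per-volume savings from the CLI, fields like these should show them (a sketch; hypothetical names):

volume show -vserver svm1 -volume volumeXX -fields sis-space-saved, sis-space-saved-percent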

 

Using XDP SnapMirror can meet both the DR and the backup requirements, and it works much better than "synthetic backup" with any backup software.

1 REPLY

DataOntaper

Hi River,

 

Thanks for the detailed post about your setup; those are great savings on your DR site! And yes, I can understand that space usage can become very confusing, especially when your DR destination with more Snapshot copies takes less space than your production with fewer. In general, only a DP mirror copies an exact block-based replica to the destination; it is therefore hard to compare a source with an XDP destination, because they are not related at the block level, only in their logical data.

 

Please consider:

> You have two different keep patterns, which can significantly alter the data growth rate:

- Source: 4x 12-hour Snapshot copies + daily SnapMirror copies

- Destination: 30x 24-hour Snapshot copies

> On top of deduplication, your destination also uses compression, which can significantly boost the space savings depending on the source data.

 

Please let us know if you have more questions on this.

 

Detailed information about deduplication and compression can be found in our technical report:

TR-4476: NetApp Data Compression, Deduplication, and Data Compaction
