note1 : -scan-old-data is needed to get the full saving rate; -b true compresses data locked in snapshots
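For reference, the start command looks roughly like this (a sketch of ONTAP clustershell syntax; the SVM and volume names are placeholders, and the exact flags should be verified against your ONTAP release):

```
cluster::> volume efficiency start -vserver svm_dst -volume vol_dst -scan-old-data true -b true
```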
note2 : (Limitation) the number of concurrent efficiency jobs is 8 per node, so it took me several weeks to run the "-scan-old-data" efficiency jobs for all destination volumes.
PS : in my experience, scan-old-data processes about 2TB~3TB of data every 24hr. That means if the volume size is 10TB, a volume efficiency start with scan-old-data may take about 5 days.
note3 : For an XDP destination volume with volume efficiency enabled, the efficiency job cannot be scheduled; it starts automatically after every snapmirror update.
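As a sketch (SVM and volume names are placeholders), you can trigger an update and then watch the efficiency job that follows it:

```
cluster::> snapmirror update -destination-path svm_dst:vol_dst
cluster::> volume efficiency show -vserver svm_dst -volume vol_dst -fields state,progress
```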
note4 : While the scan-old-data efficiency job is running, every snapshot created is huge and shows no saving, so please keep monitoring the destination volume's space usage during scan-old-data. You need to wait for those "giant snapshots" to be rotated out before the space they hold is released.
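Commands along these lines help with that monitoring (placeholder names; the exact field names may vary by ONTAP release):

```
cluster::> volume show -vserver svm_dst -volume vol_dst -fields used,percent-used,size-used-by-snapshots
cluster::> volume snapshot show -vserver svm_dst -volume vol_dst -fields size
```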
Then, a month later, I was really amazed.
The total size of the source volumes is around 365388GB, but the total physical space used in the destination cluster is only 301660GB (including the 1hr RPO copy and 30 daily snapshot copies).
I got a great saving rate, and it is really amazing that the destination volumes with 30 daily snapshots take less space than the source volumes.
Using XDP snapmirror can meet DR/backup requirements and works much better than "synthetic backup" with any backup software.
Thanks for the detailed post about your setup; those are great savings on your DR site! And yes, I can understand that space usage can become very confusing, especially when your DR destination with more snapshots takes less space than your production with fewer snapshots. In general, only a DP mirror copies an exact block-based replica to the destination, so it is hard to compare a source with an XDP destination: they are not related at the block level, only at the logical data level.
> You have 2 different keep patterns, which can significantly alter the data growth rate:
- Source: 4x 12h snapshots + daily snapmirror
- Destination: 30x 24h snapshots
> Your destination is using compression on top of deduplication, which can significantly boost space savings depending on the source data.
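For example, the 30-day destination retention above could be expressed as a SnapMirror policy rule along these lines (the policy name is a placeholder; verify the syntax against your release):

```
cluster::> snapmirror policy add-rule -vserver svm_dst -policy MyXDPPolicy -snapmirror-label daily -keep 30
```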
Please let us know if you have more questions on this.
Detailed information about dedupe and compression can be found in our technical report.