Excellent. This is exactly what I was looking for.
Just took a random user's data (10 GB) and tested it in the simulator to see the results.
It's in no way indicative of the real-world efficiency of the compression algorithm (whatever algorithm is used).
The data is a mixture of TIFF images, EndNote data, PDF files, medical simulator data, etc.
nsim80> df -Vhs /vol/Test
Filesystem  used    saved   %saved
/vol/Test/  5934MB  3761MB  39%
nsim80> df -VhS /vol/Test
Filesystem  used    total-saved  %total-saved  deduplicated  %deduplicated  compressed  %compressed
/vol/Test/  5934MB  4868MB       45%           3761MB        39%           1107MB       16%
And another example with data from two other users.
nsim80> df -VhS /vol/Test
Filesystem  used    total-saved  %total-saved  deduplicated  %deduplicated  compressed  %compressed
/vol/Test/  9838MB  3425MB       26%           288MB         3%            3136MB       24%
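For anyone wondering how the percentages relate to the MB figures: they appear to be computed as saved / (used + saved) for each savings category, i.e. the fraction of the *logical* data that was saved. This is an assumption inferred from the numbers above, not from official documentation, but it reproduces every percentage in both outputs:

```python
def pct_saved(used_mb: int, saved_mb: int) -> int:
    """Percent of logical space saved: saved / (used + saved), rounded.

    Assumed formula, reverse-engineered from the df -S output above.
    """
    return round(100 * saved_mb / (used_mb + saved_mb))

# First volume: 5934 MB used
print(pct_saved(5934, 4868))  # total-saved  -> 45
print(pct_saved(5934, 3761))  # deduplicated -> 39
print(pct_saved(5934, 1107))  # compressed   -> 16

# Second volume: 9838 MB used
print(pct_saved(9838, 3425))  # total-saved  -> 26
print(pct_saved(9838, 288))   # deduplicated -> 3
print(pct_saved(9838, 3136))  # compressed   -> 24
```

Note that the per-category percentages don't simply add up to the total, because each one is taken against a different logical size (used plus that category's savings alone).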