ONTAP Discussions

Running compression on existing data for additional storage


I have an ONTAP 8.1.2 device running in 7-mode. I have 3.2 TB of CIFS data with deduplication savings of 720 GB. I suspect I will get more with compression. I have already enabled inline compression, but need to compress the existing data.


My understanding of the steps is as follows:


1) Delete all snapshots 

2) Turn off the snapshot schedules and nightly deduplication tasks

3) Run compression command: volume efficiency start -vserver vs1 -volume DataVol1 -scan-old-data true

4) Track progress of compression with this command: volume efficiency show -vserver vs1 -volume DataVol1 -fields progress

5) Once it is complete, I can re-enable the Snapshot and deduplication tasks


So, besides the risk of not being able to access old snapshot data, does this procedure look fine?



It appears the commands I got from the Data Compression and Storage Efficiency Guide won't work, as the volume command family is not available in my version of ONTAP.


Is there a way to do this with 8.1.2 7-mode?


I think the commands you are using are for clustered ONTAP (c-mode). Please check, because you specified 7-mode, and in 7-mode there is no concept of a Vserver. The equivalent 7-mode commands should work.
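For what it's worth, a sketch of the equivalent 7-mode sequence (volume name DataVol1 carried over from the original post; the exact scanner flags vary by release, so verify them with sis help before running):

sis config -C true -I true /vol/DataVol1
sis start -s /vol/DataVol1
sis status -l /vol/DataVol1

The sis start -s pass runs the scanner over existing data on the volume, and sis status -l reports its progress.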


TR-3958 has information on storage efficiency for 7-mode that may be of interest to you.  The documentation for how to enable/disable compression on 8.1.2 7-mode is here.





Really interested to hear how you got on with this.


We're just about to start something very similar, now on 8.2.4P6 7-mode, on existing deduped vols, although we've never used compression... in fact we believed it inadvisable on 8.1.

On FAS3240 HA pairs, two CIFS volumes on node02, low base CPU of c. 10%. 


So we're planning to turn on post-process compression, not inline, using the existing dedupe schedule... just on new data to begin with...


However, if we were to plan compression of each existing volume (3.5 TB), the prospect of losing 13 weekly, 72 daily and 6 hourly snapshots is not appealing on these user CIFS vols.

That's a 15% snapshot reserve, and it's not appealing to lose because the snaps get used for restores a lot.


Wondering what happens if you proceed to compress leaving all snaps in situ... or even reduce the set to, e.g., 8 weeklies?


So wondering if you deleted all your snaps and went ahead with compressing existing data...






I think the compression rate you get on a CIFS volume with fixed compression group sizes (not adaptive, as available in cDOT) and without compaction (again, cDOT only) is not great. I see it giving only 16% in my environment before compaction.  (I wish to move mine to adaptive but don't have much time for that...)


*>aggr show-efficiency -fields volume-compression-saved,volume-physical-used
volume-physical-used volume-compression-saved
-------------------- ------------------------
19.89TB              3.66TB
15.31TB              3.74TB





Per TR-3958, you can compress snapshot data as well. It just comes with a penalty: until the snapshot is deleted, you end up with two copies of each block, the compressed one and the uncompressed one. If you have spare space, you can do it (but my personal advice is that if you do have spare space, don't do it 😄 - just leave it uncompressed until it ends up on cDOT with the adaptive compression and compaction features).
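If I read TR-3958 right, the scanner exposes this trade-off as an option on the 7-mode command line; a hypothetical invocation (flag availability depends on your release, so check sis help start first):

sis start -s -b /vol/DataVol1

The -b option asks the scanner to also process blocks locked in Snapshot copies, which is what incurs the temporary double-copy penalty described above until those snapshots expire.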





Gidi Marcus (Linkedin) - Storage and Microsoft technologies consultant - Hydro IT LTD - UK


Hi GidonMarcus,


Yes, we saw something similar in a test recently... we created a new 200 GB vol and added 50 GB of data from our home drives, a mashup of small XP files, office stuff, PSTs, the usual guff.


Dedupe was 1% and compression 12%. 

We were disappointed... the figures for file services in the NetApp docs suggest a more dramatic increase. Our real volumes are showing 30% dedupe, so perhaps that's a good indicator of a better compression rate too. 

Probably worth implementing post-process compression anyway... took note of your comments on whether to do the whole volume if one needs to hang on to snaps. 


Appreciate your response. John