ONTAP Discussions
Wanted to pose a question to our community members.
What technologies, such as dedupe and compression, are you using to optimize capacity in your current environment, and what results are you seeing?
Open to the floor, reply to this thread and let us know!
We tend to use a combination of these on our installs now. Some are more difficult to track than others, but I'll list them out...
I have my cautions and reservations around compression. Deduplication simply changes pointers, so there is no additional processing required to support it; in fact, it has the useful side effect of holding data blocks in cache for longer, because they are referenced by more pointers! Compression, however, adds an I/O overhead to decompress the data on every read, so I would be very nervous about using it on any sort of production data.
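For anyone who wants to try dedupe first, here's roughly what we run, a minimal sketch from memory (7-mode syntax; /vol/vol1 is just a placeholder volume name):

```
sis on /vol/vol1         # enable deduplication on the volume
sis start -s /vol/vol1   # -s scans the existing data, not just new writes
sis status /vol/vol1     # check progress of the scan
```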
There are many benefits to optimisation. Some are quite obvious: better ROI, better performance, better utilisation and consolidation ratios. But some are a little more hidden: improved DR and failover RPO/RTO, improved IT agility (I can deploy a fully optimised VM on optimised storage in minutes rather than days), improved data flexibility (I can clone my data in 20 different ways for various uses), and quicker time-to-market for business applications and initiatives. Most importantly, as an admin I get peace of mind that not only is my data thoroughly optimised, it's also better protected than it was previously! Win:win in my book.
Those are my thoughts and experiences with the technologies we currently use for our clients.
Almost every customer of ours has dedupe running, and the storage efficiency reports all tell a great story on space savings. The VMware story with dedupe is a huge differentiator. We also have some customers running great dedupe rates on their databases.
FlexClone is one of the most significant capacity optimization tools, and the efficiency reports show many customers getting 30x their actual storage with this functionality. It is a huge differentiator along with dedupe.
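For reference, creating a clone from a snapshot is a one-liner; a sketch with made-up names (7-mode syntax; dev_clone, prod_vol and nightly.0 are placeholders):

```
vol clone create dev_clone -s none -b prod_vol nightly.0
```

The -s none option leaves the clone unreserved, so it shares all its blocks with the parent snapshot and only consumes space as the two diverge.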
Compression along with dedupe for CIFS data is attracting some interest, but one thing holding customers back is that it only works on 64-bit aggregates, so unless they migrate or start fresh on a 64-bit aggregate, it isn't an option...but we expect more adoption for home directories.
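If memory serves, on 8.x 7-mode compression is enabled per volume on top of dedupe, something like the sketch below (/vol/homedirs is a placeholder; do check the flags against your release):

```
sis on /vol/homedirs                        # compression requires dedupe enabled on the volume
sis config -C true -I true /vol/homedirs    # -C post-process compression, -I inline compression
```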
Hi Ian,
I think, like most customers, we have tried almost everything except compression. We'll test that over the coming weeks and months.
The most important one (and everybody always forgets this) is snapshots. We take an enormous number of snapshots on almost all our volumes. We usually keep them for 4 months (a combination of hourlies, dailies and weeklies). This is the basis for our backup.
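Our schedules look something like the sketch below (7-mode syntax; the volume name and counts are illustrative, and keeping 16 weeklies is roughly how you would approximate 4 months with snap sched alone):

```
snap sched clinical 16 6 8@8,12,16,20   # keep 16 weeklies, 6 dailies, and 8 hourlies taken at 8:00, 12:00, 16:00 and 20:00
```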
From the same family: FlexClones. We have at least 12 copies of our complete clinical database (more than 2 TB) for several purposes, but they cost almost no extra disk space.
Thin provisioning is also one that is very easy to forget, but we use it almost everywhere. We only monitor the free space in the aggregate and pay less attention to the volume level. This gives us not only capacity savings but also savings in management effort.
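In practice that comes down to two commands, sketched here with a made-up volume and aggregate name:

```
vol options db_vol guarantee none   # thin provision: reserve no space in the aggregate
df -A aggr1                         # then watch free space at the aggregate level
```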
Dedupe is used on a lot of volumes in our environment. The success rate depends on the data, but for VMware and VDI it works very well; we see figures between 50 and 75%. We also got very nice results for office data, dumps, invoices and some medical data.
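The easiest way to check your own rates per volume is df -s (the volume name is a placeholder; from memory it reports used, saved and %saved columns):

```
df -s /vol/vmware_ds   # shows space used, space saved and the percentage saved for the volume
```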
But we also have a lot of pictures, and there dedupe doesn't work; I'm sure compression won't help with this kind of data either, since image formats are already compressed.
Best Regards,
Reinoud