ONTAP Discussions

What technologies have you tried to optimize capacity in your environments?

wian

Wanted to pose a question to our community members.

What technologies (dedupe, compression, etc.) are you using to optimize capacity in your current environment, and what results are you seeing?

The floor is open: reply to this thread and let us know!

3 REPLIES

chriskranz

We tend to use a combination of these on our installs now. Some are more difficult to track than others, but I'll list them out...

  • NetApp deduplication of relevant data. We're still a little scared of applying it to Tier 1 databases, because the LUNs need to be thin provisioned to realise the space savings, but it's pretty much a default rule for many other types of data (VMware, CIFS, etc.); see the example commands after this list.
  • VMware thin clones in our View environment. I know we have the RCU for this as well, but the View integration of thin clones gives a greater management benefit: it's a single pane of glass for admins to manage the infrastructure. An RCU plugin for VMware View would be very cool!
  • 0% fractional reservation. I think you'd be surprised if I wasn't doing this! With auto-grow and snapshot autodelete, it's a bit of a no-brainer. I set up Operations Manager to give me some alerting, and this is an awesome feature set that pretty much administers itself.
  • SnapMirror. It's odd thinking of this as an optimisation technology, but it really is. Replicating changed blocks only, replicating deduplicated blocks only, inline replication data compression. This stuff I just take for granted sometimes!
  • Optimised VMware templates. "Next -> Next -> Finish" is not an adequate installation process anymore. I need to ensure that my virtual machines are optimised for a specific workload. In VMware I'll spend twice as long optimising a template as I do actually installing Windows. This is important because it gives me the highest consolidation ratios and the best application optimisation. A virtual machine is an appliance with a very specific job, so it needs to be built as optimised for that environment as possible.
  • Data tiering, archiving, and file blocking. We actually do this less often, but knowing your data is quite important, and optimising its placement on the storage system matters not only for getting the best performance out of the system, but also for getting the best ROI.
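
For anyone who wants to try the dedupe and 0% fractional reservation setup above, here's a rough sketch of the 7-mode commands we tend to use. The volume names and sizes are just examples, and syntax can vary between Data ONTAP versions, so check the docs for yours first:

  # Enable dedupe on the volume and scan the data that's already there
  sis on /vol/vmware_vol
  sis start -s /vol/vmware_vol
  df -s vmware_vol

  # 0% fractional reserve, trying auto-grow before snapshot autodelete
  vol options vmware_vol fractional_reserve 0
  vol options vmware_vol try_first volume_grow
  vol autosize vmware_vol -m 1200g -i 50g on
  snap autodelete vmware_vol on

"df -s" is the quick way to see what dedupe has actually saved on a volume.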

I have my cautions and reservations around compression. Deduplication simply changes the pointers, so there is no additional processing required to support it, and in fact it has the side effect of holding data blocks in cache for longer, as they are referenced by more pointers! Compression, however, requires an I/O overhead to decompress the data, so I would be very nervous about using it on any sort of production data.

There are many benefits to these optimisations. Some are quite obvious: better ROI, better performance, better utilisation and consolidation ratios. Others are a little more hidden: improved DR and failover RPO/RTO, improved IT agility (I can deploy a fully optimised VM on optimised storage in minutes rather than days), improved data flexibility (I can clone my data in 20 different ways for various uses), and quicker time-to-market for business applications and initiatives. Most importantly, as an admin I get peace of mind that not only is my data thoroughly optimised, it's also better protected than it was previously! Win:win in my book.

Those are my thoughts on, and experiences with, the technologies we currently use for our clients.

scottgelb

Almost every customer of ours has dedupe running, and the storage efficiency reports all tell great stories about space savings. The VMware story with dedupe is a huge differentiator. We also have some customers getting great dedupe rates on their databases.

FlexClone is one of the most significant capacity optimization tools, and the efficiency reports show many customers getting 30x their actual storage with this functionality. It is a huge differentiator along with dedupe.
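
For anyone who hasn't tried it, a typical FlexClone looks something like this in 7-mode (the names here are made up; the clone shares its blocks with the parent snapshot, so it takes almost no space until data starts to change):

  # Snapshot the parent volume and clone from it without a space guarantee
  snap create db_vol clone_base
  vol clone create db_vol_clone -s none -b db_vol clone_base

  # Splitting the clone later copies the shared blocks and consumes real space
  vol clone split start db_vol_clone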

There is some interest in compression along with dedupe for CIFS data, but one of the things holding customers back is that it only works on 64-bit aggregates, so unless they migrate or start fresh on a new 64-bit aggregate, it isn't an option... but we expect more adoption for home directories.

reinoud7

Hi Ian,

I think that, like most customers, we have tried almost everything except compression. We'll be testing that over the coming weeks and months.

The most important one (and everybody always forgets this) is snapshots. We take an enormous number of snapshots on almost all our volumes, and we keep them for about 4 months in most cases (a combination of hourlies, dailies and weeklies). This is the basis for our backup.
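
For illustration, a schedule along those lines looks roughly like this on a 7-mode controller (the counts below are just one way to get about 4 months of weeklies plus dailies and hourlies; adjust them to your own retention needs):

  # Keep 16 weeklies (~4 months), 31 dailies and 6 hourlies taken at 8:00, 12:00, 16:00 and 20:00
  snap sched clin_vol 16 31 6@8,12,16,20
  snap reserve clin_vol 20
  snap list clin_vol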

From the same family: FlexClones. We have at least 12 copies of our complete clinical database (more than 2 TB) for several purposes, but they cost almost no extra disk space.

Thin provisioning is also one that is very easy to forget, but we use it almost everywhere. We mainly monitor the free space at the aggregate level and pay less attention to the volume level. This gives us not only capacity savings but also savings in management effort.
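
In case it helps anyone, thin provisioning a volume and a LUN looks roughly like this in 7-mode (names and sizes are made up):

  # No space guarantee for the volume in the aggregate
  vol options data_vol guarantee none

  # Create the LUN without space reservation inside the volume
  lun create -s 500g -t windows -o noreserve /vol/data_vol/lun0

  # Then watch free space at the aggregate level rather than per volume
  df -A aggr1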

Dedupe is used on a lot of volumes in our environment. The success rate depends on the data, but for VMware and VDI it works very well and we see figures between 50 and 75%. We also got very nice results for office data, dumps, invoices and some medical data.

But we have a lot of pictures, and dedupe doesn't work there; I'm sure compression will not work for this kind of data either.

Best Regards,

Reinoud
