
What do you think of getting a 50% Guarantee?

shea

In the October issue of Tech OnTap, I wrote a piece about the NetApp guarantee, the program under which we guarantee that you will use 50% less storage with NetApp than with a baseline of traditional storage.

You can check out the article here: http://www.netapp.com/us/communities/tech-ontap/tot-guarantee.html

What do you think of our guarantee program?


konkle

Mike

I'm not sure who you want answering this question, but it's a great guarantee. The problem with just talking about results from case studies is that it's always a YMMV scenario. This guarantee program spells out the requirements and delivers results - either in the form of product coupons or new customer satisfaction. Paint me red, bust down a wall and say I'm the Kool-Aid Man, but I like it - it's concrete, not fuzzy or slippery.

JK

shea

Duuude - looks just like you!! I did not know you were the Kool-Aid guy on the side!

BrendonHiggins

Not sure what to make of it. We ran A-SIS on a CIFS share that was full of Office documents (23,000,000+ of them) and it only de-duped by 16% (no snapshots). We are going live with 50 VMware machines in the next 3 months, so we will see what happens. As an end-user / engineer, it does sound a bit like marketing spin, however.

Given how fast we burn through new storage (minus 10% for WAFL, minus 'real size', minus DP, minus fractional reserve, and aggregate performance falls off when greater than 85% full, etc.), I would rather the guarantee were "low-cost disks and shelves" or, better yet, buy one get one free. (Call it a performance pack, as we know how WAFL likes more spindles....)
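To put rough, illustrative numbers on that (my sums, with default reserves - yours may differ): buy 1 TB and the 10% WAFL reserve takes it to ~900 GB; a default 20% volume snap reserve takes that to ~720 GB; and keeping the aggregate under the ~85% mark for performance leaves roughly 612 GB of comfortable working space - before fractional reserve on any LUNs takes its cut.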

shea

Hi -

Thanks for the straightforward comments.

For file shares, the dedupe percentage you get always 'depends'. It all comes down to unique vs. identical blocks in a FlexVol: A-SIS can only reclaim space where 4 KB blocks are identical. It sounds like you've done a good job of asking your user base not to continually save copies of files all over. Congratulations - if you can bottle that secret, we can go into business together and make loads of money! I will note that 16% does seem a bit on the low side of what is possible, though.
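If you want to see where any volume stands today, on a 7-mode system something like the following will report the current savings (the volume name here is just an example):

    df -s /vol/cifs_share       # shows used space, space saved by dedupe, and %saved
    sis status /vol/cifs_share  # confirms dedupe is enabled and shows last-run state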

For your VMware environment, expect to see excellent dedupe ratios - with one reasonable caveat: if you follow our best practices, your savings will be excellent, but leave off one or two best practices and your mileage will drop accordingly. Please post back when you implement it and let us know what dedupe ratios you see. I think you will be very pleasantly surprised!
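When you do, turning it on is simple - again a sketch with an example volume name, assuming a 7-mode system. The initial scan of existing data is CPU-intensive, so kick it off during a quiet window:

    sis on /vol/vmware_ds          # enable dedupe on the datastore volume
    sis start -s /vol/vmware_ds    # -s scans the data already on the volume
    df -s /vol/vmware_ds           # check the savings once the scan completes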

To your point on marketing - is it in fact Marketing? Of course!! Our Marketing staff worked hard on the promise - but note this - we deliver on it too. But hey, the proof is in the pudding. I am not a Marketing guy; I have a long (too long) geek background. Marketing never bothered me, though. I look around my house and I cannot find a single thing I own that some good Marketing guy did not first inform me how life would be better with it than without it! That includes my new 62-inch HDTV and the keg of Boddingtons I keep around.

Cheers Mate! Looking forward to hearing back on your Virtualization project results.

mike

hdiseguros

Hi Shea.

We are about to implement deduplication in our environment. Your article was a guide for my study of this technology and how to implement it. I think the 50% guarantee is reachable, and I'm very happy to see that NetApp is not only interested in selling new hardware but in optimizing the hardware we already have. Another thing that makes me glad we chose NetApp as our storage provider is the tight relationship with partner products; the best practice guides for VMware, Microsoft Exchange and others give us a real helping hand when implementing new systems.

One of my questions about this guarantee, though, is performance penalties. All the white papers and documents listed in your article give us an understanding of the technology and its pros and cons. I understand the performance issues with dedupe - document TR-3505 explains them very well - but it doesn't give a way to ease or minimize the impact, which leads me to look for a solution, since one of our aggregates has SATA-based disks and our system runs at 30-50% CPU during business hours, with 60-70% at peak times.

Can the performance penalties be minimized with the reallocate command? What I understood is that the performance issues come from the CPU used to analyze the 4 KB blocks of data, the concurrent I/O stream of change-log and metadata writes, and volume fragmentation, since dedupe frees the blocks that held duplicated data.
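For example, what I am considering - 7-mode syntax, and the volume name is just an example - is confining the dedupe scan to off-peak hours and measuring the layout before deciding whether reallocate is even needed:

    sis config -s sun-sat@23 /vol/satavol   # run the dedupe pass nightly at 23:00, outside business hours
    reallocate measure /vol/satavol         # report how fragmented the volume layout currently is

Whether running reallocate on a deduplicated volume actually helps (or is even advisable) is exactly what I would like to confirm.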

Or is there another way to ease the impact of dedupe?
