
Quotas and Dedupe

Not SAN specific, but probably useful.  We use quotas for finding disk hogs and have deduped our volumes.  The quotas aren't based on the deduped size, so how do I use my saved space?

Re: Quotas and Dedupe

Judson,

You can use your saved space in two ways: you can either shrink your volumes by the amount of space dedupe has saved, or you can create new volumes out of that saved space, perhaps without a space guarantee (again, based on estimates/projections of your dedupe savings).
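For what it's worth, on 7-mode the rough sequence for both options looks something like this (vol1, vol2, aggr1 and the sizes below are just placeholders - check the savings with df -s first and double-check the syntax on your ONTAP release):

filer> df -s vol1                       # shows how much dedupe has actually saved on the volume
filer> vol size vol1 -200g              # option 1: shrink the volume by roughly the saved amount
filer> vol create vol2 aggr1 500g       # option 2: spend the reclaimed aggregate space on a new volume
filer> vol options vol2 guarantee none  # optionally thin provisioned (no space guarantee)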

Note that if you are using quotas in a VMware/ESX environment, having quotas enabled means VirtualCenter (VC) only sees the undeduped storage capacity. With quotas disabled, VC sees the deduped volume amount. HTH.

Re: Quotas and Dedupe

What I had been told during my questions to NetApp support was that I couldn't oversubscribe my quotas.  To me this meant that I had to base my quotas on the undeduped data.  Was I wrong in my understanding?  What I'm attempting to ascertain is whether I can increase my quotas by the amount of saved space on those volumes which are running quotas.

Re: Quotas and Dedupe

That's right about not being able to oversubscribe on quotas (you can't set a quota to 110G if your vol only has 100G). So you have to find other data that you can put in those volumes - this is a way you can get more out of your deduped volumes.

You say "What I'm attempting to ascertain is if I can increase my quotas by the amount of saved space on those volumes which are running quotas."

I think you mean to ask if you can DEcrease your quotas? Well, the clients do not get direct benefit of deduped data - meaning, they can't fill up a volume with 100G of data, have it taken down to 50G just because there are common blocks of that data across the deduped volume, and then freely write 50G more data. Dedupe is more for the storage admin/company overall to gain from. You have other space on there you can now do something with.

I see what you're going to think next - dedupe and quotas are a tough mix. At this time, I agree. Have you tried soft quotas? I haven't.
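If you do want to experiment with soft limits, in 7-mode they go in the extra columns of /etc/quotas - something along these lines (the paths and sizes are made up; soft limits and thresholds only log a warning rather than blocking writes):

#quota target        type   disk  files  thold  sdisk  sfile
/vol/vol1/jobs       tree   50G   -      45G    -      -
/vol/vol1/userdirs   tree   -     -      -      45G    -

After editing the file, quota resize vol1 picks up changed limits on existing entries (brand-new entries need quota off/quota on).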

Another thing to try is disabling the volume guarantee and making the volume as big as you would need it based on your quota total amount. It adds a bit of complication, and you'd want to manage your storage by reporting on other numbers (say, a quota report).
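A minimal sketch of that approach, assuming 7-mode and made-up names/sizes:

filer> vol options vol1 guarantee none   # volume no longer reserves its full size in the aggregate
filer> vol size vol1 2500g               # size it to cover the sum of your quotas
filer> df -h vol1                        # then manage to actual usage...
filer> quota report                      # ...and to the quota report rather than the volume size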

Would be interesting to see what you come up with. Me, I turned quotas off on the volumes I had to work with and manage them in other ways (reporting on volume usage). You may not have that luxury.

Re: Quotas and Dedupe

I need the hard quotas so one user can't use all of the storage.  Our impression of dedupe was that it would help us reclaim space being used by duplicate blocks, which we could possibly return to the users as perceived added space by raising the quotas or some other means.  When you say, "...the clients do not get direct benefit of deduped data," I start thinking: what is the benefit of dedupe?  I understand we can reduce the size of admin-ish things, but on the other hand we constantly need more storage, so we are trying to find the most cost-effective way to do that, as I'm sure everyone else is.  I would hope that one of the Experts would chime in with some input.

Re: Quotas and Dedupe

As stated in the dedupe deployment and implementation guide, TR-3505,

When deduplication is used in an environment where quotas are used, the quotas cannot be oversubscribed on a volume. For example, a user with a quota limit of 1TB can’t store more than 1TB of data in a deduplicated volume even if this data fits into less than 1TB of physical space on the storage system. Storage administrators can use the saved space as desired.

This means that if you want to use the freed space for storing data, you will need to increase the quota.  Hopefully someone like Sajan can chime in with best practices that should be taken into consideration when using quotas.
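To make the "increase the quota" part concrete, that just means bumping the tree quota line in /etc/quotas and re-applying it, roughly like this (the size is an example only, and per the other posts the volume itself has to be big enough to cover the new totals):

/vol/vol1/jobs   tree   1200G

filer> quota resize vol1    # re-reads the changed limit without a full quota re-initialization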

Re: Quotas and Dedupe

As the documentation says, one can't oversubscribe the quotas.  Is this set in stone, or is it just inadvisable?  If we could oversubscribe them by a percentage (with some wiggle room, I would hope), we could get some use out of the saved space.  For example, say the qtrees /vol/jobs and /vol/userdirs were on vol1.  Vol1 is 2TB and the tree quotas for both qtrees add up to 2TB.  Vol1 is 96% full.  Now we run dedupe and get 20% savings on /vol/jobs and 15% on /vol/userdirs.  After the snapshots and SnapVaults clear and show the savings, can we raise the quota on /vol/jobs by 10% and on /vol/userdirs by 8%?  My thinking is that if the quota can be oversubscribed, we could gain some user-usable space by doing so.  Then as more data gets put on, it will be deduped as well, and being similar data it should be reduced by approximately the same amount as the previous savings.

Is this possible, or are we just stuck being low on space?  Is there a software limitation keeping the quota from exceeding the volume size, or will the OS allow a forced oversubscription (it's not really oversubscribed if it doesn't run out of space)?  The only problem I would see with this is if a large restore were needed and the deduped data were taking up too much of the volume to restore other data (I'm assuming a tape backup would be undeduped).

Re: Quotas and Dedupe

As the documentation says, you can't do it. ;-)

Say you have

Volume: /vol/vol1               100G

Qtree: /vol/vol1/jobs           

Qtree: /vol/vol1/userdirs

The sum of the quotas on /vol/vol1/jobs and /vol/vol1/userdirs has to come to 100G or less.

If you attempt to make

/vol/vol1/jobs 50G

/vol/vol1/userdirs 51G

You will get an error message telling you no-no, and it will disable quotas on that volume until you change your quota.
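In /etc/quotas terms, per the behavior described above, this pair of tree quota entries would be rejected on a 100G vol1 (50G + 51G > 100G):

/vol/vol1/jobs       tree   50G
/vol/vol1/userdirs   tree   51G

while this pair would be fine, since it sums to exactly 100G:

/vol/vol1/jobs       tree   50G
/vol/vol1/userdirs   tree   50G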

Re: Quotas and Dedupe

Also, dedupe works at the volume level, so it does not really know how much savings has been achieved at each qtree level; hence we report savings at the volume level.

Re: Quotas and Dedupe

I believe the main way to handle this would be oversubscription at the volume level (i.e., you can now grow the volume larger than it could have been previously, or create other volumes using the deduped space).
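If it helps, that volume-level oversubscription looks roughly like this on 7-mode (aggr1 is assumed here to be 1T, and the names/sizes are placeholders):

filer> vol options vol1 guarantee none
filer> vol options vol2 guarantee none
filer> vol size vol1 800g               # with guarantees off, the volume sizes can
filer> vol size vol2 600g               # add up to more than the aggregate holds
filer> df -A aggr1                      # so watch real aggregate consumption here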