2009-02-12 11:40 AM
I posed this question at the user's group meeting yesterday in Reston.
- Solaris host runs a 2 TB database
- Project wants a copy of the 2 TB database for development / testing purposes
- 7 days of snapshots are kept on the database, 1 snapshot / day
- SnapVault copy created every 24 hours
- volume guarantee set to volume
- 100% LUN space reservations
- volume includes space for snapshots (over 4 TB total volume space)
Question: how can I use FlexClone, autogrow, volume space guarantees, snap reserve, etc., so that I can maintain operations on the production database while offering a writable copy to another development server for dev / test purposes? KEY OBJECTIVES - I must create and configure the writable FlexVol so that writes made by the dev / test server cannot exceed X amount (e.g., 200 GB). Production snapshots and SnapVault must go unaltered so the SLA is still supported: 7-day retention and nightly SnapVault.
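In other words, I'm imagining something along these lines (purely a sketch - the volume name, snapshot name, and sizes are hypothetical placeholders, and the autosize maximum would presumably have to be the clone's starting size plus the 200 GB write allowance):
# clone the production volume from an existing snapshot
filer> vol clone create devclone -b proddb_vol nightly.0
# allow the clone to auto grow, but cap total size at starting size + 200 GB
filer> vol autosize devclone -m 4400g -i 50g on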
Jesse Thaloor - you were going to try a test and let us know?
2009-02-12 12:28 PM
I did some testing last night using an auto-grow configuration on the flex-cloned volume, and there is one thing I have to verify before putting out a definitive answer. Auto-grow appears to work, but it has raised some questions about how aggregate space is utilized by clones with auto-grow enabled and fractional reserve set to 0.
Stay tuned and I will have a full description in a day or so.
2009-02-13 06:01 AM
So the answer is YES; the functionality works as shown below:
Data ONTAP 7.3.1 (pretty sure this will work the same in any release at or after 7.2.4)
Fractional Reserve to ZERO (0)
Auto Grow set on the cloned volume
No other lun based options have been set (so they are all defaults)
(In the logs you will see two filers, DR2 and Lincoln. The procedure was tested on two filers, so the logs may come from two controllers, but the results are the same.)
See the whole work through below:
# Create a new volume of size 1GB
DR2> vol create testvol aggr1 1g
# disable automatic snapshots
DR2> vol options testvol nosnap on
# set fractional reserve to 0
DR2> vol options testvol fractional_reserve 0
# check vol options
DR2> vol options testvol
nosnap=on, nosnapdir=off, minra=off, no_atime_update=off, nvfail=off,
ignore_inconsistent=off, snapmirrored=off, create_ucode=off,
convert_ucode=off, maxdirsize=31457, schedsnapname=ordinal,
fs_size_fixed=off, compression=off, guarantee=volume, svo_enable=off,
svo_checksum=off, svo_allow_rman=off, svo_reject_errors=off,
no_i2p=off, fractional_reserve=0, extent=off, try_first=volume_grow,
# set snap reserve to 0
DR2> snap reserve testvol 0
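The log above stops after the base-volume setup. For completeness, the remaining steps would presumably look something like the following sketch (not taken from the original log - the LUN path, OS type, snapshot name, and clone name are illustrative):
# create a lun in the volume
DR2> lun create -s 500m -t solaris /vol/testvol/testlun
# take a snapshot to back the clone
DR2> snap create testvol clone_base
# create the flexclone from that snapshot
DR2> vol clone create testclone -b testvol clone_base
# enable auto grow on the cloned volume
DR2> vol autosize testclone on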
2009-02-17 07:18 PM
Hi Doug and Jesse
Is it still best practice to set fractional space reserve to 100%, or is it better practice to set it to 0% and use vol auto grow and automatic snapshot delete to guarantee space for writes to a LUN? 100% fractional reserve is tough to explain to clients, since they have trouble understanding why they have to set aside so much "extra" capacity when dealing with LUNs.
I understand that you can use a lower percentage, but you had better be sure what the change rate is or you'll run into problems. What are your thoughts on setting fractional space reserve to 0% and using the tools described above? Will automatically growing the volume and/or automatically deleting snapshots be enough to guarantee space for overwrites?
For example, if I created a LUN of 100 GB, wrote 100 GB of data to the LUN, and took a snapshot, I'd be fine for that initial snapshot (no extra space in the volume used). If every single block within that LUN then changed, I'd need available space to write those changed blocks, since the original blocks are locked by the snapshot. Obviously, that's where fractional space reserve comes into play - it guards against the worst-case scenario of every block in the LUN changing.
I realize this is not likely the case with most apps and some lower % value could be chosen.
A colleague of mine has told me not to worry about setting fractional reserve to 100%, or indeed to any number other than 0%, because of the ability to auto grow the volume and/or delete snapshots. Is this a safe way to architect volume sizing?
2009-02-17 08:03 PM
See Block Management with Data ONTAP 7G: FlexVol, FlexClone, and Space Guarantees for the latest on block management. To summarize, setting fractional reserve to 100% for the entire lifecycle of a LUN is no longer necessary. There are risks associated with setting it to less than 100%, so as long as you know the risks, you can use autogrow/autodelete to effectively hedge them. In addition, parts of the lifecycle of a LUN may need 100% fractional reserve (for example, when a new application is deployed with no data on the change rate), but as the application matures, the reserve can be tuned down to the absolute minimum if necessary.
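As a rough sketch of that hedge (the volume name, maximum size, and increment are illustrative only, not a sizing recommendation):
# zero fractional reserve; try growing the volume before touching snapshots
filer> vol options dbvol fractional_reserve 0
filer> vol options dbvol try_first volume_grow
# allow the volume to grow automatically, up to a ceiling
filer> vol autosize dbvol -m 3000g -i 50g on
# as a last resort, delete the oldest snapshots when the volume nears full
filer> snap autodelete dbvol on
filer> snap autodelete dbvol trigger volume
filer> snap autodelete dbvol delete_order oldest_first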
2009-02-17 08:44 PM
Hi Jesse. Thanks for the TR. I understand that fractional reserve no longer has to be set to 100%. Just to clarify, my question is: is it now a better practice to set fractional reserve to 0% and use the auto grow and/or snap auto delete options to guarantee space for overwrites? I've been told that this is in fact now the better practice, but I haven't gotten an official answer yet.
It can be difficult to win deals against other storage vendors if 100% fractional reserve has to be included in the storage sizing; that can really amount to a lot of extra storage. It's difficult to assess a true fractional reserve figure, especially if there just isn't data to draw from to make that calculation. So 100% is safe, but customers don't want to hear that they have to purchase X amount of storage now and can only reduce fractional reserve over time, once they have a better understanding of what the "true" figure should be. The sale is usually lost at that point. I know about selling the value-add of a NetApp solution, but let's assume the deal is coming down to how much capacity the client needs to purchase.
So, once again, is it now best practice to set fractional reserve to 0% and use the mechanisms I've described (auto grow, auto delete) to guarantee overwrites?
2009-02-17 09:14 PM
I'm looking at that output and have a few questions.
2009-02-18 02:22 PM
Frankly, I did not track the aggregate space, since the aggregate was being actively used by other applications and users at the same time; that accounts for the aggregate space discrepancy. I will try the same example on a dedicated aggregate in a few days and post its results back.
As for the auto grow option: yes, an increment in the storage occurs as soon as it is enabled for the first time. The volume grew four times in the process, including that first time.
The space for the clone comes from the aggregate. When a clone is created, it uses the backing snapshot for reads; new writes to the clone take up space in the aggregate. The clone has about 150 MB to write to (which is the first auto growth) plus an additional 150 MB of auto grow capacity available. So if writes to the FlexClone exceed that 300 MB (where the writes are not to new space in the filesystem but updates to blocks in existing files), the volume fills up and the LUN goes offline - which is precisely what happened.
The numbers do add up to be correct.
Hope I cleared things up.
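For anyone re-running this on a dedicated aggregate, the space accounting can be double-checked on the controller with commands like these (the clone and aggregate names are illustrative):
# show current autosize settings (maximum size and increment)
Lincoln> vol autosize testclone
# show used / available space in the clone volume
Lincoln> df -h testclone
# show space consumed in the hosting aggregate
Lincoln> df -A aggr1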