<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>Re: Fractional Reserve on FlexClone in Data Protection</title>
    <link>https://community.netapp.com/t5/Data-Protection/Fractional-Reserve-on-FlexClone/m-p/71005#M9340</link>
    <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;We have a source volume with following options:&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;napp1&amp;gt; vol options virtual1sd1&lt;BR /&gt;nosnap=off, nosnapdir=off, minra=off, no_atime_update=off, nvfail=off,&lt;BR /&gt;ignore_inconsistent=off, snapmirrored=off, create_ucode=on,&lt;BR /&gt;convert_ucode=off, maxdirsize=167690, schedsnapname=ordinal,&lt;BR /&gt;fs_size_fixed=off, compression=off, &lt;STRONG&gt;guarantee=volume&lt;/STRONG&gt;, svo_enable=off,&lt;BR /&gt;svo_checksum=off, svo_allow_rman=off, svo_reject_errors=off,&lt;BR /&gt;no_i2p=off, &lt;STRONG&gt;fractional_reserve=5&lt;/STRONG&gt;, extent=off, try_first=volume_grow,&lt;BR /&gt;read_realloc=off, snapshot_clone_dependency=off&lt;BR /&gt;napp1&amp;gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;napp1&amp;gt; df -Vh virtual1sd1&lt;BR /&gt;Filesystem&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; total&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; used&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; avail capacity&amp;nbsp; Mounted on&lt;BR /&gt;/vol/virtual1sd1/&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; &lt;STRONG&gt;15GB&lt;/STRONG&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 10GB&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 4583MB&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 70%&amp;nbsp; /vol/virtual1sd1/&lt;BR /&gt;/vol/virtual1sd1/.snapshot&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 0KB&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 720KB&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 0KB&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; ---%&amp;nbsp; /vol/virtual1sd1/.snapshot&lt;BR /&gt;napp1&amp;gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;napp1&amp;gt; snap list virtual1sd1&lt;BR /&gt;Volume virtual1sd1&lt;BR 
/&gt;working...&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp; %/used&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; %/total&amp;nbsp; date&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; name&lt;BR /&gt;----------&amp;nbsp; ----------&amp;nbsp; ------------&amp;nbsp; --------&lt;BR /&gt;&amp;nbsp; 0% ( 0%)&amp;nbsp;&amp;nbsp;&amp;nbsp; 0% ( 0%)&amp;nbsp; May 10 13:13&amp;nbsp; snap_vgksdss&amp;nbsp;&amp;nbsp; (busy,vclone)&lt;BR /&gt;napp1&amp;gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;After snap created and connected following occurs:&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Mon May 10 13:15:47 CDT [napp1: wafl.volume.clone.fractional_rsrv.changed:info]: Fractional reservation for clone 'Snapdrive_virtual1sd1_volume_clone_from_snap_vgksdss_snapshot' was changed to 100 percent because guarantee is set to 'file' or 'none'.&lt;/P&gt;&lt;P&gt;Mon May 10 13:15:51 CDT [napp1: wafl.volume.clone.created:info]: Volume clone Snapdrive_virtual1sd1_volume_clone_from_snap_vgksdss_snapshot of volume virtual1sd1 was created successfully.&lt;/P&gt;&lt;P&gt;Creation of clone volume 'Snapdrive_virtual1sd1_volume_clone_from_snap_vgksdss_snapshot' has completed.&lt;/P&gt;&lt;P&gt;Mon May 10 13:15:51 CDT [napp1: wafl.vol.full:notice]: file system on volume Snapdrive_virtual1sd1_volume_clone_from_snap_vgksdss_snapshot is full&lt;/P&gt;&lt;P&gt;Mon May 10 13:15:51 CDT [napp1: lun.newLocation.offline:warning]: LUN /vol/Snapdrive_virtual1sd1_volume_clone_from_snap_vgksdss_snapshot/lun1 has been taken offline to prevent map conflicts after a copy or move operation.&lt;/P&gt;&lt;P&gt;Mon May 10 13:16:19 CDT [napp1: lun.map:info]: LUN /vol/Snapdrive_virtual1sd1_volume_clone_from_snap_vgksdss_snapshot/lun1 was mapped to initiator group xvirt1=3&lt;/P&gt;&lt;P&gt;Mon May 10 13:16:23 CDT [napp1: wafl.vol.full:notice]: file system on volume Snapdrive_virtual1sd1_volume_clone_from_snap_vgksdss_snapshot is 
full&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Now, if I had vol autosize enabled on the source volume, I would be spammed with vol autosize failures. If I had snap autodelete on the parent, the containing snapshot would be deleted after FlexClone volume creation. I have tried fractional_reserve at 0 and at 20 with the same result. Initially my volume was 10.1GB and the LUN it contains 10GB. It is a space-reserved LUN because I want to guarantee writes and not have it go offline (this happened earlier in our migration and I was not too happy about it).&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;So now on my test volume I have 50% free space in the volume, yet SnapDrive reports a metadata write error. SnapDrive then strands the clone and can no longer interact with it. It requires a storage admin to go in, unmap the devices, destroy the FlexClone volume, and delete the snapshots. The storage commands from SnapDrive also incorrectly report the status of the FlexClone volume, making it appear to be split from the backing snapshot, even though I see differently at the filer level.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
    <pubDate>Mon, 10 May 2010 18:36:27 GMT</pubDate>
    <dc:creator>james_f_hall</dc:creator>
    <dc:date>2010-05-10T18:36:27Z</dc:date>
    <item>
      <title>Fractional Reserve on FlexClone</title>
      <link>https://community.netapp.com/t5/Data-Protection/Fractional-Reserve-on-FlexClone/m-p/70984#M9335</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;I am having an issue with SnapDrive for Unix 4.1.1 when creating a FlexClone volume off a snapshot so that we can back up that volume.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I have two parent volumes, one on each filer head. The vol options are as follows:&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;napp1&amp;gt; vol options xdb3vol0&lt;BR /&gt;nosnap=off, nosnapdir=off, minra=off, no_atime_update=off, nvfail=off,&lt;BR /&gt;ignore_inconsistent=off, snapmirrored=off, create_ucode=on,&lt;BR /&gt;convert_ucode=off, maxdirsize=167690, schedsnapname=ordinal,&lt;BR /&gt;fs_size_fixed=off, compression=off, guarantee=volume, svo_enable=off,&lt;BR /&gt;svo_checksum=off, svo_allow_rman=off, svo_reject_errors=off,&lt;BR /&gt;no_i2p=off, fractional_reserve=0, extent=off, try_first=volume_grow,&lt;BR /&gt;read_realloc=off, snapshot_clone_dependency=off&lt;BR /&gt;napp1&amp;gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;A snap was created yesterday:&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Filesystem&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; total&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; used&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; avail capacity&amp;nbsp; Mounted on&lt;BR /&gt;/vol/xdb3vol0/&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 1850GB&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 1754GB&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 96GB&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 95%&amp;nbsp; /vol/xdb3vol0/&lt;BR /&gt;/vol/xdb3vol0/.snapshot&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 462GB&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 63GB&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 399GB&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 14%&amp;nbsp; 
/vol/xdb3vol0/.snapshot&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;But when the AIX admin tries to connect to the snapshot and create a flexclone volume we get the following error:&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Sun May&amp;nbsp; 2 15:14:06 CDT [napp1:&amp;nbsp; wafl.volume.clone.fractional_rsrv.changed:info]: Fractional reservation for&amp;nbsp; clone 'Snapdrive_xdb3vol0_volume_clone_from_snap_vgmfgdss_snapshot' was changed&amp;nbsp; to &lt;STRONG&gt;100 percent because guarantee is set to&amp;nbsp; 'file' or 'none'&lt;/STRONG&gt;.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Sun May&amp;nbsp; 2 15:14:11 CDT [napp1:&amp;nbsp; wafl.volume.clone.created:info]: Volume clone&amp;nbsp; Snapdrive_xdb3vol0_volume_clone_from_snap_vgmfgdss_snapshot of volume xdb3vol0&amp;nbsp; was created successfully.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Creation of clone volume&amp;nbsp; 'Snapdrive_xdb3vol0_volume_clone_from_snap_vgmfgdss_snapshot' has&amp;nbsp; completed.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Sun May&amp;nbsp; 2 15:14:11 CDT [napp1:&amp;nbsp; lun.newLocation.offline:warning]: LUN&amp;nbsp; /vol/Snapdrive_xdb3vol0_volume_clone_from_snap_vgmfgdss_snapshot/lun13 has been&amp;nbsp; taken offline to prevent map conflicts after a copy or move&amp;nbsp; operation.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Sun May&amp;nbsp; 2 15:14:11 CDT [napp1:&amp;nbsp; lun.newLocation.offline:warning]: LUN&amp;nbsp; /vol/Snapdrive_xdb3vol0_volume_clone_from_snap_vgmfgdss_snapshot/lun12 has been&amp;nbsp; taken offline to prevent map conflicts after a copy or move&amp;nbsp; operation.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Sun May&amp;nbsp; 2 15:14:11 CDT [napp1:&amp;nbsp; lun.newLocation.offline:warning]: LUN&amp;nbsp; /vol/Snapdrive_xdb3vol0_volume_clone_from_snap_vgmfgdss_snapshot/lun11 has been&amp;nbsp; taken offline to prevent map conflicts after a copy or move&amp;nbsp; operation.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Sun May&amp;nbsp; 2 15:14:11 CDT [napp1:&amp;nbsp; 
lun.newLocation.offline:warning]: LUN&amp;nbsp; /vol/Snapdrive_xdb3vol0_volume_clone_from_snap_vgmfgdss_snapshot/lun10 has been&amp;nbsp; taken offline to prevent map conflicts after a copy or move&amp;nbsp; operation.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Sun May&amp;nbsp; 2 15:14:11 CDT [napp1:&amp;nbsp; lun.newLocation.offline:warning]: LUN&amp;nbsp; /vol/Snapdrive_xdb3vol0_volume_clone_from_snap_vgmfgdss_snapshot/lun15 has been&amp;nbsp; taken offline to prevent map conflicts after a copy or move&amp;nbsp; operation.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Sun May&amp;nbsp; 2 15:14:11 CDT [napp1:&amp;nbsp; lun.newLocation.offline:warning]: LUN&amp;nbsp; /vol/Snapdrive_xdb3vol0_volume_clone_from_snap_vgmfgdss_snapshot/lun16 has been&amp;nbsp; taken offline to prevent map conflicts after a copy or move&amp;nbsp; operation.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Sun May&amp;nbsp; 2 15:14:11 CDT [napp1:&amp;nbsp; lun.newLocation.offline:warning]: LUN&amp;nbsp; /vol/Snapdrive_xdb3vol0_volume_clone_from_snap_vgmfgdss_snapshot/lun14 has been&amp;nbsp; taken offline to prevent map conflicts after a copy or move&amp;nbsp; operation.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Sun May&amp;nbsp; 2 15:14:19 CDT [napp1:&amp;nbsp; wafl.vol.autoSize.done:info]: Automatic increase size of volume&amp;nbsp; 'Snapdrive_xdb3vol0_volume_clone_from_snap_vgmfgdss_snapshot' by 70464308 kbytes&amp;nbsp; done.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Sun May&amp;nbsp; 2 15:14:31 CDT [napp1:&amp;nbsp; wafl.vol.autoSize.fail:info]: &lt;STRONG&gt;Unable to grow&amp;nbsp; volume 'Snapdrive_xdb3vol0_volume_clone_from_snap_vgmfgdss_snapshot' to recover&amp;nbsp; space: Volume cannot be grown beyond maximum growth&amp;nbsp; limit&lt;/STRONG&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;If this is a FlexClone that is backed by a snapshot, then shouldn't it utilize the snapshot for any deltas? 
What changes do I need to make to the parent volume?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;What is the point of having a snapshot and a FlexClone if I have to have 100% of the space reserved for it? If, for some unforeseen reason, more than 20% of the data were to change, I wouldn't mind the snap and FlexClone being auto-deleted as long as the parent volume and its LUNs stay online. It would seem to be an inefficient use of resources if I have to have 100% fractional reserve.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 05 Jun 2025 07:14:50 GMT</pubDate>
      <guid>https://community.netapp.com/t5/Data-Protection/Fractional-Reserve-on-FlexClone/m-p/70984#M9335</guid>
      <dc:creator>james_f_hall</dc:creator>
      <dc:date>2025-06-05T07:14:50Z</dc:date>
    </item>
    <item>
      <title>Re: Fractional Reserve on FlexClone</title>
      <link>https://community.netapp.com/t5/Data-Protection/Fractional-Reserve-on-FlexClone/m-p/70987#M9336</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;I am still having the issue:&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;05/07/10 10:20:17&amp;nbsp; STATUS:INFORM&amp;nbsp; ERRCODE:999 connect_snap:&amp;nbsp; Snap connect&amp;nbsp; started.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt; connecting&amp;nbsp; vgksdss:&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; connecting lun&amp;nbsp; napp1:/vol/virtual1sd1/lun1&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; &lt;STRONG&gt;creating unrestricted volume clone&lt;/STRONG&gt; napp1:/vol/Snapdrive_virtual1sd1_volume_clone_from_snap_vgksdss_snapshot ...&amp;nbsp; success&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; connecting lun&amp;nbsp; napp2:/vol/virtual1sd2/lun2&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; &lt;STRONG&gt;creating unrestricted volume clone&lt;/STRONG&gt; napp2:/vol/Snapdrive_virtual1sd2_volume_clone_from_snap_vgksdss_snapshot ...&amp;nbsp; success&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; mapping new&amp;nbsp; lun(s) ... done&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; discovering new&amp;nbsp; lun(s) ... 
done&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Importing&amp;nbsp; vgksdssbc&lt;/P&gt;&lt;P&gt;Successfully connected&amp;nbsp; to snapshot napp1:/vol/virtual1sd1:snap_vgksdss&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; disk group&amp;nbsp; vgksdssbc containing host volumes&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; bclvksdss_log&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; bclvksdss_fs (filesystem: /bc/FS1)&lt;/P&gt;&lt;P&gt;0002-245 Command error:&amp;nbsp; &lt;STRONG&gt;cannot write Flexclone metadata&lt;/STRONG&gt; to&amp;nbsp; napp1:/vol/Snapdrive_virtual1sd1_volume_clone_from_snap_vgksdss_snapshot storage&amp;nbsp; system volume.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;05/07/10 10:22:52&amp;nbsp; STATUS:ERROR&amp;nbsp; ERRCODE:32 connect_snap: snapdrive connect failed with return code&amp;nbsp; 18.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;from_snap_vgksdss_snapshot' was changed to 100 percent because guarantee is set to 'file' or 'none'.&lt;BR /&gt;Fri May&amp;nbsp; 7 10:22:51 CDT [napp2: wafl.volume.clone.created:info]: Volume clone Snapdrive_virtual1sd2_volume_clone_from_snap_vgksdss_snapshot of volume virtual1sd2 was created successfully.&lt;BR /&gt;Creation of clone volume 'Snapdrive_virtual1sd2_volume_clone_from_snap_vgksdss_snapshot' has completed.&lt;BR /&gt;Fri May&amp;nbsp; 7 10:22:51 CDT [napp2: wafl.vol.full:notice]: &lt;STRONG&gt;file system on volume Snapdrive_virtual1sd2_volume_clone_from_snap_vgksdss_snapshot is full&lt;/STRONG&gt;&lt;BR /&gt;Fri May&amp;nbsp; 7 10:22:51 CDT [napp2: lun.newLocation.offline:warning]: LUN 
/vol/Snapdrive_virtual1sd2_volume_clone_from_snap_vgksdss_snapshot/lun2 has been taken offline to prevent map conflicts after a copy or move operation.&lt;BR /&gt;Fri May&amp;nbsp; 7 10:23:04 CDT [napp2: lun.map:info]: LUN /vol/Snapdrive_virtual1sd2_volume_clone_from_snap_vgksdss_snapshot/lun2 was mapped to initiator group xvirt1=4&lt;BR /&gt;Fri May&amp;nbsp; 7 10:23:08 CDT [napp2: wafl.vol.full:notice]: &lt;STRONG&gt;file system on volume Snapdrive_virtual1sd2_volume_clone_from_snap_vgksdss_snapshot is full&lt;/STRONG&gt;&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Fri, 07 May 2010 15:31:57 GMT</pubDate>
      <guid>https://community.netapp.com/t5/Data-Protection/Fractional-Reserve-on-FlexClone/m-p/70987#M9336</guid>
      <dc:creator>james_f_hall</dc:creator>
      <dc:date>2010-05-07T15:31:57Z</dc:date>
    </item>
    <item>
      <title>Re: Fractional Reserve on FlexClone</title>
      <link>https://community.netapp.com/t5/Data-Protection/Fractional-Reserve-on-FlexClone/m-p/70992#M9337</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Have you got vol autosize enabled on the source volume? Also, what is your snap reserve?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;As a non-intrusive test, set the snap reserve to zero and try again. Set it back to what it was after the test.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Eric&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Mon, 10 May 2010 02:39:09 GMT</pubDate>
      <guid>https://community.netapp.com/t5/Data-Protection/Fractional-Reserve-on-FlexClone/m-p/70992#M9337</guid>
      <dc:creator>eric_barlier</dc:creator>
      <dc:date>2010-05-10T02:39:09Z</dc:date>
    </item>
    <item>
      <title>Re: Fractional Reserve on FlexClone</title>
      <link>https://community.netapp.com/t5/Data-Protection/Fractional-Reserve-on-FlexClone/m-p/70997#M9338</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Yes. I set vol autosize on the source volume, and I have turned it off in a "test" environment. I have tried with snap reserve set to 0, with the same result; snap reserve at 0 and fractional reserve at 20, same result. Not sure what I am missing here. There is no clear guide as to what I should have set for this to work, other than what appears to be trial and error. I don't like trial and error very much, as it gives the illusion I don't know what I am doing. That might be true in this case, but it isn't for lack of ingesting hundreds of postings, various TR readings, and endless Google searches, plus a case open for over a week with IBM on the issue. They seem less knowledgeable than I am in this circumstance.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Now, I have been able to get it to work, unreliably and still with errors on the filer side,&amp;nbsp; but luckily with no fault to the source volume, by setting the snap reserve to 0, fractional reserve to 20, and throwing a couple hundred more GBs at the volume. This has let me get the required blocks off to tape before we destroy the FlexClone volume and delete the snapshot.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;When the snapshot has been taken and the FlexClone is active, I go from close to 700GB free in each volume (1.7TB used volumes) to about 300GB free. Where did the 400GB go? It seems like I am requiring a lot more space to make these operations successful. If that is the case, so be it, we need it to work, but it wasn't the behavior we were expecting.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Now, if we had been told that to make FlexClones work we would need a 220GB volume for a 100GB LUN (like some articles suggest) then I would have asked long ago what is the value in the NetApp over say a similar Tier2 array that has snap reserve pools? 
Instead we are told that FlexClones are thin provisioned and require no additional space other than the snapshot that keeps track of all changes between the current state of the AFS and the time the FlexClone was generated off the backing snapshot. Now I agree that things can get hairy pretty quickly if we were then going to the FlexClone and making adjustments to it.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Mon, 10 May 2010 12:39:46 GMT</pubDate>
      <guid>https://community.netapp.com/t5/Data-Protection/Fractional-Reserve-on-FlexClone/m-p/70997#M9338</guid>
      <dc:creator>james_f_hall</dc:creator>
      <dc:date>2010-05-10T12:39:46Z</dc:date>
    </item>
    <item>
      <title>Re: Fractional Reserve on FlexClone</title>
      <link>https://community.netapp.com/t5/Data-Protection/Fractional-Reserve-on-FlexClone/m-p/71000#M9339</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;What an interesting example of a bad interaction between different features &lt;SPAN __jive_emoticon_name="happy" __jive_macro_name="emoticon" class="jive_macro jive_emote" src="https://community.netapp.com/4.0.6/images/emoticons/happy.gif"&gt;&lt;/SPAN&gt; I guess NetApp will eventually have to provide explicit autosize control during FlexClone creation, just like the one for volume guarantees.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Now my question - does it actually make the clone connect fail, or is it just a cosmetic issue? Testing it (without SnapDrive involved) I can reproduce this behaviour, but the clone is created, I can online the LUN, and the clone does not actually consume any space in the aggregate (even though it seems to):&lt;/P&gt;&lt;PRE __jive_macro_name="quote" class="jive_text_macro jive_macro_quote"&gt;&lt;SPAN style="font-family: courier new,courier;"&gt;Mon May 10 18:09:46 MSD [wafl.vol.autoSize.fail:info]: Unable to grow volume 'test1_clone1' to recover space: Volume cannot be grown beyond maximum growth limit&lt;/SPAN&gt;&lt;P&gt;&lt;/P&gt;&lt;SPAN style="font-family: courier new,courier;"&gt;simsim*&amp;gt; lun show&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; /vol/test1/lun1&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 70m (73400320)&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; (r/w, online)&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; /vol/test1_clone1/lun1&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 70m (73400320)&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; (r/w, online)&lt;BR /&gt;simsim*&amp;gt; df -r&lt;BR /&gt;Filesystem&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 
kbytes&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; used&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; avail&amp;nbsp;&amp;nbsp; reserved&amp;nbsp; Mounted on&lt;BR /&gt;/vol/test1/&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 102400&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 72028&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 30372&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 0&amp;nbsp; /vol/test1/&lt;BR /&gt;/vol/test1/.snapshot&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 0&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 40&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 0&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 0&amp;nbsp; /vol/test1/.snapshot&lt;BR /&gt;/vol/test1_clone1/&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 122880&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 122880&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 0&amp;nbsp;&amp;nbsp;&amp;nbsp; (71832) /vol/test1_clone1/&lt;BR /&gt;/vol/test1_clone1/.snapshot&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 0&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 52&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 0&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 0&amp;nbsp; /vol/test1_clone1/.snapshot&lt;BR /&gt; &lt;/SPAN&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new,courier;"&gt;simsim*&amp;gt; aggr show_space aggr0&lt;BR /&gt;Aggregate 'aggr0'&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new,courier;"&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp; Total 
space&amp;nbsp;&amp;nbsp;&amp;nbsp; WAFL reserve&amp;nbsp;&amp;nbsp;&amp;nbsp; Snap reserve&amp;nbsp;&amp;nbsp;&amp;nbsp; Usable space&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; BSR NVLOG&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; A-SIS&lt;BR /&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 1024000KB&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 102400KB&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 46080KB&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 875520KB&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 0KB&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 0KB&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new,courier;"&gt;Space allocated to volumes in the aggregate&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new,courier;"&gt;Volume&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Allocated&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Used&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Guarantee&lt;BR /&gt;test1&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 103156KB&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 
72492KB&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; volume&lt;BR /&gt;&lt;STRONG&gt;test1_clone1&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 1236KB&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 408KB&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; none&lt;/STRONG&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;BR /&gt;&lt;/PRE&gt;&lt;P&gt;&lt;BR /&gt;Also notice that this issue happens only if you create a non-space-reserved clone. If your clone is space reserved, fractional_reserve is not forced to 100. You do need extra space in the aggregate, though:&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;PRE __jive_macro_name="quote" class="jive_text_macro jive_macro_quote"&gt;test1&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 103156KB&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 72736KB&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; volume&lt;BR /&gt;test1_clone1&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 31168KB&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 160KB&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; volume&lt;BR /&gt;&lt;/PRE&gt;&lt;P&gt;The space you will need is exactly the free space in the parent volume, which is 
somewhat logical &lt;SPAN __jive_emoticon_name="happy" __jive_macro_name="emoticon" class="jive_macro jive_emote" src="https://community.netapp.com/4.0.6/images/emoticons/happy.gif"&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Oh, and BTW, answering your question:&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;PRE __jive_macro_name="quote" class="jive_text_macro jive_macro_quote"&gt;&lt;P&gt;When the snapshot has been taken and the flexclone active, I go from close to 700GB free in each volume (1.7TB used volumes) to about 300GB free. Where did the 400GB go?&lt;/P&gt;&lt;/PRE&gt;&lt;P&gt;&lt;BR /&gt;They have been reserved by virtue of fractional_reserve being set to 100. In my example above you see that it tries to reserve the 70M contained in the base snapshot. It fails to do so, but because the clone space guarantee is "none", NetApp ignores this error and lets me continue.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Mon, 10 May 2010 14:19:16 GMT</pubDate>
      <guid>https://community.netapp.com/t5/Data-Protection/Fractional-Reserve-on-FlexClone/m-p/71000#M9339</guid>
      <dc:creator>aborzenkov</dc:creator>
      <dc:date>2010-05-10T14:19:16Z</dc:date>
    </item>
    <item>
      <title>Re: Fractional Reserve on FlexClone</title>
      <link>https://community.netapp.com/t5/Data-Protection/Fractional-Reserve-on-FlexClone/m-p/71005#M9340</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;We have a source volume with following options:&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;napp1&amp;gt; vol options virtual1sd1&lt;BR /&gt;nosnap=off, nosnapdir=off, minra=off, no_atime_update=off, nvfail=off,&lt;BR /&gt;ignore_inconsistent=off, snapmirrored=off, create_ucode=on,&lt;BR /&gt;convert_ucode=off, maxdirsize=167690, schedsnapname=ordinal,&lt;BR /&gt;fs_size_fixed=off, compression=off, &lt;STRONG&gt;guarantee=volume&lt;/STRONG&gt;, svo_enable=off,&lt;BR /&gt;svo_checksum=off, svo_allow_rman=off, svo_reject_errors=off,&lt;BR /&gt;no_i2p=off, &lt;STRONG&gt;fractional_reserve=5&lt;/STRONG&gt;, extent=off, try_first=volume_grow,&lt;BR /&gt;read_realloc=off, snapshot_clone_dependency=off&lt;BR /&gt;napp1&amp;gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;napp1&amp;gt; df -Vh virtual1sd1&lt;BR /&gt;Filesystem&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; total&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; used&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; avail capacity&amp;nbsp; Mounted on&lt;BR /&gt;/vol/virtual1sd1/&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; &lt;STRONG&gt;15GB&lt;/STRONG&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 10GB&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 4583MB&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 70%&amp;nbsp; /vol/virtual1sd1/&lt;BR /&gt;/vol/virtual1sd1/.snapshot&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 0KB&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 720KB&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 0KB&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; ---%&amp;nbsp; /vol/virtual1sd1/.snapshot&lt;BR /&gt;napp1&amp;gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;napp1&amp;gt; snap list virtual1sd1&lt;BR /&gt;Volume virtual1sd1&lt;BR 
/&gt;working...&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp; %/used&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; %/total&amp;nbsp; date&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; name&lt;BR /&gt;----------&amp;nbsp; ----------&amp;nbsp; ------------&amp;nbsp; --------&lt;BR /&gt;&amp;nbsp; 0% ( 0%)&amp;nbsp;&amp;nbsp;&amp;nbsp; 0% ( 0%)&amp;nbsp; May 10 13:13&amp;nbsp; snap_vgksdss&amp;nbsp;&amp;nbsp; (busy,vclone)&lt;BR /&gt;napp1&amp;gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;After the snapshot is created and connected, the following occurs:&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Mon May 10 13:15:47 CDT [napp1: wafl.volume.clone.fractional_rsrv.changed:info]: Fractional reservation for clone 'Snapdrive_virtual1sd1_volume_clone_from_snap_vgksdss_snapshot' was changed to 100 percent because guarantee is set to 'file' or 'none'.&lt;/P&gt;&lt;P&gt;Mon May 10 13:15:51 CDT [napp1: wafl.volume.clone.created:info]: Volume clone Snapdrive_virtual1sd1_volume_clone_from_snap_vgksdss_snapshot of volume virtual1sd1 was created successfully.&lt;/P&gt;&lt;P&gt;Creation of clone volume 'Snapdrive_virtual1sd1_volume_clone_from_snap_vgksdss_snapshot' has completed.&lt;/P&gt;&lt;P&gt;Mon May 10 13:15:51 CDT [napp1: wafl.vol.full:notice]: file system on volume Snapdrive_virtual1sd1_volume_clone_from_snap_vgksdss_snapshot is full&lt;/P&gt;&lt;P&gt;Mon May 10 13:15:51 CDT [napp1: lun.newLocation.offline:warning]: LUN /vol/Snapdrive_virtual1sd1_volume_clone_from_snap_vgksdss_snapshot/lun1 has been taken offline to prevent map conflicts after a copy or move operation.&lt;/P&gt;&lt;P&gt;Mon May 10 13:16:19 CDT [napp1: lun.map:info]: LUN /vol/Snapdrive_virtual1sd1_volume_clone_from_snap_vgksdss_snapshot/lun1 was mapped to initiator group xvirt1=3&lt;/P&gt;&lt;P&gt;Mon May 10 13:16:23 CDT [napp1: wafl.vol.full:notice]: file system on volume Snapdrive_virtual1sd1_volume_clone_from_snap_vgksdss_snapshot is full&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Now, if I had vol autosize on the source volume, I would be spammed with vol autosize failures. If I had snap autodelete on the parent, I would see the containing snapshot deleted after FlexClone volume creation. I have tried fractional_reserve at 0 and at 20 with the same result. Initially my volume was 10.1GB and the containing LUN 10GB. It is a space-reserved LUN because I want to guarantee writes and not have it go offline (this happened earlier in our migration and I was not too happy about it).&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;So now on my test volume I have 50% free space in the volume, and I am getting errors from SnapDrive reporting a metadata write error. SnapDrive then strands the clone and can no longer interact with it. It requires a Storage Admin to go in, unmap the devices, destroy the FlexClone volume, and delete the snapshots. The storage commands from SnapDrive also incorrectly report the status of the FlexClone volume, making it appear to be split from the backing snapshot, even though I see differently at the filer level.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Mon, 10 May 2010 18:36:27 GMT</pubDate>
      <guid>https://community.netapp.com/t5/Data-Protection/Fractional-Reserve-on-FlexClone/m-p/71005#M9340</guid>
      <dc:creator>james_f_hall</dc:creator>
      <dc:date>2010-05-10T18:36:27Z</dc:date>
    </item>
    <item>
      <title>Re: Fractional Reserve on FlexClone</title>
      <link>https://community.netapp.com/t5/Data-Protection/Fractional-Reserve-on-FlexClone/m-p/71009#M9341</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Please, show&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new,courier;"&gt;df -A&lt;/SPAN&gt; for aggregate containing virtual1sd1&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new,courier;"&gt;aggr show_space&lt;/SPAN&gt; for aggregate containing virtual1sd1&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new,courier;"&gt;vol options Snapdrive_virtual1sd1_volume_clone_from_snap_vgksdss_snapshot&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new,courier;"&gt;df -r Snapdrive_virtual1sd1_volume_clone_from_snap_vgksdss_snapshot&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new,courier;"&gt;df -h Snapdrive_virtual1sd1_volume_clone_from_snap_vgksdss_snapshot&lt;/SPAN&gt;&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Mon, 10 May 2010 19:38:37 GMT</pubDate>
      <guid>https://community.netapp.com/t5/Data-Protection/Fractional-Reserve-on-FlexClone/m-p/71009#M9341</guid>
      <dc:creator>aborzenkov</dc:creator>
      <dc:date>2010-05-10T19:38:37Z</dc:date>
    </item>
    <item>
      <title>Re: Fractional Reserve on FlexClone</title>
      <link>https://community.netapp.com/t5/Data-Protection/Fractional-Reserve-on-FlexClone/m-p/71014#M9342</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;I bumped into the same issue today, on an AIX install too, with 5% FR and 20% SR.&lt;/P&gt;&lt;P&gt;The FlexClone would fail with the same "writing metadata" error.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;In this case the flexvols were full, 100% utilized, as the customer had created LUNs of the same size as the flexvol and had intended to use the 20% SR for the snapshots. But reducing the SR to 0% to free space didn't help, as the same SDU operation would fail during the metadata write.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;The quick fix was to modify snapdrive.conf to create a LUN clone instead of a FlexClone.&lt;/P&gt;&lt;P&gt;For this customer it wouldn't make any difference, as they plan to refresh the LUNs (SMO clone) every night.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;We may have to create a burt for this SDU behavior; SDU shouldn't be touching the %FR values unless we tell it to.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 27 May 2010 15:18:33 GMT</pubDate>
      <guid>https://community.netapp.com/t5/Data-Protection/Fractional-Reserve-on-FlexClone/m-p/71014#M9342</guid>
      <dc:creator>jcosta</dc:creator>
      <dc:date>2010-05-27T15:18:33Z</dc:date>
    </item>
    <item>
      <title>Re: Fractional Reserve on FlexClone</title>
      <link>https://community.netapp.com/t5/Data-Protection/Fractional-Reserve-on-FlexClone/m-p/71019#M9343</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;I don't think the issue here is directly related to Snapdrive and it's interaction with Flexclones. This seems to be a fundamental change in the way Flexclones, and their reserves (volume, fractional, snap) behave with Ontap &amp;gt;= 7.2.6.1. Here is my example of the problem. Note the 'volume' guarantee, and fractional reserve set to 'zero' on the source volume.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;egna10a&amp;gt; df -A egna10a_aggr01&lt;/P&gt;&lt;P&gt;Aggregate&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; kbytes&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; used&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; avail capacity&lt;/P&gt;&lt;P&gt;egna10a_aggr01&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 10154150324 6355373044 3798777280&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 63%&lt;/P&gt;&lt;P&gt;egna10a_aggr01/.snapshot&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 0&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 0&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 0&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; ---%&lt;/P&gt;&lt;DIV&gt; &lt;/DIV&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;egna10a&amp;gt; df -g egna10a_vol001&lt;/P&gt;&lt;P&gt;Filesystem&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; total&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; used&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; avail capacity&amp;nbsp; Mounted on&lt;/P&gt;&lt;P&gt;/vol/egna10a_vol001/&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 5500GB&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 4640GB&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 
859GB&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 84%&amp;nbsp; /vol/egna10a_vol001/&lt;/P&gt;&lt;P&gt;snap reserve&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 0GB&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 19GB&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 0GB&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; ---%&amp;nbsp; /vol/egna10a_vol001/..&lt;/P&gt;&lt;DIV&gt; &lt;/DIV&gt;&lt;P&gt;&lt;/P&gt;&lt;DIV id="_mcePaste"&gt;egna10a&amp;gt; vol options egna10a_vol001&lt;/DIV&gt;&lt;DIV&gt; &lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;nosnap=on, nosnapdir=on, minra=off, no_atime_update=off, nvfail=off,&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;ignore_inconsistent=off, snapmirrored=off, create_ucode=on,&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;convert_ucode=off, maxdirsize=167690, schedsnapname=ordinal,&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;fs_size_fixed=off, guarantee=volume, svo_enable=off, svo_checksum=off,&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;svo_allow_rman=off, svo_reject_errors=off, no_i2p=off,&lt;/DIV&gt;&lt;DIV id="_mcePaste"&gt;fractional_reserve=0, extent=off, try_first=snap_delete&lt;/DIV&gt;&lt;DIV&gt; &lt;/DIV&gt;&lt;DIV&gt; &lt;/DIV&gt;&lt;DIV&gt; &lt;/DIV&gt;&lt;DIV&gt; &lt;/DIV&gt;&lt;DIV&gt; &lt;/DIV&gt;&lt;DIV&gt; &lt;/DIV&gt;&lt;DIV&gt;Now, I want to create a Flexclone volume and 'thin provision' it as to not take up 2 x's the space.&lt;/DIV&gt;&lt;DIV&gt; &lt;/DIV&gt;&lt;DIV&gt;&lt;DIV&gt; &lt;/DIV&gt;&lt;DIV&gt; &lt;/DIV&gt;&lt;DIV&gt; &lt;/DIV&gt;&lt;DIV&gt;egna10a&amp;gt; vol clone create CLONE_egna10a_vol001 -s none -b egna10a_vol001 test_snap&lt;/DIV&gt;&lt;DIV&gt; &lt;/DIV&gt;&lt;DIV&gt;Tue Jun&amp;nbsp; 8 12:32:49 PDT [egna10a: wafl.volume.clone.fractional_rsrv.changed:info]: Fractional reservation for clone 'CLONE_egna10a_vol001' was changed to 100 percent because guarantee is set to 'file' or 
'none'.&lt;/DIV&gt;&lt;DIV&gt;&lt;DIV&gt;Tue Jun&amp;nbsp; 8 12:33:07 PDT [egna10a: wafl.snaprestore.revert:notice]: Reverting volume CLONE_egna10a_vol001 to a previous snapshot.&lt;/DIV&gt;&lt;DIV&gt;Tue Jun&amp;nbsp; 8 12:33:08 PDT [egna10a: wafl.volume.clone.created:info]: Volume clone CLONE_egna10a_vol001 of volume egna10a_vol001 was created successfully.&lt;/DIV&gt;&lt;DIV&gt;Tue Jun&amp;nbsp; 8 12:33:23 PDT [egna10a: wafl.volume.snap.autoDelete:info]: Deleting snapshot 'test_snap' in volume 'CLONE_egna10a_vol001' to recover storage&lt;/DIV&gt;&lt;/DIV&gt;&lt;DIV&gt; &lt;/DIV&gt;&lt;DIV&gt;&lt;DIV&gt; &lt;/DIV&gt;&lt;DIV&gt; &lt;/DIV&gt;&lt;DIV&gt; &lt;/DIV&gt;&lt;DIV&gt; &lt;/DIV&gt;&lt;DIV&gt;egna10a&amp;gt; vol options CLONE_egna10a_vol001&lt;/DIV&gt;&lt;DIV&gt; &lt;/DIV&gt;&lt;DIV&gt;nosnap=on, nosnapdir=on, minra=off, no_atime_update=off, nvfail=off,&lt;/DIV&gt;&lt;DIV&gt;ignore_inconsistent=off, snapmirrored=off, create_ucode=on,&lt;/DIV&gt;&lt;DIV&gt;convert_ucode=off, maxdirsize=167690, schedsnapname=ordinal,&lt;/DIV&gt;&lt;DIV&gt;fs_size_fixed=off, guarantee=none, svo_enable=off, svo_checksum=off,&lt;/DIV&gt;&lt;DIV&gt;svo_allow_rman=off, svo_reject_errors=off, no_i2p=off,&lt;/DIV&gt;&lt;DIV&gt;fractional_reserve=100, extent=off, try_first=snap_delete&lt;/DIV&gt;&lt;DIV&gt; &lt;/DIV&gt;&lt;/DIV&gt;&lt;DIV&gt; &lt;/DIV&gt;&lt;DIV&gt; &lt;/DIV&gt;&lt;DIV&gt; &lt;/DIV&gt;&lt;DIV&gt; &lt;/DIV&gt;&lt;DIV&gt; &lt;/DIV&gt;&lt;DIV&gt;Before the clone snapshot is automatically deleted the utilization on the clone is @ 100% and triggers critical snmp traps.&lt;/DIV&gt;&lt;DIV&gt; &lt;/DIV&gt;&lt;/DIV&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;This does NOT happen with Ontap 7.2.5.1. The cloned volume takes on ALL the source volumes attributes including the fractional reserve.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;So it seems now the only real way to thin provision with Netapp is to set the volume guarantee on the source to 'none'. 
No way!&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Tue, 08 Jun 2010 19:37:58 GMT</pubDate>
      <guid>https://community.netapp.com/t5/Data-Protection/Fractional-Reserve-on-FlexClone/m-p/71019#M9343</guid>
      <dc:creator>greg_kemp</dc:creator>
      <dc:date>2010-06-08T19:37:58Z</dc:date>
    </item>
    <item>
      <title>Re: Fractional Reserve on FlexClone</title>
      <link>https://community.netapp.com/t5/Data-Protection/Fractional-Reserve-on-FlexClone/m-p/71024#M9344</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi Greg,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;You said: "So it seems now the only real way to thin provision with Netapp is to set the volume guarantee on the source to 'none'. No way!"&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;No way? Why not? We do it here; there are no problems with this as long as it's done properly and managed a bit. We've claimed back 20TB in our non-prod environment doing this. That's $$$$$$, mate.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Eric&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Tue, 08 Jun 2010 22:39:51 GMT</pubDate>
      <guid>https://community.netapp.com/t5/Data-Protection/Fractional-Reserve-on-FlexClone/m-p/71024#M9344</guid>
      <dc:creator>eric_barlier</dc:creator>
      <dc:date>2010-06-08T22:39:51Z</dc:date>
    </item>
    <item>
      <title>Re: Fractional Reserve on FlexClone</title>
      <link>https://community.netapp.com/t5/Data-Protection/Fractional-Reserve-on-FlexClone/m-p/71029#M9345</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;DIV&gt;How do you claim back space on the source volumes used for LUNs? The LUN reserve comes into play. Do you turn that off as well? That may make sense in the VM world with ESX, OVM, etc.&lt;/DIV&gt;&lt;DIV&gt; &lt;/DIV&gt;&lt;DIV&gt; &lt;/DIV&gt;&lt;DIV&gt; &lt;/DIV&gt;&lt;DIV&gt;My premise for creating snapshots/clones is backups. I have a traditional SAN with Solaris hosts in a VCS cluster supporting production. I can't afford any risk on the source vols/LUNs themselves, and 100% uptime is required. Furthermore, I want to offload the work required for backups to a dedicated host to remove any host-side performance degradation to the application.&lt;/DIV&gt;&lt;DIV&gt; &lt;/DIV&gt;&lt;DIV&gt; &lt;/DIV&gt;&lt;DIV&gt; &lt;/DIV&gt;&lt;DIV&gt;Snapshots/FlexClones worked beautifully for this using 7.2.5.1. The source volumes were well protected (SG=volume, FR=0), while the clones themselves (SG=none, FR=0) required no extra space. The exception, of course, is snapshot growth, which is easily managed. This saved lots of $$.&lt;/DIV&gt;&lt;DIV&gt; &lt;/DIV&gt;&lt;DIV&gt; &lt;/DIV&gt;&lt;DIV&gt; &lt;/DIV&gt;&lt;DIV&gt;In the past I only needed to be concerned with the rate of change (delta) and snapshot growth. Worst case, if something went horribly wrong, I could destroy the clones/snapshots to avoid disaster and any potential impact to the source volumes/LUNs (production).&lt;/DIV&gt;&lt;DIV&gt; &lt;/DIV&gt;&lt;DIV&gt; &lt;/DIV&gt;&lt;DIV&gt; &lt;/DIV&gt;&lt;DIV&gt;Ontap 7.2.6.1 effectively broke this functionality, and much of the motivation to keep buying Netapp. If I need 2x the space of the source for snapshots/clones, most any vendor can accommodate that these days.&lt;/DIV&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Wed, 23 Jun 2010 18:13:04 GMT</pubDate>
      <guid>https://community.netapp.com/t5/Data-Protection/Fractional-Reserve-on-FlexClone/m-p/71029#M9345</guid>
      <dc:creator>greg_kemp</dc:creator>
      <dc:date>2010-06-23T18:13:04Z</dc:date>
    </item>
    <item>
      <title>Re: Fractional Reserve on FlexClone</title>
      <link>https://community.netapp.com/t5/Data-Protection/Fractional-Reserve-on-FlexClone/m-p/71032#M9346</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;PRE __jive_macro_name="quote" class="jive_text_macro jive_macro_quote"&gt;&lt;P&gt;Ontap 7.2.6.1 effectively broke this functionality&lt;/P&gt;&lt;/PRE&gt;&lt;P&gt;Guess what? It is not a bug, it is a feature &lt;SPAN __jive_emoticon_name="happy" __jive_macro_name="emoticon" class="jive_macro jive_emote" src="https://community.netapp.com/4.0.8/images/emoticons/happy.gif"&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;A class="jive-link-external-small" href="https://now.netapp.com/cgi-bin/bol?Type=Detail&amp;amp;Display=280845" target="_blank"&gt;https://now.netapp.com/cgi-bin/bol?Type=Detail&amp;amp;Display=280845&lt;/A&gt;&lt;SPAN&gt; :&lt;/SPAN&gt;&lt;/P&gt;&lt;PRE __jive_macro_name="quote" class="jive_text_macro jive_macro_quote"&gt;&lt;P&gt;Vol clone incorrectly allows fractional reserve to be set to 0 and&amp;nbsp; guarantee to be set to 'none' or 'file'&lt;/P&gt;&lt;/PRE&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 24 Jun 2010 18:46:19 GMT</pubDate>
      <guid>https://community.netapp.com/t5/Data-Protection/Fractional-Reserve-on-FlexClone/m-p/71032#M9346</guid>
      <dc:creator>aborzenkov</dc:creator>
      <dc:date>2010-06-24T18:46:19Z</dc:date>
    </item>
    <item>
      <title>Re: Fractional Reserve on FlexClone</title>
      <link>https://community.netapp.com/t5/Data-Protection/Fractional-Reserve-on-FlexClone/m-p/71036#M9347</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;yea, a feature that was added without any thought. See the following bug &lt;SPAN __jive_emoticon_name="wink" __jive_macro_name="emoticon" class="jive_macro jive_emote" src="https://community.netapp.com/4.0.8/images/emoticons/wink.gif"&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;A class="jive-link-external-small" href="http://now.netapp.com/NOW/cgi-bin/bol?Type=Detail&amp;amp;Display=348466" target="_blank"&gt;http://now.netapp.com/NOW/cgi-bin/bol?Type=Detail&amp;amp;Display=348466&lt;/A&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Still flawed though. How the heck can you reset the FR back to zero if there isn't enough space to first create the clones?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;What a mess. I am actively working this with the Netapp Dudes. Hopefully they will get it straight down the road.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 24 Jun 2010 19:52:10 GMT</pubDate>
      <guid>https://community.netapp.com/t5/Data-Protection/Fractional-Reserve-on-FlexClone/m-p/71036#M9347</guid>
      <dc:creator>greg_kemp</dc:creator>
      <dc:date>2010-06-24T19:52:10Z</dc:date>
    </item>
    <item>
      <title>Re: Fractional Reserve on FlexClone</title>
      <link>https://community.netapp.com/t5/Data-Protection/Fractional-Reserve-on-FlexClone/m-p/71041#M9348</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi Greg,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I've read the last posting and I've got a better understanding of your issue now. I wasn't aware of this, so thanks for flagging it.&lt;/P&gt;&lt;P&gt;I had a look at the bug you provided; it's not a serious bug, per NetApp at least:&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;A href="http://now.netapp.com/NOW/knowledge/docs/defs/boldefs.shtml#severity" target="_blank"&gt;Bug Severity&lt;/A&gt; 5 - Suggestion&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Which more or less deems it to be an RFE (request for enhancement), and the fix looks to be an upgrade of Ontap. Upgrades are never painless, I know.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Do keep us up to date on this issue.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Eric&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 24 Jun 2010 22:08:21 GMT</pubDate>
      <guid>https://community.netapp.com/t5/Data-Protection/Fractional-Reserve-on-FlexClone/m-p/71041#M9348</guid>
      <dc:creator>eric_barlier</dc:creator>
      <dc:date>2010-06-24T22:08:21Z</dc:date>
    </item>
    <item>
      <title>Re: Fractional Reserve on FlexClone</title>
      <link>https://community.netapp.com/t5/Data-Protection/Fractional-Reserve-on-FlexClone/m-p/71047#M9349</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi Eric,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Yes, you are correct. Netapp really doesn't see it as a bug. The intent of the feature change was to provide a level of protection for overwrites on the clones themselves.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I really don't understand Netapp's change here. If there are customers who want to protect their clones, why not just create the clones with the space guarantee set to volume and Fractional Reserve set to 100%?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;The point is... to some, the clones are &amp;lt;just&amp;gt; as important as the parent volumes, which is fine. Buy 2x the disks and protect them just as they would the source volumes.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Instead, they took away functionality that saved money and differentiated Netapp from the other vendors.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;greg&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Fri, 25 Jun 2010 01:19:56 GMT</pubDate>
      <guid>https://community.netapp.com/t5/Data-Protection/Fractional-Reserve-on-FlexClone/m-p/71047#M9349</guid>
      <dc:creator>greg_kemp</dc:creator>
      <dc:date>2010-06-25T01:19:56Z</dc:date>
    </item>
    <item>
      <title>Re: Fractional Reserve on FlexClone</title>
      <link>https://community.netapp.com/t5/Data-Protection/Fractional-Reserve-on-FlexClone/m-p/71052#M9350</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;It says fixed in 7.3.3. Do you know what actually has been fixed, and did you have a chance to verify it? I do not remember seeing anything explicit about it in the RN or documentation; browsing the 7.3.3 manuals now, the statement that fractional reserve cannot be changed for file- or none-guaranteed volumes &lt;STRONG&gt;did&lt;/STRONG&gt; disappear.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;As for the original problem ... well, it &lt;STRONG&gt;was&lt;/STRONG&gt; a bug, because even the 7.2.5.1 manual quite clearly stated that FR is fixed at 100% unless the volume guarantee is none.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Fri, 25 Jun 2010 03:03:19 GMT</pubDate>
      <guid>https://community.netapp.com/t5/Data-Protection/Fractional-Reserve-on-FlexClone/m-p/71052#M9350</guid>
      <dc:creator>aborzenkov</dc:creator>
      <dc:date>2010-06-25T03:03:19Z</dc:date>
    </item>
    <item>
      <title>Re: Fractional Reserve on FlexClone</title>
      <link>https://community.netapp.com/t5/Data-Protection/Fractional-Reserve-on-FlexClone/m-p/71058#M9351</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Initially you said..&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; &amp;lt;Guess what? It is not a bug, it is a feature &lt;SPAN __jive_emoticon_name="happy"&gt;&lt;/SPAN&gt;&amp;gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Now you say..&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; &amp;lt;As for original problem ... well, it &lt;STRONG&gt;was&lt;/STRONG&gt; a bug, because even in 7.2.5.1 manual quite clear stated that FR is fixed to 100% unless volume guarantee is none.&amp;gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Not sure what you are trying to convey with regard to 7.2.5.1? It works as expected.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I have not tested 7.3.3 yet. It just came out of the oven. Sticking with 7.3.2P4 for a few more weeks.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;greg&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Fri, 25 Jun 2010 17:06:15 GMT</pubDate>
      <guid>https://community.netapp.com/t5/Data-Protection/Fractional-Reserve-on-FlexClone/m-p/71058#M9351</guid>
      <dc:creator>greg_kemp</dc:creator>
      <dc:date>2010-06-25T17:06:15Z</dc:date>
    </item>
    <item>
      <title>Re: Fractional Reserve on FlexClone</title>
      <link>https://community.netapp.com/t5/Data-Protection/Fractional-Reserve-on-FlexClone/m-p/71061#M9352</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;We will be updating to 7.3.3 in August barring any showstoppers. I'll inform if the behavior has changed. We are living with the current "arrangement".&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 01 Jul 2010 18:35:56 GMT</pubDate>
      <guid>https://community.netapp.com/t5/Data-Protection/Fractional-Reserve-on-FlexClone/m-p/71061#M9352</guid>
      <dc:creator>james_f_hall</dc:creator>
      <dc:date>2010-07-01T18:35:56Z</dc:date>
    </item>
  </channel>
</rss>

