<?xml version="1.0" encoding="UTF-8"?>
<rss xmlns:content="http://purl.org/rss/1.0/modules/content/" xmlns:dc="http://purl.org/dc/elements/1.1/" xmlns:rdf="http://www.w3.org/1999/02/22-rdf-syntax-ns#" xmlns:taxo="http://purl.org/rss/1.0/modules/taxonomy/" version="2.0">
  <channel>
    <title>topic Volume at 100%. Enable to connect. in VMware Solutions Discussions</title>
    <link>https://community.netapp.com/t5/VMware-Solutions-Discussions/Volume-at-100-Enable-to-connect/m-p/36658#M3565</link>
    <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I'm new to this forum. I have a problem with a volume shared via iSCSI on my NetApp running &lt;SPAN&gt;Data ONTAP Release 7.3.3. The message that appears in the web interface is:&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt; /vol/vol_nfs_backup_01 is full (using or reserving 100% of space and 0% of inodes, using 100% of reserve).&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;The result is that when my Windows Server tries to connect to this LUN via iSCSI, the LUN goes offline. Is there a way to free space via SSH (which is enabled) or in some other manner?&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Thank you very much, and excuse my English.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Matrix1970&lt;BR /&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
    <pubDate>Thu, 05 Jun 2025 06:46:28 GMT</pubDate>
    <dc:creator>MATRIX1970</dc:creator>
    <dc:date>2025-06-05T06:46:28Z</dc:date>
    <item>
      <title>Volume at 100%. Enable to connect.</title>
      <link>https://community.netapp.com/t5/VMware-Solutions-Discussions/Volume-at-100-Enable-to-connect/m-p/36658#M3565</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I'm new to this forum. I have a problem with a volume shared via iSCSI on my NetApp running &lt;SPAN&gt;Data ONTAP Release 7.3.3. The message that appears in the web interface is:&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt; /vol/vol_nfs_backup_01 is full (using or reserving 100% of space and 0% of inodes, using 100% of reserve).&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;The result is that when my Windows Server tries to connect to this LUN via iSCSI, the LUN goes offline. Is there a way to free space via SSH (which is enabled) or in some other manner?&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Thank you very much, and excuse my English.&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN&gt;Matrix1970&lt;BR /&gt;&lt;/SPAN&gt;&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 05 Jun 2025 06:46:28 GMT</pubDate>
      <guid>https://community.netapp.com/t5/VMware-Solutions-Discussions/Volume-at-100-Enable-to-connect/m-p/36658#M3565</guid>
      <dc:creator>MATRIX1970</dc:creator>
      <dc:date>2025-06-05T06:46:28Z</dc:date>
    </item>
    <item>
      <title>Re: Volume at 100%. Enable to connect.</title>
      <link>https://community.netapp.com/t5/VMware-Solutions-Discussions/Volume-at-100-Enable-to-connect/m-p/36663#M3566</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;A volume can't be shared via iSCSI; only a LUN (which is effectively a file on a volume) can.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Please show the output of these commands on the filer:&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new,courier;"&gt;df -h vol_nfs_backup_01&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new,courier;"&gt;df -r vol_nfs_backup_01&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new,courier;"&gt;vol options vol_nfs_backup_01&lt;/SPAN&gt;&lt;/P&gt;&lt;P&gt;&lt;SPAN style="font-family: courier new,courier;"&gt;lun show -v&lt;/SPAN&gt;&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Wed, 07 Sep 2011 12:28:51 GMT</pubDate>
      <guid>https://community.netapp.com/t5/VMware-Solutions-Discussions/Volume-at-100-Enable-to-connect/m-p/36663#M3566</guid>
      <dc:creator>aborzenkov</dc:creator>
      <dc:date>2011-09-07T12:28:51Z</dc:date>
    </item>
    <item>
      <title>Re: Volume at 100%. Enable to connect.</title>
      <link>https://community.netapp.com/t5/VMware-Solutions-Discussions/Volume-at-100-Enable-to-connect/m-p/36671#M3567</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;This is probably due to a protection policy that the NetApp uses: when a volume is 100% full, a LUN has no guaranteed space to write new data to, so to avoid any potential corruption the filer takes the LUN offline, which is why you are unable to connect to it.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;You need to free up some space in the volume in order to bring this LUN online again. Either delete some old snapshots, or grow the volume.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Wed, 07 Sep 2011 12:34:29 GMT</pubDate>
      <guid>https://community.netapp.com/t5/VMware-Solutions-Discussions/Volume-at-100-Enable-to-connect/m-p/36671#M3567</guid>
      <dc:creator>chriskranz</dc:creator>
      <dc:date>2011-09-07T12:34:29Z</dc:date>
    </item>
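The snapshot-deletion route above maps onto a few Data ONTAP 7-Mode commands, which also answers the original "free space via SSH" question. This is a minimal sketch only: the snapshot name `nightly.0` and the growth amount are placeholders, and `snap delete` permanently discards that recovery point.

```
CDSNAS02> snap list vol_nfs_backup_01               # see which snapshots pin space
CDSNAS02> snap delete vol_nfs_backup_01 nightly.0   # free the blocks one snapshot holds
CDSNAS02> vol size vol_nfs_backup_01 +100g          # or grow the volume instead
CDSNAS02> lun online /vol/vol_nfs_backup_01/lun02   # bring the LUN back online
```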
    <item>
      <title>Re: Volume at 100%. Enable to connect.</title>
      <link>https://community.netapp.com/t5/VMware-Solutions-Discussions/Volume-at-100-Enable-to-connect/m-p/36680#M3568</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;thanks a lot for your help. I've deleted some snapshots and now everything is OK.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Matrix1970&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Wed, 07 Sep 2011 12:41:29 GMT</pubDate>
      <guid>https://community.netapp.com/t5/VMware-Solutions-Discussions/Volume-at-100-Enable-to-connect/m-p/36680#M3568</guid>
      <dc:creator>MATRIX1970</dc:creator>
      <dc:date>2011-09-07T12:41:29Z</dc:date>
    </item>
    <item>
      <title>Re: Volume at 100%. Enable to connect.</title>
      <link>https://community.netapp.com/t5/VMware-Solutions-Discussions/Volume-at-100-Enable-to-connect/m-p/36685#M3569</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;today I have the same problem. The error is the same: /vol/vol_nfs_backup_01 is full (using or reserving 100% of space and 0% of inodes, using 100% of reserve).&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;When I connect to the iSCSI LUN, Windows reports 325 GB of free space, but when I run a backup job from Veeam, after a few seconds the LUN goes offline. How can I resolve this problem?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Please help me.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Thanks&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Francesco&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 08 Sep 2011 07:18:59 GMT</pubDate>
      <guid>https://community.netapp.com/t5/VMware-Solutions-Discussions/Volume-at-100-Enable-to-connect/m-p/36685#M3569</guid>
      <dc:creator>MATRIX1970</dc:creator>
      <dc:date>2011-09-08T07:18:59Z</dc:date>
    </item>
    <item>
      <title>Re: Volume at 100%. Enable to connect.</title>
      <link>https://community.netapp.com/t5/VMware-Solutions-Discussions/Volume-at-100-Enable-to-connect/m-p/36690#M3570</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Please show the output of the commands I asked for earlier. It is impossible to help without knowing what’s going on.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 08 Sep 2011 07:21:01 GMT</pubDate>
      <guid>https://community.netapp.com/t5/VMware-Solutions-Discussions/Volume-at-100-Enable-to-connect/m-p/36690#M3570</guid>
      <dc:creator>aborzenkov</dc:creator>
      <dc:date>2011-09-08T07:21:01Z</dc:date>
    </item>
    <item>
      <title>Re: Volume at 100%. Enable to connect.</title>
      <link>https://community.netapp.com/t5/VMware-Solutions-Discussions/Volume-at-100-Enable-to-connect/m-p/36694#M3571</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi, today I have the same problem.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Here the output of the command you want:&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;df -h vol_nfs_backup_01&lt;/P&gt;&lt;P&gt;Filesystem&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; total&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; used&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; avail capacity&amp;nbsp; Mounted on&lt;/P&gt;&lt;P&gt;/vol/vol_nfs_backup_01/&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 1024GB&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 1009GB&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 14GB&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 99%&amp;nbsp; /vol/vol_nfs_backup_01/&lt;/P&gt;&lt;P&gt;/vol/vol_nfs_backup_01/.snapshot&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 0GB&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 37GB&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 0GB&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; ---%&amp;nbsp; /vol/vol_nfs_backup_01/.snapshot&lt;/P&gt;&lt;P&gt;CDSNAS02&amp;gt; &lt;/P&gt;&lt;P&gt;CDSNAS02&amp;gt; df -r vol_nfs_backup_01&lt;/P&gt;&lt;P&gt;Filesystem&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; kbytes&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; used&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; avail&amp;nbsp;&amp;nbsp; reserved&amp;nbsp; Mounted on&lt;/P&gt;&lt;P&gt;/vol/vol_nfs_backup_01/ 1073741824 1058228508&amp;nbsp;&amp;nbsp; 15513316&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 0&amp;nbsp; 
/vol/vol_nfs_backup_01/&lt;/P&gt;&lt;P&gt;/vol/vol_nfs_backup_01/.snapshot&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 0&amp;nbsp;&amp;nbsp; 39793772&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 0&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 0&amp;nbsp; /vol/vol_nfs_backup_01/.snapshot&lt;/P&gt;&lt;P&gt;CDSNAS02&amp;gt; &lt;/P&gt;&lt;P&gt;CDSNAS02&amp;gt; vol options vol_nfs_backup_01&lt;/P&gt;&lt;P&gt;nosnap=off, nosnapdir=off, minra=off, no_atime_update=off, nvfail=off, &lt;/P&gt;&lt;P&gt;ignore_inconsistent=off, snapmirrored=off, create_ucode=on, &lt;/P&gt;&lt;P&gt;convert_ucode=off, maxdirsize=9175, schedsnapname=ordinal, &lt;/P&gt;&lt;P&gt;fs_size_fixed=off, compression=off, guarantee=none, svo_enable=off, &lt;/P&gt;&lt;P&gt;svo_checksum=off, svo_allow_rman=off, svo_reject_errors=off, &lt;/P&gt;&lt;P&gt;no_i2p=off, fractional_reserve=0, extent=off, try_first=volume_grow, &lt;/P&gt;&lt;P&gt;read_realloc=off, snapshot_clone_dependency=off&lt;/P&gt;&lt;P&gt;CDSNAS02&amp;gt; &lt;/P&gt;&lt;P&gt;CDSNAS02&amp;gt; lun show -v&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; /vol/vol_nfs_backup_01/lun02&amp;nbsp;&amp;nbsp;&amp;nbsp; 1.0t (1099604782080) (r/w, online, mapped)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Comment: "Lun Backup Veeam"&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Serial#: P4BpFZ/Xv2KW&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Share: 
none&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Space Reservation: disabled&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Multiprotocol Type: windows&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Maps: Backup_Veeam_01=2&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; /vol/vol_vsphere_01/lun01&amp;nbsp;&amp;nbsp;&amp;nbsp; 1.0t (1099578736640) (r/w, online, mapped)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Comment: "lun vsphere"&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Serial#: P4BpFZ/ReOly&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Share: none&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Space Reservation: disabled&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Multiprotocol Type: vmware&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Maps: VSphere_ESX_01=2&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Any 
idea?&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 08 Sep 2011 07:35:46 GMT</pubDate>
      <guid>https://community.netapp.com/t5/VMware-Solutions-Discussions/Volume-at-100-Enable-to-connect/m-p/36694#M3571</guid>
      <dc:creator>MATRIX1970</dc:creator>
      <dc:date>2011-09-08T07:35:46Z</dc:date>
    </item>
    <item>
      <title>Re: Volume at 100%. Enable to connect.</title>
      <link>https://community.netapp.com/t5/VMware-Solutions-Discussions/Volume-at-100-Enable-to-connect/m-p/36698#M3573</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Well … you turned off space reservation everywhere and filled the volume to its limits, which means there is nothing Data ONTAP can do to protect you from running out of space. You are solely responsible for monitoring available space and taking steps when it runs low.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Please read TR-3483, which explains in detail how space for LUNs is managed on NetApp. In short, you must ensure that the sum of the LUN size and the possible snapshot size during the retention period does not exceed the volume size. In your case it does.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;You have to decide what is more important to you: squeezing the last byte out of the NetApp, or ensuring continuous data availability. Personally I prefer the latter ☺.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;How full is the file system in Windows? If it shows a lot of free space, you could try to run space reclamation on Windows, but there were some bugs resulting in data corruption, so I’d open a support case to verify that you won’t run into them.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;For now the only way is to remove more snapshots, but you probably need to increase the volume size anyway.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Thu, 08 Sep 2011 07:57:26 GMT</pubDate>
      <guid>https://community.netapp.com/t5/VMware-Solutions-Discussions/Volume-at-100-Enable-to-connect/m-p/36698#M3573</guid>
      <dc:creator>aborzenkov</dc:creator>
      <dc:date>2011-09-08T07:57:26Z</dc:date>
    </item>
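The sizing rule in the reply above can be checked with simple arithmetic. A sketch using this thread's figures (a 1 TB LUN in a 1024 GB volume with roughly 38 GB of snapshot usage, taken from the posted `df` output); the rule itself is the TR-3483 guidance paraphrased above:

```shell
lun_gb=1024    # LUN size: 1.0 TB
snap_gb=38     # observed snapshot usage (~37-39 GB in the df output)
vol_gb=1024    # volume size: 1024 GB

# Volume must hold the LUN plus the snapshot churn of the retention period.
need_gb=$((lun_gb + snap_gb))
echo "need ${need_gb} GB, have ${vol_gb} GB"
if [ "$need_gb" -gt "$vol_gb" ]; then
  echo "volume undersized: grow it or reduce snapshot retention"
fi
```

With these numbers the required 1062 GB exceeds the 1024 GB volume, which is exactly why the LUN keeps going offline.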
    <item>
      <title>Re: Volume at 100%. Enable to connect.</title>
      <link>https://community.netapp.com/t5/VMware-Solutions-Discussions/Volume-at-100-Enable-to-connect/m-p/36702#M3575</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;In Windows I see 325GB as free space. But NetApp show me that all is full (97%). I have deleted some snapshot, the LUN goes online, but after some write, it returns in offline state.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Under the Filer Report I see now (when the situation is "normal"):&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Filesystem&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; kbytes&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; used&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; avail capacity&amp;nbsp; Mounted on&lt;/P&gt;&lt;P&gt;/vol/vol0/&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 31457280&amp;nbsp;&amp;nbsp;&amp;nbsp; 1139556&amp;nbsp;&amp;nbsp; 30317724&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 4%&amp;nbsp; /vol/vol0/&lt;/P&gt;&lt;P&gt;/vol/vol0/.snapshot&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 0&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 168608&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 0&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; ---%&amp;nbsp; /vol/vol0/.snapshot&lt;/P&gt;&lt;P&gt;/vol/vol_vsphere_01/ 1073741824&amp;nbsp; 510370232&amp;nbsp; 563371592&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 48%&amp;nbsp; /vol/vol_vsphere_01/&lt;/P&gt;&lt;P&gt;/vol/vol_vsphere_01/.snapshot&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 0&amp;nbsp;&amp;nbsp; 43979116&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 0&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; ---%&amp;nbsp; /vol/vol_vsphere_01/.snapshot&lt;/P&gt;&lt;P&gt;/vol/vol_nfs_backup_01/ 1073741824 1042022800&amp;nbsp;&amp;nbsp; 31719024 &lt;STRONG&gt; 97%&lt;/STRONG&gt; 
/vol/vol_nfs_backup_01/&lt;/P&gt;&lt;P&gt;/vol/vol_nfs_backup_01/.snapshot&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 0&amp;nbsp;&amp;nbsp; 23588044&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 0&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; ---%&amp;nbsp; /vol/vol_nfs_backup_01/.snapshot&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Isn't it possible to normalize this situation? Where is this 97%?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Thank you&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Francesco&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Fri, 09 Sep 2011 09:38:14 GMT</pubDate>
      <guid>https://community.netapp.com/t5/VMware-Solutions-Discussions/Volume-at-100-Enable-to-connect/m-p/36702#M3575</guid>
      <dc:creator>MATRIX1970</dc:creator>
      <dc:date>2011-09-09T09:38:14Z</dc:date>
    </item>
    <item>
      <title>Re: Volume at 100%. Enable to connect.</title>
      <link>https://community.netapp.com/t5/VMware-Solutions-Discussions/Volume-at-100-Enable-to-connect/m-p/36706#M3577</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;here's the output&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;df -h vol_nfs_backup_01&lt;/P&gt;&lt;P&gt;Filesystem&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; total&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; used&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; avail capacity&amp;nbsp; Mounted on&lt;/P&gt;&lt;P&gt;/vol/vol_nfs_backup_01/&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 1024GB&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 1009GB&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 14GB&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 99%&amp;nbsp; /vol/vol_nfs_backup_01/&lt;/P&gt;&lt;P&gt;/vol/vol_nfs_backup_01/.snapshot&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 0GB&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 37GB&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 0GB&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; ---%&amp;nbsp; /vol/vol_nfs_backup_01/.snapshot&lt;/P&gt;&lt;P&gt;CDSNAS02&amp;gt; &lt;/P&gt;&lt;P&gt;CDSNAS02&amp;gt; df -r vol_nfs_backup_01&lt;/P&gt;&lt;P&gt;Filesystem&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; kbytes&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; used&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; avail&amp;nbsp;&amp;nbsp; reserved&amp;nbsp; Mounted on&lt;/P&gt;&lt;P&gt;/vol/vol_nfs_backup_01/ 1073741824 1058228508&amp;nbsp;&amp;nbsp; 15513316&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 0&amp;nbsp; /vol/vol_nfs_backup_01/&lt;/P&gt;&lt;P&gt;/vol/vol_nfs_backup_01/.snapshot&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 0&amp;nbsp;&amp;nbsp; 
39793772&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 0&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; 0&amp;nbsp; /vol/vol_nfs_backup_01/.snapshot&lt;/P&gt;&lt;P&gt;CDSNAS02&amp;gt; &lt;/P&gt;&lt;P&gt;CDSNAS02&amp;gt; vol options vol_nfs_backup_01&lt;/P&gt;&lt;P&gt;nosnap=off, nosnapdir=off, minra=off, no_atime_update=off, nvfail=off, &lt;/P&gt;&lt;P&gt;ignore_inconsistent=off, snapmirrored=off, create_ucode=on, &lt;/P&gt;&lt;P&gt;convert_ucode=off, maxdirsize=9175, schedsnapname=ordinal, &lt;/P&gt;&lt;P&gt;fs_size_fixed=off, compression=off, guarantee=none, svo_enable=off, &lt;/P&gt;&lt;P&gt;svo_checksum=off, svo_allow_rman=off, svo_reject_errors=off, &lt;/P&gt;&lt;P&gt;no_i2p=off, fractional_reserve=0, extent=off, try_first=volume_grow, &lt;/P&gt;&lt;P&gt;read_realloc=off, snapshot_clone_dependency=off&lt;/P&gt;&lt;P&gt;CDSNAS02&amp;gt; &lt;/P&gt;&lt;P&gt;CDSNAS02&amp;gt; lun show -v&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; /vol/vol_nfs_backup_01/lun02&amp;nbsp;&amp;nbsp;&amp;nbsp; 1.0t (1099604782080) (r/w, online, mapped)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Comment: "Lun Backup Veeam"&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Serial#: P4BpFZ/Xv2KW&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Share: none&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Space Reservation: 
disabled&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Multiprotocol Type: windows&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Maps: Backup_Veeam_01=2&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; /vol/vol_vsphere_01/lun01&amp;nbsp;&amp;nbsp;&amp;nbsp; 1.0t (1099578736640) (r/w, online, mapped)&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Comment: "lun vsphere"&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Serial#: P4BpFZ/ReOly&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Share: none&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Space Reservation: disabled&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Multiprotocol Type: vmware&lt;/P&gt;&lt;P&gt;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp;&amp;nbsp; Maps: VSphere_ESX_01=2&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;I've deleted all file fro Windows. Now I have 96% occupied!!! 
How can I reclaim this space?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Thanks a lot&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Francesco&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Fri, 09 Sep 2011 13:34:51 GMT</pubDate>
      <guid>https://community.netapp.com/t5/VMware-Solutions-Discussions/Volume-at-100-Enable-to-connect/m-p/36706#M3577</guid>
      <dc:creator>MATRIX1970</dc:creator>
      <dc:date>2011-09-09T13:34:51Z</dc:date>
    </item>
    <item>
      <title>Re: Volume at 100%. Enable to connect.</title>
      <link>https://community.netapp.com/t5/VMware-Solutions-Discussions/Volume-at-100-Enable-to-connect/m-p/36712#M3580</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;If you use SnapDrive, recent versions support space reclamation on Windows, i.e. unused space on the NTFS file system is returned to the NetApp to free up space in the volume.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Another possibility is deduplication, which could reduce physical space consumption.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Fri, 09 Sep 2011 13:40:34 GMT</pubDate>
      <guid>https://community.netapp.com/t5/VMware-Solutions-Discussions/Volume-at-100-Enable-to-connect/m-p/36712#M3580</guid>
      <dc:creator>aborzenkov</dc:creator>
      <dc:date>2011-09-09T13:40:34Z</dc:date>
    </item>
    <item>
      <title>Re: Volume at 100%. Enable to connect.</title>
      <link>https://community.netapp.com/t5/VMware-Solutions-Discussions/Volume-at-100-Enable-to-connect/m-p/36716#M3581</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Hi aborzenkov,&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;thank you for your response, but unfortunately I don't have SnapDrive. It's a big problem that I cannot reclaim space in a simple manner.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Fri, 09 Sep 2011 14:35:21 GMT</pubDate>
      <guid>https://community.netapp.com/t5/VMware-Solutions-Discussions/Volume-at-100-Enable-to-connect/m-p/36716#M3581</guid>
      <dc:creator>MATRIX1970</dc:creator>
      <dc:date>2011-09-09T14:35:21Z</dc:date>
    </item>
    <item>
      <title>Re: Volume at 100%. Enable to connect.</title>
      <link>https://community.netapp.com/t5/VMware-Solutions-Discussions/Volume-at-100-Enable-to-connect/m-p/36720#M3583</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;If you are not using snapshots for the volume, you do not need to reclaim any space. If you delete the Windows files, the space utilization will still show 96%, but you can fill up the LUN again with no problem at all. It's just that the NetApp itself cannot see which blocks are in use, so once a LUN has been full, it will remain full from the filer's point of view. This is only a problem if you are taking snapshots; that's why you usually need 2 to 3 times the space for a LUN using snapshots, unless you are going for thin provisioning.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Fri, 09 Sep 2011 14:43:54 GMT</pubDate>
      <guid>https://community.netapp.com/t5/VMware-Solutions-Discussions/Volume-at-100-Enable-to-connect/m-p/36720#M3583</guid>
      <dc:creator>thomas_glodde</dc:creator>
      <dc:date>2011-09-09T14:43:54Z</dc:date>
    </item>
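The block-accounting behaviour described above can be illustrated with a toy model. This is a sketch of the idea only, not real ONTAP accounting: the filer marks a block as used on first write, and a guest-side NTFS delete never tells it the block is free (that would require space reclamation, e.g. via SnapDrive, which this setup lacks).

```shell
filer_used=0                        # blocks the filer considers allocated

write_blocks() {                    # host writes new data into the LUN;
  filer_used=$((filer_used + $1))   # the filer marks these blocks used
}

delete_files() {                    # NTFS delete: a metadata change only,
  :                                 # nothing is reported back to the filer
}

write_blocks 900
delete_files 900
echo "filer still sees ${filer_used} blocks used"
```

This is why Windows can report 325 GB free while the filer reports the volume 96-97% full.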
    <item>
      <title>Re: Volume at 100%. Enable to connect.</title>
      <link>https://community.netapp.com/t5/VMware-Solutions-Discussions/Volume-at-100-Enable-to-connect/m-p/36723#M3584</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Excuse me. If I delete the LUN and then recreate it, do you think I'll have the same problem?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Thanks&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Francesco&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Fri, 09 Sep 2011 14:44:53 GMT</pubDate>
      <guid>https://community.netapp.com/t5/VMware-Solutions-Discussions/Volume-at-100-Enable-to-connect/m-p/36723#M3584</guid>
      <dc:creator>MATRIX1970</dc:creator>
      <dc:date>2011-09-09T14:44:53Z</dc:date>
    </item>
    <item>
      <title>Re: Volume at 100%. Enable to connect.</title>
      <link>https://community.netapp.com/t5/VMware-Solutions-Discussions/Volume-at-100-Enable-to-connect/m-p/36728#M3585</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;You're saying that if I disable snapshots, I can write my 1 TB without problems even if the NetApp says there's no space available? And the LUN won't go offline?&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Thank you&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Francesco&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Fri, 09 Sep 2011 14:47:49 GMT</pubDate>
      <guid>https://community.netapp.com/t5/VMware-Solutions-Discussions/Volume-at-100-Enable-to-connect/m-p/36728#M3585</guid>
      <dc:creator>MATRIX1970</dc:creator>
      <dc:date>2011-09-09T14:47:49Z</dc:date>
    </item>
    <item>
      <title>Re: Volume at 100%. Enable to connect.</title>
      <link>https://community.netapp.com/t5/VMware-Solutions-Discussions/Volume-at-100-Enable-to-connect/m-p/36732#M3586</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;You need to delete all snapshots; your LUN will still show as full, but you can write 1 TB to it again. Also make sure that the volume is at least a little bigger than the LUN, e.g. for a 1 TB LUN use a 1.1 TB volume.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;If you delete and recreate it, the "problem" (it isn't really a problem; it's working as designed) will reoccur as soon as you take snapshots.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Fri, 09 Sep 2011 14:51:34 GMT</pubDate>
      <guid>https://community.netapp.com/t5/VMware-Solutions-Discussions/Volume-at-100-Enable-to-connect/m-p/36732#M3586</guid>
      <dc:creator>thomas_glodde</dc:creator>
      <dc:date>2011-09-09T14:51:34Z</dc:date>
    </item>
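The "volume a bit bigger than the LUN" guideline above is simple arithmetic. A sketch assuming a 10% headroom figure, matching the 1 TB LUN / 1.1 TB volume example in the reply; the exact margin is a judgment call, not a fixed rule:

```shell
lun_gb=1024                       # LUN size from this thread: 1.0 TB
vol_gb=$((lun_gb * 110 / 100))    # ~10% headroom over the LUN size
echo "for a ${lun_gb} GB LUN, size the volume to at least ${vol_gb} GB"
```

Note this headroom only covers filesystem overhead; with snapshots enabled, the earlier rule (LUN size plus snapshot churn) still applies on top.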
    <item>
      <title>Re: Volume at 100%. Enable to connect.</title>
      <link>https://community.netapp.com/t5/VMware-Solutions-Discussions/Volume-at-100-Enable-to-connect/m-p/36738#M3587</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;P&gt;Thank you. I'll delete all snapshots and try again.&lt;/P&gt;&lt;P&gt;&lt;/P&gt;&lt;P&gt;Thank you&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Fri, 09 Sep 2011 14:57:47 GMT</pubDate>
      <guid>https://community.netapp.com/t5/VMware-Solutions-Discussions/Volume-at-100-Enable-to-connect/m-p/36738#M3587</guid>
      <dc:creator>MATRIX1970</dc:creator>
      <dc:date>2011-09-09T14:57:47Z</dc:date>
    </item>
    <item>
      <title>Re: Volume at 100%. Enable to connect.</title>
      <link>https://community.netapp.com/t5/VMware-Solutions-Discussions/Volume-at-100-Enable-to-connect/m-p/36744#M3588</link>
      <description>&lt;HTML&gt;&lt;HEAD&gt;&lt;/HEAD&gt;&lt;BODY&gt;&lt;PRE __jive_macro_name="quote" class="jive_text_macro jive_macro_quote"&gt;&lt;P&gt;unless you are going for thin provisioning.&lt;/P&gt;&lt;/PRE&gt;&lt;P&gt;The problem here is exactly that the OP &lt;STRONG&gt;does&lt;/STRONG&gt; use thin provisioning, to the extreme (likely without realizing it). With traditional thick provisioning, NetApp would have long since blocked snapshot creation, thus preventing the out-of-space condition.&lt;/P&gt;&lt;/BODY&gt;&lt;/HTML&gt;</description>
      <pubDate>Fri, 09 Sep 2011 15:39:30 GMT</pubDate>
      <guid>https://community.netapp.com/t5/VMware-Solutions-Discussions/Volume-at-100-Enable-to-connect/m-p/36744#M3588</guid>
      <dc:creator>aborzenkov</dc:creator>
      <dc:date>2011-09-09T15:39:30Z</dc:date>
    </item>
  </channel>
</rss>

