VMware Solutions Discussions
Hi,
I'm new to this forum. I have a problem with a volume shared via iSCSI on my NetApp running Data ONTAP Release 7.3.3. The message that appears on the web interface is this:
/vol/vol_nfs_backup_01 is full (using or reserving 100% of space and 0% of inodes, using 100% of reserve).
The result is that when my Windows server tries to connect to this LUN via iSCSI, the LUN goes offline. Is there a way to free space via SSH (which is enabled) or in some other manner?
Thank you very much, and please excuse my English.
Matrix1970
Solved! See the solution below.
This is probably due to a protection policy that Data ONTAP uses: when a volume is 100% full, a LUN has no guaranteed space to write new data to, so to avoid any potential corruption the filer takes the LUN offline, which is why you are unable to connect to it.
You need to free up some space in the volume in order to bring this LUN online again. Either delete some old snapshots, or grow the volume.
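For reference, a rough sketch of the 7-Mode console commands involved (the snapshot and LUN names are placeholders, and the size is just an example; adjust them to your environment):
snap list vol_nfs_backup_01                      <- see which snapshots are holding space
snap delete vol_nfs_backup_01 <snapshot_name>    <- free that space
vol size vol_nfs_backup_01 +100g                 <- or, alternatively, grow the volume
lun online /vol/vol_nfs_backup_01/<lun_name>     <- bring the LUN back online afterwards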
A volume can't be shared via iSCSI; only a LUN (which is effectively a file inside a volume) can.
Please show the output of these commands on the filer:
df -h vol_nfs_backup_01
df -r vol_nfs_backup_01
vol options vol_nfs_backup_01
lun show -v
Hi, today I have the same problem.
Here is the output of the commands you asked for:
df -h vol_nfs_backup_01
Filesystem total used avail capacity Mounted on
/vol/vol_nfs_backup_01/ 1024GB 1009GB 14GB 99% /vol/vol_nfs_backup_01/
/vol/vol_nfs_backup_01/.snapshot 0GB 37GB 0GB ---% /vol/vol_nfs_backup_01/.snapshot
CDSNAS02>
CDSNAS02> df -r vol_nfs_backup_01
Filesystem kbytes used avail reserved Mounted on
/vol/vol_nfs_backup_01/ 1073741824 1058228508 15513316 0 /vol/vol_nfs_backup_01/
/vol/vol_nfs_backup_01/.snapshot 0 39793772 0 0 /vol/vol_nfs_backup_01/.snapshot
CDSNAS02>
CDSNAS02> vol options vol_nfs_backup_01
nosnap=off, nosnapdir=off, minra=off, no_atime_update=off, nvfail=off,
ignore_inconsistent=off, snapmirrored=off, create_ucode=on,
convert_ucode=off, maxdirsize=9175, schedsnapname=ordinal,
fs_size_fixed=off, compression=off, guarantee=none, svo_enable=off,
svo_checksum=off, svo_allow_rman=off, svo_reject_errors=off,
no_i2p=off, fractional_reserve=0, extent=off, try_first=volume_grow,
read_realloc=off, snapshot_clone_dependency=off
CDSNAS02>
CDSNAS02> lun show -v
/vol/vol_nfs_backup_01/lun02 1.0t (1099604782080) (r/w, online, mapped)
Comment: "Lun Backup Veeam"
Serial#: P4BpFZ/Xv2KW
Share: none
Space Reservation: disabled
Multiprotocol Type: windows
Maps: Backup_Veeam_01=2
/vol/vol_vsphere_01/lun01 1.0t (1099578736640) (r/w, online, mapped)
Comment: "lun vsphere"
Serial#: P4BpFZ/ReOly
Share: none
Space Reservation: disabled
Multiprotocol Type: vmware
Maps: VSphere_ESX_01=2
Any ideas?
Well … you turned off space reservation everywhere and filled the volume to its limit, which means there is nothing Data ONTAP can do to protect you from running out of space. You are solely responsible for monitoring available space and taking action when it becomes low.
Please read TR-3483, which explains in detail how space for LUNs is managed on NetApp. In short, you must ensure that the sum of the LUN size and the space snapshots can consume during the retention period does not exceed the volume size. In your case it does.
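Just to put numbers on it, using the df -h output above as a rough illustration:
1009 GB (active LUN data) + 37 GB (blocks held by snapshots) ≈ 1046 GB, which is already more than the 1024 GB volume.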
You have to decide what is more important to you – squeezing the last byte out of the NetApp or ensuring continuous data availability. Personally, I prefer the latter ☺.
How full is the file system in Windows? If it shows a lot of free space, you could try to run space reclamation from Windows, but there have been some bugs resulting in data corruption, so I'd open a support case to verify that you won't run into them.
For now the only option is to remove more snapshots, but you probably need to increase the volume size anyway.
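If you do grow the volume, one possible sketch (sizes here are only examples, not a sizing recommendation) is to let Data ONTAP react automatically before it fills up again; note that try_first=volume_grow in your vol options output controls which of these is tried first:
vol autosize vol_nfs_backup_01 -m 1300g -i 50g on   <- grow the volume automatically, up to 1300 GB in 50 GB steps
snap autodelete vol_nfs_backup_01 on                <- delete old snapshots automatically when space runs low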
In Windows I see 325 GB of free space, but the NetApp tells me it is almost full (97%). I deleted some snapshots and the LUN came back online, but after some writes it goes offline again.
In the filer report I now see this (when the situation is "normal"):
Filesystem kbytes used avail capacity Mounted on
/vol/vol0/ 31457280 1139556 30317724 4% /vol/vol0/
/vol/vol0/.snapshot 0 168608 0 ---% /vol/vol0/.snapshot
/vol/vol_vsphere_01/ 1073741824 510370232 563371592 48% /vol/vol_vsphere_01/
/vol/vol_vsphere_01/.snapshot 0 43979116 0 ---% /vol/vol_vsphere_01/.snapshot
/vol/vol_nfs_backup_01/ 1073741824 1042022800 31719024 97% /vol/vol_nfs_backup_01/
/vol/vol_nfs_backup_01/.snapshot 0 23588044 0 ---% /vol/vol_nfs_backup_01/.snapshot
Isn't it possible to normalize this situation? Where does this 97% come from?
Thank you
Francesco
Hi,
thanks a lot for your help. I've deleted some snapshots and now it's all OK.
Matrix1970
Hi,
today I have the same problem. The error is the same: /vol/vol_nfs_backup_01 is full (using or reserving 100% of space and 0% of inodes, using 100% of reserve).
If I connect to the iSCSI LUN, the free space shown is 325 GB, but when I run a backup job from Veeam, the LUN goes offline after a few seconds. How can I resolve this problem?
Please help me.
Thanks
Francesco
Please show the output of the commands I asked you for before. It is impossible to help without knowing what's going on.
Hi,
here's the output
df -h vol_nfs_backup_01
Filesystem total used avail capacity Mounted on
/vol/vol_nfs_backup_01/ 1024GB 1009GB 14GB 99% /vol/vol_nfs_backup_01/
/vol/vol_nfs_backup_01/.snapshot 0GB 37GB 0GB ---% /vol/vol_nfs_backup_01/.snapshot
CDSNAS02>
CDSNAS02> df -r vol_nfs_backup_01
Filesystem kbytes used avail reserved Mounted on
/vol/vol_nfs_backup_01/ 1073741824 1058228508 15513316 0 /vol/vol_nfs_backup_01/
/vol/vol_nfs_backup_01/.snapshot 0 39793772 0 0 /vol/vol_nfs_backup_01/.snapshot
CDSNAS02>
CDSNAS02> vol options vol_nfs_backup_01
nosnap=off, nosnapdir=off, minra=off, no_atime_update=off, nvfail=off,
ignore_inconsistent=off, snapmirrored=off, create_ucode=on,
convert_ucode=off, maxdirsize=9175, schedsnapname=ordinal,
fs_size_fixed=off, compression=off, guarantee=none, svo_enable=off,
svo_checksum=off, svo_allow_rman=off, svo_reject_errors=off,
no_i2p=off, fractional_reserve=0, extent=off, try_first=volume_grow,
read_realloc=off, snapshot_clone_dependency=off
CDSNAS02>
CDSNAS02> lun show -v
/vol/vol_nfs_backup_01/lun02 1.0t (1099604782080) (r/w, online, mapped)
Comment: "Lun Backup Veeam"
Serial#: P4BpFZ/Xv2KW
Share: none
Space Reservation: disabled
Multiprotocol Type: windows
Maps: Backup_Veeam_01=2
/vol/vol_vsphere_01/lun01 1.0t (1099578736640) (r/w, online, mapped)
Comment: "lun vsphere"
Serial#: P4BpFZ/ReOly
Share: none
Space Reservation: disabled
Multiprotocol Type: vmware
Maps: VSphere_ESX_01=2
I've deleted all the files from Windows. Now I still have 96% occupied!!! How can I reclaim this space?
Thx a lot
Francesco
If you use SnapDrive, recent versions support space reclamation on Windows – i.e. unused space on the NTFS file system is returned to the NetApp to free up space in the volume.
Another possibility is deduplication, which could reduce physical space consumption.
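A minimal sketch of what enabling deduplication on that volume would look like on a 7-Mode filer (run the initial scan outside your backup window):
sis on /vol/vol_nfs_backup_01           <- enable deduplication on the volume
sis start -s /vol/vol_nfs_backup_01     <- scan and deduplicate the data already on it
sis status /vol/vol_nfs_backup_01       <- check progress
df -s vol_nfs_backup_01                 <- show how much space was saved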
Hi aborzenkov,
thank you for your response, but unfortunately I don't have SnapDrive. It's a big problem that I cannot reclaim space in a simple manner.
If you are not using snapshots for the volume, you do not need to reclaim any space. If you delete the Windows files, the space utilization will still show 96%, but you can fill up the LUN again with no problem at all, as long as you are NOT using snapshots. The NetApp itself cannot see which blocks inside the LUN are in use, so once a LUN has been written full, it will stay "full" from the filer's point of view. This is only a problem if you are taking snapshots; that's why you usually need 2-3 times the LUN size in volume space when using snapshots, unless you are going for thin provisioning.
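As a rough illustration of that rule of thumb (hypothetical numbers): with a 1 TB LUN, if most of its blocks get rewritten during the snapshot retention period, the snapshots can pin close to another full LUN's worth of old blocks, so:
1 TB (live LUN data) + up to ~1 TB (old blocks held by snapshots) ≈ 2 TB of volume space needed.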
You're saying that if I disable snapshots, I can write my 1 TB without problems even if the NetApp says there's no space available? And the LUN won't go offline?
Thank you
Francesco
You need to delete all snapshots; your LUN will still look full, but you can write 1 TB to it again. Also make sure that the volume is at least a little bit bigger than the LUN, e.g. for a 1 TB LUN use a 1.1 TB volume (see the command sketch after this post).
If you delete and recreate the LUN, then as soon as you take snapshots again, the "problem" (it isn't really a problem, it's working as designed) will reoccur.
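A minimal sketch of those steps in the 7-Mode console, assuming you really want no snapshots at all on this volume (the new size is just an example):
snap sched vol_nfs_backup_01 0 0 0     <- stop scheduled snapshots from being created
snap reserve vol_nfs_backup_01 0       <- set aside no space for snapshots
snap delete -a vol_nfs_backup_01       <- delete all existing snapshots
vol size vol_nfs_backup_01 1100g       <- keep the volume a bit bigger than the 1 TB LUN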
Thank you. I'll delete all snapshots and try again.
Thank you
unless you are going for thin provisioning.
The problem here is exactly that the OP does use thin provisioning, taken to the extreme (likely without realizing it). With traditional thick provisioning, NetApp would have blocked snapshot creation long before this, preventing the out-of-space condition.
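For comparison, a rough sketch of what a traditionally thick-provisioned setup for this volume would look like in 7-Mode (illustrative only; the volume would also have to be sized well above the LUN for these reservations to fit):
vol options vol_nfs_backup_01 guarantee volume           <- reserve the full volume size in the aggregate
vol options vol_nfs_backup_01 fractional_reserve 100     <- reserve space for overwriting snapshotted LUN blocks
lun set reservation /vol/vol_nfs_backup_01/lun02 enable  <- reserve space in the volume for the full LUN size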
Excuse me, if I delete the LUN and then recreate it, do you think I'll have the same problem?
Thanks
Francesco