I guess the volume got formatted on the Windows end, which is why it shows me 65GB used. Please correct me if I'm wrong.
I have set the volume option fractional_reserve to 0 and have snapshots enabled. The volume is also being SnapMirrored to a DR filer.
1) When a user deletes a few GB of data and adds fresh data, all the deleted data goes into the snapshot, the volume fills up, and the LUN is taken offline.
Wed Apr 24 11:33:03 IST [filer0011: scsitarget.lun.noSpace:error]: LUN '/vol/exch_index_1/exch_index_1' has run out of space.
Wed Apr 24 11:33:03 IST [filer0011: lun.offline:warning]: LUN /vol/exch_index_1/exch_index_1 has been taken offline
2) To avoid this I disabled snapshot generation and broke the SnapMirror relationship with the DR volume. Now, when the user starts dumping fresh data again, it starts eating up snapshot space (the SnapMirror snapshot starts growing), and once the available space in the volume is consumed by the snapshot, the volume gets full and the LUN goes offline again.
Below is the volume status from when only the SnapMirror snapshot existed and the user was dumping fresh data.
You need to look at the snapshot reserve at the volume level. From the above: take your 96GB volume and put a thick-provisioned 65GB LUN within it, and that leaves 31GB of free space in the volume. This will be used by snapshots if no reserve is set. From the above I can see that you have one snapshot of 15GB, which would bring the total to 80GB. However, check the snapshot reserve and let us know what it is, as that percentage takes its space from the total volume size.
fractional_reserve is already set to 0 and the snap reserve is 20%. Someone recommended that setting fractional_reserve to 0 alone will not help and that I also need to set the volume guarantee to none. I have set that.
New data doesn't 'go into' the snapshot; it is written to the active volume. It's the old blocks, preserved by the snapshot, that consume snapshot space.
The volume is a space which contains blocks. In your case, you have a LUN in there, which contains data presented to a server via iSCSI. Since you have snapshots enabled on this volume, it will periodically snapshot (stop changing blocks which already exist on the volume) the volume, and record any changed blocks in a new area. This isn't literally how it works, but you can think of it like that. This means that any changed blocks count towards the snapshot space, not just deleted blocks. So when your user deletes 20GB of data, then writes 20GB of new data, this creates a 40GB snapshot, not a 20GB snapshot.
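The arithmetic above can be sketched as a toy model (illustrative numbers only, not ONTAP's actual block accounting):

```shell
#!/bin/sh
# Toy model of snapshot growth: deleted blocks and overwritten blocks
# are both preserved by the snapshot once one exists.
DELETED_GB=20       # blocks freed by the user's delete
REWRITTEN_GB=20     # old blocks overwritten by the fresh data
SNAP_GROWTH=$((DELETED_GB + REWRITTEN_GB))
echo "Snapshot grows by roughly ${SNAP_GROWTH}GB"
```

The point is that the change rate, not just the delete volume, drives snapshot growth.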
Your LUN is taken offline to protect the data, as the volume has nowhere left to write new blocks.
To prevent this, you have a few options:
Stop creating snapshots (no good if you want to have snapshots)
Increase the overall volume size
Keep fewer snapshots (you can either change the retention schedule, or configure snapshot auto deletion)
Decrease the rate of change in the volume (no good if you want to keep using your volume normally).
Please do a 'snap list' to check if there are any older snapshots consuming space inside your volume.
Also, what is your snapshot schedule?
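For reference, both checks look something like this (assuming Data ONTAP 7-Mode syntax and the volume name exch_index_1 from the log messages above):

```shell
# List existing snapshots with their size and age
snap list exch_index_1

# Show the current snapshot schedule for the volume
snap sched exch_index_1
```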
volume guarantee=none only thin provisions your volume and has no effect on its configured size. It will use less space in the aggregate (maybe), but won't actually be any larger/smaller as far as your LUN is concerned.
I would suggest:
Check the snapshot schedule isn't keeping too many snaps.
Work out how much snapshot space you'll need: multiply the size of your snapshots by the number you want to retain. E.g., if your snapshots are 15GB and you want to keep 6 of them, you'll need about 90GB of space to do this.
Then you'll have to size your volume to make sure that you have enough space to contain the LUN, the total snapshot size you want to keep, and then some free space.
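As a rough sizing sketch (using the 65GB LUN and 15GB snapshot figures from this thread; the headroom value is just an example):

```shell
#!/bin/sh
# Rough volume sizing: LUN + retained snapshots + free-space headroom
LUN_GB=65
SNAP_GB=15
RETAIN=6
HEADROOM_GB=20                      # buffer; pick your own value
SNAP_TOTAL=$((SNAP_GB * RETAIN))    # total snapshot space to retain
VOL_GB=$((LUN_GB + SNAP_TOTAL + HEADROOM_GB))
echo "Size the volume to at least ${VOL_GB}GB"
```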
The Snapshot reserve is an area for use by snapshots only, however they will use space outside of this reserve as required. Given your LUN can't grow by itself, you shouldn't need to configure this option here, and you can set it later once you know how large your snaps will be. The setting isn't necessary for LUN volumes.
You can enable quotas on the volume/qtree, just don't set limits. There's really no point. The LUN size limits LUN data. A quota won't limit any other per user data because this isn't a CIFS or NFS exported filesystem.
You really need to read the documentation on SAN/block data administration.
The issue isn't that the 'customers dump data' to the LUN, it's that the blocks in the LUN change and get moved from the active file system to the snapshot that exists due to the SnapMirror relationship. This is normal behavior and reflects the data change rate going on in the LUN. There are a couple of options available that can help mitigate this issue:
1) Volume autogrow - grows the underlying FlexVol when a certain usage % threshold is reached; the threshold is higher for larger FlexVol sizes.
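For example (7-Mode syntax assumed; the maximum and increment sizes here are illustrative):

```shell
# Grow the FlexVol in 5g increments, up to a 200g cap,
# when the usage threshold is crossed
vol autosize exch_index_1 -m 200g -i 5g on
```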
2) Snap autodelete - deletes snapshots on an as-needed basis using a variety of trigger conditions, thresholds, and delete options (run 'snap autodelete [volname]' for more options). The catch here is that the SnapMirror snapshot is locked, so this will not work as long as the SnapMirror relationship exists.
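A typical setup might look like this (7-Mode syntax assumed; remember the locked SnapMirror snapshot won't be deleted while the relationship exists):

```shell
# Delete oldest snapshots first when the volume nears full
snap autodelete exch_index_1 trigger volume
snap autodelete exch_index_1 delete_order oldest_first
snap autodelete exch_index_1 on
```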
3) Thin provisioning - Set the FlexVol to some very large value. (I typically set my FlexVols to twice the size of the configured LUNs, with fractional reserve set to 0.) As this is thin provisioned, I have not hard-allocated the space in the aggregate. Now, the danger of thin provisioning is that you are promising space that either a) you hope never gets fully used, or b) you can procure and/or free up capacity to add to the aggregate at a later date.
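Applied to this volume, that might look like (7-Mode syntax assumed; the 130g figure is just twice the 65GB LUN):

```shell
# No space guaranteed in the aggregate; no fractional reserve
vol options exch_index_1 guarantee none
vol options exch_index_1 fractional_reserve 0
# Size the volume generously relative to the LUN
vol size exch_index_1 130g
```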
4) Deduplication - removes duplicate blocks in the FlexVol and updates the inode table to point to the single unique data block. This saves space and on average will yield about a 20-25% reduction in data on the FlexVol. Some data, like large image files and encrypted data, doesn't dedupe much if at all. Some data, like virtual machine OS disks, dedupes very, very well.
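Enabling it is straightforward (7-Mode syntax assumed):

```shell
# Enable dedupe on the volume, scan existing data, then check savings
sis on /vol/exch_index_1
sis start -s /vol/exch_index_1
df -s exch_index_1
```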