2009-03-17 06:38 AM
Data ONTAP Release 220.127.116.11
/vol/ExchData is full (using or reserving 100% of space and 0% of inodes, using 57% of reserve).
FlexClone?            -          Containing Aggregate:   aggr0
Used Capacity:        140 GB     Space Guarantee:        volume
Total Capacity:       140 GB     Total Size:             200 GB
Number of Files:      109        Max Directory Size:     8.96 MB
Max Files:            6.92 m
SNAP Mirror?          -          SNAP Directory?         enable
SNAP?                 -          Resync SNAP Time:       60
SVO Enable?           -          SVO Checksum?           -
Allow SVO RMAN?       -          SVO Reject Errors?      -
Create Unicode?       enable     Convert Unicode?        enable
Minimal Read Ahead?   -          NV Fail?                -
Fractional Reserve:   100        Extent?                 -
FS Size Fixed?        -          Update Access Time?     enable
2009-03-17 07:27 AM
Welcome to the forum; this is a common question.
I think you have snap reserve set to 30% on your volume. You can see this on the filer from the CLI with the command df -r -h ExchData.
Have a look at this document for a description of what is going on with LUNs and volumes.
You have another problem in that your aggregate is 93% full, and I think this will become a performance issue soon, if it is not one already. I do not like to run above 85% on an aggregate.
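If you want to confirm both points from the CLI before changing anything, these checks are read-only (a quick sketch, assuming the volume is still called ExchData and sits in aggr0 as the status page shows):
snap reserve ExchData
df -r -h ExchData
df -A -h aggr0
The first shows the current snapshot reserve percentage, the second shows used, reserved and available space in the volume, and the third shows how full the containing aggregate is.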
2009-03-17 07:30 AM
snap reserve ExchData
will show the percentage
you can claim space back with
snap reserve ExchData 20
This will set it to 20% and return space to the volume. As you have only used 10% of your reserve, you will be OK to do this now and get snapshots working again.
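Once you have lowered the reserve, you can confirm the space came back and that the snapshot schedule is still in place (a sketch, same volume name assumed):
df -h ExchData
snap sched ExchData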
2009-03-17 09:48 AM
The system will be hunting for free blocks from the white list and will not be able to stripe the write against the full raid group. You can see this if you run a statit report: look in the RAID section for blocks written against the raid group size.
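If you have not used statit before, it has to be started and stopped to collect a sample (a sketch; in 7-Mode it lives under the advanced privilege level):
priv set advanced
statit -b
statit -e
priv set admin
statit -b begins collection and statit -e ends it and prints the report; let a few minutes of normal workload run between the two so the RAID section has useful numbers.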
2009-03-18 01:29 AM
Yes, Brendon is right, and just to expand.
One of the great things that NetApp does for you in terms of performance is that when data is sent to the filer, it will hold the data in cache until there is enough to do a full stripe write, meaning it writes the blocks of data across your whole RAID group at once. As you can imagine, this is much faster to read and write. The problem when your aggregate becomes too full, as yours is, is that there is not enough free space within the aggregate to perform a full stripe write, and performance will really suffer.
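If you want to see exactly where the aggregate space has gone, aggr show_space gives a per-volume breakdown including the WAFL and snapshot reserves (a sketch, assuming the aggregate is still aggr0):
aggr show_space -h aggr0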
Hope that makes sense
2009-03-18 07:41 AM
How big is the LUN in the volume?
It would seem to me that if you have ~60g reserved for the LUN, then your LUN is bigger than that (you mentioned 80g somewhere), so this would make perfect sense. Don't forget that as part of the snapshot process for a LUN, the filer will reserve 100% of the used space in the LUN for overwrites. So if you have written 60g to an 80g LUN and you take a snapshot, you will be using 140g plus whatever the snapshots themselves consume, which matches the figures you have supplied.
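To put rough numbers on that (a worked example using the figures from this thread, assuming an 80g LUN with 60g written and fractional reserve left at its default of 100%):
LUN space reservation:   80g   (the full LUN size is reserved in the volume)
Overwrite reserve:       60g   (100% of the data written inside the LUN, once a snapshot exists)
Total used/reserved:    140g   plus the space the snapshots themselves consume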
I wrote a blog article that might help you understand this concept - http://communities.netapp.com/groups/chris-kranz-hardware-pro/blog/2009/03/05/fractional-reservation--lun-overwrite
As a side note, Brendon pointed out that you are reserving 30% for snapshots in this volume, but I am guessing from the volume name that this is an Exchange LUN. If you are using SnapManager for Exchange and SnapDrive, it is recommended to reduce the snapshot reserve to 0% and let SnapDrive and SME manage this for you.
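If you do go the SnapDrive/SME route, dropping the reserve and the default schedule is just (a sketch, same volume name assumed):
snap reserve ExchData 0
snap sched ExchData 0 0 0
SnapManager then takes its own application-consistent snapshots, so the built-in schedule and reserve are not needed on that volume.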
2013-10-28 05:19 AM
Very similar situation:
vol create vol_orauivv_oradata1 aggr1 5120g
snap reserve vol_orauivv_oradata1 0
vol options vol_orauivv_oradata1 nosnap on
lun create -s 5099g -t linux /vol/vol_orauivv_oradata1/lun_orauivv_oradata1
After a while the LUN goes offline.
/vol/vol_orauivv_oradata1/lun_orauivv_oradata1 5.0t (5475009560576) (r/w, offline, mapped)
Sun Oct 27 00:43:00 EEST [netapp-s1:monitor.globalStatus.nonCritical:warning]: /vol/vol_oraoper_oradata2 is full (using or reserving 98% of space and 0% of inodes, using 98% of reserve). /vol/vol_oraoper_oraarc is full (using or reserving 100% of space and 0% of inodes, using 100% of reserve). /vol/vol_orauivv_oradata1 is full (using or reserving 100% of space and 0% of inodes, using 100% of reserve). /vol/vol_orauivv_orafb is full (using or reserving 100% of space and 0% of inodes, using 100% of reserve).
Sun Oct 27 01:00:00 EEST [netapp-s1:kern.uptime.filer:info]: 1:00am up 50 days, 9:28 0 NFS ops, 0 CIFS ops, 0 HTTP ops, 523927230 FCP ops, 0 iSCSI ops
What could be causing this?
2013-10-28 05:29 AM
b) Intensive overwrites, where the filer does not have time to free overwritten blocks fast enough.
c) Thin-provisioned volumes, so there is not enough space in the aggregate.
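To narrow down which of these it is, a few read-only checks help (a sketch, assuming 7-Mode and the volume and LUN names from the post above):
df -r vol_orauivv_oradata1
vol options vol_orauivv_oradata1
df -A -h aggr1
lun show -v /vol/vol_orauivv_oradata1/lun_orauivv_oradata1
df -r shows how much space is reserved versus used, vol options shows the guarantee and fractional_reserve settings, df -A shows whether the aggregate itself is out of space, and lun show -v confirms the LUN state. Once there is free space again, the LUN can be brought back online with lun online /vol/vol_orauivv_oradata1/lun_orauivv_oradata1.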