FAS1 Data ONTAP Release 184.108.40.206 /vol/ExchData is full (using or reserving 100% of space and 0% of inodes, using 57% of reserve).
That can't be right!
The volume contains a single LUN with only 80 GB reserved, and the LUN holds an Exchange database. The snapshots together take up only about 10 GB, so the 140 GB volume can't actually be full. Here is a small overview of the configuration:
Aggregate                total      used       avail     capacity
aggr3                    908GB      845GB      62GB      93%
aggr3/.snapshot          47GB       11GB       36GB      24%

Filesystem               total      used       avail     capacity  Mounted on
/vol/ExchData/           140GB      140GB      0GB       100%      /vol/ExchData/
/vol/ExchData/.snapshot  60GB       5GB        54GB      10%       /vol/ExchData/.snapshot

Filesystem               kbytes     used       avail     reserved  Mounted on
/vol/ExchData/           146800640  146800640  0         64863884  /vol/ExchData/
/vol/ExchData/.snapshot  62914560   5990628    56923932  0         /vol/ExchData/.snapshot
Number of Files:
Max Directory Size:
Resync SNAP Time:
Allow SVO RMAN?
SVO Reject Errors?
Minimal Read Ahead?
Fractional Reserve :
FS Size Fixed?
Update Access time?
I hope you can help me, since at the moment I cannot take any snapshots because of this error.
One of the great things NetApp does for performance is that when data is sent to the filer, it holds the data in cache until there is enough to do a full stripe write, i.e. it writes the blocks of data across the whole RAID group in one pass. As you can imagine, this is much faster for both reads and writes. The problem when your aggregate becomes too full, as yours is, is that there is no longer enough free space within the aggregate to perform full stripe writes, and performance will really suffer.
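To make the cost difference concrete, here is a minimal sketch of the I/O counts in a simplified RAID-4 model (one parity disk, read-modify-write for partial stripes). This is an illustration of the general principle, not WAFL's actual allocator, and the function name is hypothetical:

```python
def raid4_write_ios(blocks_to_write, data_disks):
    """Disk I/Os needed to write `blocks_to_write` blocks to a RAID-4
    group with `data_disks` data disks plus one parity disk
    (simplified model, not ONTAP's real behavior).

    Full stripe: write every data block plus one parity block, no reads.
    Partial stripe (read-modify-write): read the old data blocks and old
    parity, then write the new data blocks and new parity.
    """
    if blocks_to_write == data_disks:      # full-stripe write
        return data_disks + 1              # data writes + parity write
    reads = blocks_to_write + 1            # old data blocks + old parity
    writes = blocks_to_write + 1           # new data blocks + new parity
    return reads + writes

print(raid4_write_ios(8, 8))  # 9 I/Os: 8 data writes + 1 parity write
print(raid4_write_ios(2, 8))  # 6 I/Os: 3 reads + 3 writes for 2 blocks
```

Note that the partial-stripe case costs 3 I/Os per data block written versus just over 1 for the full-stripe case, which is why the filer tries hard to accumulate full stripes and why a nearly full aggregate hurts.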
It would seem to me that if you have ~60 GB reserved for the LUN's overwrites, and the LUN itself is larger than that (you mentioned 80 GB earlier), then this makes perfect sense. Don't forget that as part of taking a snapshot of a LUN, the filer reserves 100% of the used space in the LUN for overwrites (the fractional reserve). So if you have written 60 GB to an 80 GB LUN and you take a snapshot, you will be using 80 GB + 60 GB = 140 GB, plus whatever the snapshots themselves consume, which is exactly the figures you have supplied.
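The arithmetic above can be sketched as follows. The sizes come from the df output earlier in the thread; the helper function is hypothetical, purely for illustration, and not an ONTAP API:

```python
def space_consumed_gb(lun_size_gb, used_gb, fractional_reserve_pct,
                      snapshot_exists):
    """Space a space-reserved LUN consumes in its volume (simplified).

    With space reservation on, the full LUN size is always reserved.
    Once a snapshot exists, the filer additionally reserves
    fractional_reserve_pct of the *used* data for overwrites, so that
    writes to the active LUN can never fail for lack of space.
    """
    consumed = lun_size_gb
    if snapshot_exists:
        consumed += used_gb * fractional_reserve_pct / 100
    return consumed

# 80 GB LUN with ~60 GB written, fractional reserve at the default 100%:
print(space_consumed_gb(80, 60, 100, snapshot_exists=False))  # 80.0 GB
print(space_consumed_gb(80, 60, 100, snapshot_exists=True))   # 140.0 GB
```

The second call matches the 140 GB volume showing 100% full, and the ~60 GB overwrite reserve matches the 64863884 KB "reserved" column in the df -r output.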
As a side note, Brendon pointed out that you are reserving 30% of this volume for snapshots. Judging from the volume name, this is an Exchange LUN; if you are using SnapManager for Exchange (SME) and SnapDrive, it is recommended that you reduce the snapshot reserve to 0% and let SnapDrive and SME manage snapshot space for you.
Sun Oct 27 00:43:00 EEST [netapp-s1:monitor.globalStatus.nonCritical:warning]:
  /vol/vol_oraoper_oradata2 is full (using or reserving 98% of space and 0% of inodes, using 98% of reserve).
  /vol/vol_oraoper_oraarc is full (using or reserving 100% of space and 0% of inodes, using 100% of reserve).
  /vol/vol_orauivv_oradata1 is full (using or reserving 100% of space and 0% of inodes, using 100% of reserve).
  /vol/vol_orauivv_orafb is full (using or reserving 100% of space and 0% of inodes, using 100% of reserve).
Sun Oct 27 01:00:00 EEST [netapp-s1:kern.uptime.filer:info]: 1:00am up 50 days, 9:28 0 NFS ops, 0 CIFS ops, 0 HTTP ops, 523927230 FCP ops, 0 iSCSI ops