I am getting this error message:

Filer: FAS1
Version: Data ONTAP Release 7.2.5.1
Status: /vol/ExchData is full (using or reserving 100% of space and 0% of inodes, using 57% of reserve).

That cannot be correct!
There is one LUN on the volume, with only 80 GB reserved, and the LUN holds an Exchange database. All snapshots together take only around 10 GB, so the 140 GB cannot be full yet. Here is a small overview of the configuration:
Aggregate total used avail capacity
aggr3 908GB 845GB 62GB 93%
aggr3/.snapshot 47GB 11GB 36GB 24%
Filesystem total used avail capacity Mounted on
/vol/ExchData/ 140GB 140GB 0GB 100% /vol/ExchData/
/vol/ExchData/.snapshot 60GB 5GB 54GB 10% /vol/ExchData/.snapshot
Filesystem kbytes used avail reserved Mounted on
/vol/ExchData/ 146800640 146800640 0 64863884 /vol/ExchData/
/vol/ExchData/.snapshot 62914560 5990628 56923932 0 /vol/ExchData/.snapshot
Name:                ExData
Type:                Flexible         Root Volume?           -
FlexClone?           -                Containing Aggregate:  aggr0
Status:              online,raid4
Used Capacity:       140 GB           Space Guarantee:       volume
% Used:              100%             Language:              de
Total Capacity:      140 GB           Total Size:            200 GB
Number of Files:     109              Max Directory Size:    8.96 MB
Max Files:           6.92 m
SnapMirror?          -                Snap Directory?        enable
Snap?                -                Resync Snap Time:      60
SVO Enabled?         -                SVO Checksum?          -
Allow SVO RMAN?      -                SVO Reject Errors?     -
Create Unicode?      enable           Convert Unicode?       enable
Minimal Read Ahead?  -                NV Fail?               -
Fractional Reserve:  100              Extent?                -
FS Size Fixed?       -                Update Access Time?    enable
I2P?                 enable           Ignore Inconsistent?   -
I hope you can help me, since at the moment I cannot take any snapshots because of this error.
Thanks!
18 Replies
Welcome to the forum; this is a common question.
I think you have the snap reserve set to 30% on your volume. You can see this on the filer from the CLI with the command df -r -h ExchData.
Have a look at this document for a description of what is going on with LUNs and volumes.
http://media.netapp.com/documents/tr-3483.pdf
You have another problem in that your aggregate is 93% full, and I think this will become a performance issue soon if it is not one already. I do not like to run above 85% on an aggregate.
Bren
---
Sorry, I forgot:
snap reserve ExchData
will show the percentage.
You can claim space back with:
snap reserve ExchData 20
This will set it to 20% and return space to the volume. As you have only used 10% of your reserve, you will be OK to do this now and get snapshots working again.
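To see why this frees space, here is a small sketch of the snap reserve arithmetic in Python (the numbers are the round figures from the df output above, for illustration only):

```python
# Sketch: how snap reserve splits a volume (round numbers from this thread).
TOTAL_SIZE_GB = 200  # "Total Size" reported for the ExchData volume

def active_fs_gb(total_gb, snap_reserve_pct):
    """Space left for the active filesystem after snap reserve is set aside."""
    return total_gb * (100 - snap_reserve_pct) / 100

print(active_fs_gb(TOTAL_SIZE_GB, 30))  # 140.0 -> the 140 GB volume that is "full"
print(active_fs_gb(TOTAL_SIZE_GB, 20))  # 160.0 -> dropping to 20% returns 20 GB
```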
Bren
---
Why should performance drop starting from 85%?
---
The system will be hunting for free blocks and will not be able to stripe the write across the full RAID group. You can see this if you run a statit report: look in the RAID section at blocks written versus the RAID group size.
---
Could it be because of the snapshot mirroring, which actually always needs double the used space?
---
Yes, Brendon is right; just to expand:
One of the great things NetApp does for you in terms of performance is that when data is sent to the filer, it holds the data in cache until there is enough to do a full stripe write, meaning it stripes the blocks of data across your RAID group. As you can imagine, this is much faster to read and write. The problem when your aggregate becomes too full, as yours is, is that there is not enough free space within the aggregate to perform a full stripe write, and therefore performance will really suffer.
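To illustrate why partial writes cost more, here is a toy model of RAID 4 write overhead in Python (made-up disk counts, not ONTAP internals):

```python
# Toy model: disk operations per stripe write in a RAID 4 group.
# A full stripe write computes parity from the new data alone; a partial
# write must read the old data and old parity first (read-modify-write).

DATA_DISKS = 7  # hypothetical RAID group: 7 data disks + 1 parity disk

def disk_ops(blocks_to_write):
    """Rough I/O count for writing `blocks_to_write` blocks of one stripe."""
    if blocks_to_write == DATA_DISKS:
        # Full stripe: write every data disk plus parity, no reads needed.
        return DATA_DISKS + 1
    # Partial stripe: read old data + old parity, write new data + new parity.
    return 2 * blocks_to_write + 2

print(disk_ops(7))  # 8 ops for 7 blocks -> ~1.1 ops per block (full stripe)
print(disk_ops(2))  # 6 ops for 2 blocks -> 3.0 ops per block (partial stripe)
```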
Hope that makes sense
---
How big is the LUN in the volume?
It would seem to me that if you have ~60 GB reserved for the LUN, then your LUN is bigger than that (you mentioned 80 GB somewhere), so this would make perfect sense. Don't forget that as part of the snapshot process for a LUN, the filer will reserve 100% of the used space in the LUN for overwrites. So if you have written 60 GB to an 80 GB LUN and you take a snapshot, you will be using 140 GB plus whatever the snapshots take, which matches exactly the figures you have supplied.
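Here is a small Python sketch of that accounting, using the round numbers from this reply (illustrative, not exact filer output):

```python
# Sketch of space accounting for a space-reserved LUN with fractional
# reserve (round numbers from this thread, for illustration).

def volume_used_gb(lun_size_gb, written_gb, fractional_reserve_pct, has_snapshot):
    """Space the volume charges for the LUN."""
    used = lun_size_gb  # a space-reserved LUN always charges its full size
    if has_snapshot:
        # Once a snapshot locks the current blocks, the filer also reserves
        # fractional_reserve % of the written data for overwrites.
        used += written_gb * fractional_reserve_pct / 100
    return used

print(volume_used_gb(80, 60, 100, False))  # 80.0  -> before any snapshot
print(volume_used_gb(80, 60, 100, True))   # 140.0 -> snapshot fills the 140 GB volume
```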
I wrote a blog article that might help you understand this concept - http://communities.netapp.com/groups/chris-kranz-hardware-pro/blog/2009/03/05/fractional-reservation--lun-overwrite
As a side note, Brendon pointed out that you are reserving 30% for snapshots in this volume, but I am guessing from the volume name this is an Exchange LUN. If you are using SnapManager for Exchange and SnapDrive, it is recommended you reduce the snapshot reserve to 0% and use SnapDrive and SME to manage this for you.
---
I cannot see your link, and I have not understood why the occupied space is doubled.
Please share the link.
Thank you,
Mattia
---
Very similar situation:
vol create vol_orauivv_oradata1 aggr1 5120g
snap reserve vol_orauivv_oradata1 0
vol options vol_orauivv_oradata1 nosnap on
lun create -s 5099g -t linux /vol/vol_orauivv_oradata1/lun_orauivv_oradata1
After a while, the LUN goes offline.
/vol/vol_orauivv_oradata1/lun_orauivv_oradata1 5.0t (5475009560576) (r/w, offline, mapped)
Sun Oct 27 00:43:00 EEST [netapp-s1:monitor.globalStatus.nonCritical:warning]: /vol/vol_oraoper_oradata2 is full (using or reserving 98% of space and 0% of inodes, using 98% of reserve). /vol/vol_oraoper_oraarc is full (using or reserving 100% of space and 0% of inodes, using 100% of reserve). /vol/vol_orauivv_oradata1 is full (using or reserving 100% of space and 0% of inodes, using 100% of reserve). /vol/vol_orauivv_orafb is full (using or reserving 100% of space and 0% of inodes, using 100% of reserve).
Sun Oct 27 01:00:00 EEST [netapp-s1:kern.uptime.filer:info]: 1:00am up 50 days, 9:28 0 NFS ops, 0 CIFS ops, 0 HTTP ops, 523927230 FCP ops, 0 iSCSI ops
What can it be?
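For context on those sizes: the LUN leaves almost no headroom in the volume (a quick check in Python, using the figures from the commands above):

```python
# Headroom check for the volume/LUN sizes in the commands above.
VOL_GB = 5120   # vol create ... 5120g
LUN_GB = 5099   # lun create -s 5099g ...

free_gb = VOL_GB - LUN_GB
print(free_gb)                           # 21 GB of slack
print(round(100 * free_gb / VOL_GB, 2))  # ~0.41% free - almost no headroom
```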
---
a) Snapshots.
b) Intensive overwrites, where the filer does not have time to free overwritten blocks fast enough.
c) Thin-provisioned volumes, so there is not enough space in the aggregate.
---
a) vol options vol_orauivv_oradata1 nosnap on
b) how to prevent this?
c) should I turn on snap reserve?
d) Can it be because of a low quantity of inodes?
e) Can deduplication have an impact?
---
a) vol options vol_orauivv_oradata1 nosnap on
This does not mean there are no snapshots; it just stops Data ONTAP from creating scheduled snapshots.
b) how to prevent this?
You need to find out what "this" is first.
c) should I turn on snap reserve?
It makes no difference for a LUN. You could also just create a smaller LUN.
d) Can it be because of a low quantity of inodes?
No.
e) Can deduplication have an impact?
Yes. It seems to create snapshots internally.
Whatever you do, filling a volume to 99.99% is asking for trouble.
---
> You need to find out what "this" is first.
>> b) how to prevent this?
>>> b) Intensive overwrites, where the filer does not have time to free overwritten blocks fast enough.
Do I just need to reserve free space to prevent the intensive overwrites? How much should I reserve; is 2% of the volume enough (best practice)?
If no snapshots were taken manually, the most probable cause is dedup, isn't it?
---
The only way to prevent this is to leave enough headroom (free space) for your workload.
---
How much should I reserve; is 4% of the volume enough (best practice)?
---
Best practice is 20%.
---
Can you please provide the name of the document? Thanks!
---
I found only this:
215-07979_A0, Data ONTAP 8.2 Storage Efficiency Management Guide For 7-Mode, page 26.
It says there is a 4% overhead in the volume:
"In a volume, deduplication metadata can occupy up to 4 percent of the total amount of data contained within the volume."
