Network and Storage Protocols

Inodes full ~ when does performance suffer?

BrendonHiggins

I have a very large CIFS volume where inode usage is at 84%

i.e. df -i volume_name

Does this affect performance, and if so, at what level of inode usage?

Thanks in advance

Brendon




3 REPLIES

z902129mf

Hey Brendon, here's my previous explanation of inodes; you may already know this.

https://forums.netapp.com/message/5566#5566

More specifically on the performance hit: NetApp suggests that inode handling is memory bound and that it will have a performance impact.

https://forums.netapp.com/message/5566#5566

But exactly what kind of hit it will be is hard to say. I did have this problem once, where a log volume was constantly being written to with tiny files. The server admins noticed that share performance degraded more and more as the volume approached, and then hit, the maxfiles limit. When I was troubleshooting the volume, it took a very long time just to open the share.
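
For reference, here is a rough sketch of the 7-Mode console commands I would use to keep an eye on this (vol_name is just a placeholder and the numbers are examples only; double-check the syntax and output on your ONTAP version):

    df -i vol_name                # inodes used/free and %iused for the volume
    maxfiles vol_name             # show the current maxfiles (inode) limit for the volume
    maxfiles vol_name 40000000    # example only: raise the limit if you are about to hit it

As far as I understand, raising maxfiles also grows the volume's inode metadata on disk, so I would increase it in steps rather than jumping straight to a huge value.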

chriskranz

Two things will affect the performance here...

1) Millions of files always perform worse than anything else; that's just simple physics with disks.

2) Running out of inodes shouldn't cause any performance impact by itself. You may find that as you run out of space you get some performance degradation (as the system needs to seek longer for free blocks). Any impact is more likely because there are millions of files, and clients may have to enumerate them as they browse.

Unless you really need access times, turn on "no_atime_update" on the volume (vol options volname no_atime_update on), as in the sketch below. This stops the filer from updating file access times, and it can be quite a big performance boost with larger file counts. To be honest, I often do this on CIFS / NFS volumes anyway, as I have yet to find a really good use for access times.
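
For example (sketch only, with volname standing in for your volume; verify the syntax on your ONTAP version):

    vol options volname no_atime_update on    # stop updating access times on reads
    vol options volname                       # list the volume's options to confirm the change

As far as I know, the option takes effect immediately.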

It also depends on how busy your filer is right now. If it is comfortably ticking away, you may not notice any impact; if it is already busy, you may. But I would expect that to come more from having millions of files than from a lack of inodes, since the system doesn't need to seek for free inodes the way it does for free blocks on disk.

melton

Brendon

You could also check TR-3537, High File-Count Environment Best Practices. There is a ton of great information in there, and the first section deals with inodes.

Patrick
