ONTAP Discussions

How to calculate max inodes?


Hello -


I recently had a volume reach its 31 million inode limit. I raised that limit from 31 million to 41 million using the files/maxfiles command, but now I'm bumping up against the 41M. I'm wondering how long I can keep doing this without adversely affecting other volumes. I assume I can't just increase the max files indefinitely.


Is there a way to calculate the max number of files/inodes I can get from my current aggregate or that I can increase the current volume to?


Thanks in advance.


Re: How to calculate max inodes?

The minimum file size is 4K, which gives you the theoretical maximum number of files in a volume: the volume size divided by 4K. The file count limit is per flexible volume, so raising it should not affect other volumes in the same aggregate.
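A minimal sketch of the arithmetic above: with a 4K minimum file size (as stated in the reply), a volume of a given size can never hold more files than its size divided by 4K. The function name and the 1 TiB example are my own illustration, not ONTAP output.

```python
KIB = 1024

def theoretical_max_inodes(volume_size_bytes: int, min_file_size: int = 4 * KIB) -> int:
    """Upper bound on the file count for a volume of the given size,
    assuming every file consumes at least min_file_size bytes."""
    return volume_size_bytes // min_file_size

# Example: a 1 TiB volume can hold at most 2**40 / 2**12 = 268,435,456 files.
one_tib = 1024 ** 4
print(theoretical_max_inodes(one_tib))  # 268435456
```

In practice ONTAP's default inode count is far lower than this ceiling, which is why the maxfiles limit can usually be raised several times before volume size becomes the constraint.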

Re: How to calculate max inodes?


You also needn't worry about how big it grows - NetApp controllers can handle a lot of files per volume. In a 4-node NAS cluster, I have one SVM that contains 4.8 billion files; the largest single volume holds about 175 million.

The main implication of a large file system is that if your users put 100K files in a single directory, they'll notice a performance hit when they access that directory - but that is a general file system issue rather than a property of Data ONTAP itself. As long as the volume in question has a good directory structure to spread those files out, you shouldn't notice anything (other than the need to increase the inode count from time to time).




Re: How to calculate max inodes?


I would add a caveat: if you use qtree SnapMirror or SnapVault (on 7-Mode), high file counts can slow those processes down significantly. If you don't use those, you should be fine. Keep in mind that you can use quotas to limit a user if you think they're doing something they shouldn't be. We use that occasionally when someone's workflow isn't behaving!


