Hello -
I recently had a volume reach its 31 million inode limit. I increased that 31 million to 41 million using the files/maxfiles command, but now I'm bumping up against that 41M. I'm wondering how long I can keep doing this without adversely affecting other volumes. I assume I can't just increase the max files indefinitely.
Is there a way to calculate the max number of files/inodes I can get from my current aggregate or that I can increase the current volume to?
Thanks in advance.
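For anyone landing on this thread, the commands involved look roughly like this (7-mode syntax; command names are real but the exact output and arguments should be verified against your Data ONTAP version's docs):

```
filer> df -i vol1              # show inodes used/free for the volume
filer> maxfiles vol1           # display the current maxfiles setting
filer> maxfiles vol1 41000000  # raise the limit (it cannot be lowered afterwards)
```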
1 ACCEPTED SOLUTION
hanover23 has accepted the solution
You also needn't worry about how big it grows - NetApp controllers can handle a lot per volume. In a 4-node NAS cluster I have one SVM that contains 4.8 billion files - the largest volume is at 175 million or so.
The implication of a large file system in general is that if your users put 100K files in a single directory, they'll notice a performance hit when they try to access that directory, but that is due to general file system access issues rather than a property of Data ONTAP itself. So long as the volume in question has a good directory structure to spread out all those files, you shouldn't notice anything (other than the need to increase the inode count from time to time).
Bob
3 REPLIES
The minimum file size is 4K, which gives you the theoretical maximum number of files in a volume: volume size divided by 4K. The file count is per flexible volume, so increasing it should not affect other volumes in the same aggregate.
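That arithmetic is easy to sketch (the 2 TiB volume size below is just an assumed example, and WAFL metadata overhead means the real ceiling is somewhat lower):

```shell
# theoretical file-count ceiling = volume size / 4 KiB minimum file size
vol_kib=$(( 2 * 1024 * 1024 * 1024 ))  # assumed 2 TiB volume, expressed in KiB
echo $(( vol_kib / 4 ))                # -> 536870912, roughly 536 million files
```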
I would add a caveat that if you make use of qtree SnapMirror or SnapVault (on 7-mode), then high file counts can slow those processes down significantly. If you don't use those, you should be fine. Keep in mind that you can use quotas to limit a user if you think they're doing something they shouldn't be. We use that occasionally when someone's workflow isn't behaving!
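As a rough illustration of the quota approach, a 7-mode /etc/quotas entry looks something like the sketch below (the volume name and the 500K file limit are made-up values, and the column layout is from memory, so check it against your version's documentation):

```
#Quota Target    type             disk    files
*                user@/vol/vol1   -       500K
```

An entry like this would cap any single user at 500K files in vol1 while leaving disk space unlimited.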
--rdp
