2017-03-08 11:00 AM
I have a NetApp cluster hosting more than a trillion files. Some volumes keep hitting the threshold for the max # of inodes. Is there any downside if I set the max # of inodes to a much larger number?
2017-03-08 11:25 AM
The main implication of increasing the inode count is growth of the inode file for the volume.
There is also a memory impact, since the inode file is cached.
Increase the inode count with care.
Although you can shrink the inode count in modern ONTAP, my understanding is that doing so still won't shrink the inode file.
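For reference, in clustered ONTAP the per-volume inode limit can be inspected and raised with the `volume` commands. A minimal sketch (the vserver, volume, and target count here are placeholders, not recommendations):

```
::> volume show -vserver vs1 -volume vol1 -fields files,files-used
::> volume modify -vserver vs1 -volume vol1 -files 40000000
```

`files` is the maximum inode count and `files-used` the current consumption; compare the two before deciding how far to raise the limit.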
There's an older document, TR-3537, that discusses high-file-count environments.
It's currently marked as 'Confidential'.
If you're under NDA you may be able to get a copy of it.
I hope this response has been helpful to you.
At your service,
Eugene E. Kashpureff, Sr.
Independent NetApp Consultant http://www.linkedin.com/in/eugenekashpureff
Senior NetApp Instructor, FastLane US http://www.fastlaneus.com/
(P.S. I appreciate 'kudos' on any helpful posts.)
2017-03-08 03:21 PM
Thanks for the pointers. Unfortunately I don't have access to that TR but will see if I can get a copy.
Searching on the TR number, I found http://community.netapp.com/t5/Data-ONTAP-Discussions/Self-organisation-of-many-files-at-one-location/td-p/23028, which mentions:
If you can't get access to the TR a couple of best practice tips:
o) Keep the number of files per directory <10,000 – and much less (<1,000 is better) if possible.
o) Keep the subdirectory depth less than 5.
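If any of the file-producing tools are under your control, a common way to follow both tips is to hash filenames into a shallow, fixed-width directory tree so no single directory grows too large. A minimal sketch (function name and parameters are my own, not from the TR):

```python
import hashlib
import os

def bucketed_path(root: str, filename: str, levels: int = 2, width: int = 2) -> str:
    """Map a filename into a shallow hashed subdirectory tree.

    With levels=2 and width=2 (hex), each level has 256 buckets, so a
    million files spread to roughly 15 per leaf directory, and the
    depth stays well under 5.
    """
    digest = hashlib.md5(filename.encode()).hexdigest()
    parts = [digest[i * width:(i + 1) * width] for i in range(levels)]
    return os.path.join(root, *parts, filename)

# Example: the same filename always maps to the same bucket path,
# e.g. something like "/data/ab/cd/netlist_0042.v".
print(bucketed_path("/data", "netlist_0042.v"))
```

The layout is deterministic, so readers can recompute the path from the filename alone without a lookup table.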
Unfortunately, that's not practical in our EDA environment, where we don't have much control over the directory structures, and having millions of files in a volume is common.