Hi Friends,
We have a CIFS volume where the inodes are filling up frequently.
It is used for scripting (it holds a very large number of files), and we normally increase maxfiles by 3 to 5%. This seems to be only a temporary solution; is there any permanent fix for this?
I have gone through some KBs, and the suggestion there is to recreate the volume after deleting all the data in it. However, I am looking for an alternative solution to that.
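For context, the workaround is roughly the following on a 7-Mode filer (7-Mode and the placeholder volume name are assumptions; the increase is just the 3 to 5% mentioned above):
Filer> maxfiles <Volname>
Filer> maxfiles <Volname> <current maxfiles count + 3 to 5%>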
Thanks,
Saran
10 REPLIES
Hello Saran,
Please try extending the volume if possible; that should solve your problem.
The volume has enough free space on it; apart from that, do we still need to increase it?
Saran
Hi Saran,
You may wish to try:
Filer> vol options <Volname>
You will see one of the volume options listed as maxdirsize=xxxxxx. I would suggest increasing that maxdirsize by only about 10%, because setting maxdirsize too high can hurt the filer's performance.
Then run:
Filer> vol options <Volname> maxdirsize <current maxdirsize + 10%>
and verify the change with:
Filer> vol options <Volname>
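For example, if vol options currently reports maxdirsize=20480, the update might look roughly like this (the numbers here are illustrative only, assuming about a 10% increase):
Filer> vol options <Volname> maxdirsize 22528
Filer> vol options <Volname>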
Good luck
Henry
Hi,
I am looking for a permanent fix, not just increasing the max dir size.
Saran
Your permanent fix is to stop writing new files to the volume. Every time you write a file, you use up at least one inode. There is a finite number of inodes available in a volume - this can be increased with the maxdirsize option, but all you're doing there is resetting that finite number. As HenryPan2 points out, there are performance implications for increasing this value (though in truth I've modified it many times and never seen an impact).
So if you truly want a permanent fix, you need to remove data from the volume. Archive/delete some old stuff. Migrate to a new volume. But if you continue to write files in this volume, you will continue to use inodes, and you will continue to run into this problem.
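To see how close the volume is to its inode limit, the inode usage can be checked with df -i; a rough sketch of the 7-Mode output, with purely illustrative numbers, would be:
Filer> df -i <Volname>
Filesystem               iused      ifree  %iused  Mounted on
/vol/<Volname>/        4980000      20000     99%  /vol/<Volname>/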
Hope that helps
Bill
I am seeing that the maximum number of inodes is already much larger than I would expect for the size of the volume.
The current volume size is 250 GB, but the maxfiles value is 16,050,000 (it is a 32-bit aggregate).
Saran
Nitpicking - maxdirsize is irrelevant here; the relevant option is maxfiles.
aborzenkov is correct (thanks) - it is the maxfiles setting, not maxdirsize.
Is there any way to do this with no or minimal disruption?
Saran
If you can extend the volume, it will increase the inode limit; i.e. if you double the size of the volume, the number of inodes will roughly double. You can verify this by creating a test volume, as sketched below.
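A rough sketch of that test on a 7-Mode filer (the volume name, aggregate name, and sizes are illustrative only):
Filer> vol create <TestVol> <AggrName> 100g
Filer> maxfiles <TestVol>
Filer> vol size <TestVol> 200g
Filer> maxfiles <TestVol>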
