ONTAP Discussions

Number of inodes

rozle_palcar

Hello,

One of our customers is using a FAS2240 for backup over NFS. They are a web hosting provider, so they operate with a very large number of files, and they have already used all inodes on a 24TB volume at only 43% of used capacity.

I have already found information that by default one inode is allocated per 32KB of the filesystem, and that the maximum number of inodes is limited by the WAFL block size (4KB). In theory we can increase the number of inodes by 8x (32KB / 4KB), but since this setting cannot be reverted back to a smaller number, I would like a second opinion on how this would impact performance. From what I found on the communities, performance will certainly be worse, but by how much and on which operations? I would guess the biggest impact is on dedup, compression, reallocate, and similar operations which scan inodes, but does this also degrade read/write performance on this volume?

Here is the output of df -i and df from the problematic filer:

Netapp-Backup2> df -i
Filesystem              iused     ifree  %iused  Mounted on
/vol/root/               9435   6216482      0%  /vol/root/
/vol/Backup2/        31876686         3    100%  /vol/Backup2/

Netapp-Backup2> df
Filesystem                   kbytes         used        avail  capacity  Mounted on
/vol/root/                199229440      5760464    193468976        3%  /vol/root/
/vol/root/.snapshot        10485760       244316     10241444        2%  /vol/root/.snapshot
/vol/Backup2/           25769803776  10914604052  14855199724       42%  /vol/Backup2/
/vol/Backup2/.snapshot            0            0            0     ---%  /vol/Backup2/.snapshot
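For reference, maxfiles on the same volume should confirm the current ceiling (I'm paraphrasing the output format from memory; the total should equal iused + ifree from df -i above):

Netapp-Backup2> maxfiles Backup2
Volume Backup2: maximum number of files is currently 31876689 (31876686 used).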

Thank you!

Regards,

Rozle


DOUGLASSIGGINS

We run a very similar environment with a large maxfiles setting. We typically use 150M in a 16TB aggregate. I am sure there is some impact. In fact, we've made typos and increased it far above what is needed. This is just how our environment is -- so we really don't have a choice. I haven't noticed a huge impact because of it. With just about any system containing huge numbers of files, snapmirror tends to be a dog, and any file-based copy is much slower than copying single large files.

(EDIT)

Bill is correct; I was typing up a response, then left the screen and came back. I was looking for examples of where lots of tiny files will hurt. You use snapmirror (non-qtree) specifically because it's faster than something file-based.

billshaffer

For what it's worth, I've increased maxfiles on many volumes in many environments, and never really noticed a performance hit.

I have to disagree with Doug when he says that snapmirror is slow with a large number of files, since snapmirror is block-based and not file-based - but I DO agree with him when he says you don't really have a choice. If you've run out of inodes, you either need to increase the inode count, or redesign the app or whatever is using the volume to do things differently (use a different volume, decrease the file count, etc.) - and chances are you're not going to be able to do that....

Bill

rozle_palcar

Thank you for the info. I just needed another opinion, because this is the first NetApp system at this customer and we really don't want to cause any performance issues (we are hoping that they will also use NetApp for their production system).

As Bill mentioned, we don't really have any other option, since they can't put any more data into a volume which is almost 60% free. We will double the number of inodes and then increase it again if needed.
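In case it helps anyone who finds this thread later, on 7-Mode the change itself should be a one-liner along these lines (the new count here is hypothetical, just roughly double our current total; remember that maxfiles can be raised but never lowered below current usage):

Netapp-Backup2> maxfiles Backup2 63753378

We plan to keep rechecking df -i as the file count grows and raise the limit again in steps, rather than jumping straight toward the theoretical 4KB-per-inode maximum.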

Thanks to both of you!

Regards,

Rozle
