We will open a case on this, but I am checking whether anyone has seen it before. The error is below, and users are complaining of slow performance. Has anyone seen this, and has anyone submitted a request to raise the hard link limit, since it is apparently fixed? In over 8 years working on NetApp I have never hit this, so it is not a typical issue.
wafl.dir.link.approachingLimitTrap:warning]: The /vol/volname/path1/path2/path3/path4/path5 directory is approaching the maximum link limit of 100000. Reduce the number of links to the existing parent directory.
I was not aware of a maximum link limit. Two BURTs now help explain this. I don't suspect they have 100k hard links created to path5; however, per another BURT (the second one below), every new directory below path5 creates a hard link to path5 via its ".." entry. I suspect they have nearly 100k directories below path5 (it is an upload directory), but we will confirm with the customer.
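The ".." behavior is easy to see on any Unix box. A minimal sketch (using a throwaway /tmp path, and GNU `stat` syntax, so Linux with a traditional filesystem like ext4 or tmpfs is assumed): each subdirectory's ".." entry is a hard link back to the parent, so the parent's link count is 2 (its entry in its own parent, plus its own ".") plus one per child directory.

```shell
# Create a demo parent with three subdirectories.
mkdir -p /tmp/linkdemo/sub1 /tmp/linkdemo/sub2 /tmp/linkdemo/sub3

# Link count of the parent: 2 + 3 subdirectories = 5.
stat -c '%h' /tmp/linkdemo   # prints 5
```

This is why a directory with close to 100k subdirectories trips a hard link limit even though nobody ever ran `ln` against it.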
I saw the same message in my environment recently. It was caused by a directory with a huge number of subdirectories. In our case it was easy to clean up, because it was caused by a script that went out of control.
Is this a CIFS "user" area or some NFS application? This might be a bit off-topic, but anything creating 100k links seems a bit broken. I can't help but wonder whether alternative mounting methods (DFS, AMD, etc.) might get you out of your situation faster than fixing ONTAP, assuming the behavior of the people/software can't be changed. Hard and soft links can be a nice shortcut in many situations, but they always add overhead; even on ordinary UFS filesystems they eat tons of inodes and obfuscate problem situations terribly, not to mention backup and restore, where links either don't get backed up or get backed up many times, depending on the configuration or the brokenness of the app/user. File lookups in filesystems with large numbers of files generally benefit from directory substructures; using a prime number of directories helps hash-based lookups spread entries more evenly. Anyway, I digress.
I think you probably are going to break things most anywhere with 100k links, but that's just my 2 cents.
This simply means that you have too many files in your directory. The message about "hard links" is a bit misleading.
Every directory can contain at most 100,000 files/subdirectories. This is because ONTAP has to read the entire directory file when accessing the directory, and with 100,000 entries in one directory you're approaching a few megabytes per directory file, which is a lot to constantly read and write (e.g., for atime updates).
There's no (documented/supported) way you can override this limit.
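If you want to check how close a given directory is to that limit, a simple entry count from any NFS client works. A minimal sketch (the /tmp path is just a demo stand-in; point `find` at the real directory over the mount, and note that `-mindepth`/`-maxdepth` are GNU find options):

```shell
# Demo directory with a known number of entries.
mkdir -p /tmp/entrycount
touch /tmp/entrycount/f1 /tmp/entrycount/f2 /tmp/entrycount/f3

# Count immediate entries, excluding . and .. themselves.
find /tmp/entrycount -mindepth 1 -maxdepth 1 | wc -l   # prints 3
```

Against a real upload directory, compare the result to the ~100k ceiling to see how much headroom is left.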
I agree that having more than 100k folders is probably not the best setup. I keep running into this because I'm in the middle of a CIFS migration. Windows allows more than 100k folders, but as I Robocopy from Windows to NetApp CIFS I occasionally hit this limit, and I'm just looking for a way for Data ONTAP to tell me via Ops Mgr.
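Until Ops Mgr flags it for you, a pre-migration sweep of the source tree can catch the problem directories before Robocopy does. A rough sketch, assuming a Unix-side view of the data (the /tmp paths and the tiny threshold are placeholders; in practice you'd scan the real share with a threshold near 100,000):

```shell
# Placeholder threshold; use something like 90000 for the real limit.
THRESHOLD=2

# Demo tree: one "busy" directory over the threshold, one under it.
mkdir -p /tmp/scan/busy /tmp/scan/quiet
touch /tmp/scan/busy/a /tmp/scan/busy/b /tmp/scan/busy/c

# Walk every directory and report any whose entry count exceeds the threshold.
find /tmp/scan -type d | while read -r d; do
  n=$(find "$d" -mindepth 1 -maxdepth 1 | wc -l)
  if [ "$n" -gt "$THRESHOLD" ]; then
    echo "$d: $n entries"
  fi
done
```

Running that over the migration source lists exactly the directories that will trip the WAFL limit, so they can be restructured before the copy.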
I have a follow-up question. One directory on our Linux box has hit 100k subdirectories. Since there is a limit of 100k hard links, I am not able to write any new file or subdirectory there. That's fine, but will I be able to write a symbolic link in the same directory?
My question is: can we still create symbolic links in that directory after the 100k hard link limit is reached?
Again, you're mixing things up. The 100k-entries-per-directory limit has nothing to do with hard vs. soft links. It simply means you cannot have more than 100,000 entries in one directory (minus the 2 entries that are always there for . and .., so in reality you can only have 99,998). It really doesn't matter whether those entries are hard links, soft links, subdirectories, or anything else. The limit is 99,998 *entries* in the WAFL directory file.
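In other words, a symlink won't get you past the limit, because it occupies a directory entry just like a regular file does. A quick demonstration (throwaway /tmp path):

```shell
# One regular file and one symlink in the same directory.
mkdir -p /tmp/entrydemo
touch /tmp/entrydemo/regular
ln -s regular /tmp/entrydemo/sym

# Both names consume a directory entry.
ls /tmp/entrydemo | wc -l   # prints 2
```

So once a directory is full, creating a symlink in it fails for exactly the same reason creating a file would: there is no room for another entry.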
The other limit mentioned above (Bug 292410) is something different: it covers hard links to a single file (which can reside in different directories), and there, too, the limit is 100,000 hard links.
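That per-file limit is visible in a file's link count rather than in any one directory. A minimal sketch (GNU `stat` syntax assumed; the /tmp path is a demo stand-in): each `ln` adds another name for the same inode and bumps its link count, and it is that count which has its own 100k ceiling.

```shell
# One file with two extra hard links, possibly in different directories.
mkdir -p /tmp/hlinkdemo
touch /tmp/hlinkdemo/target
ln /tmp/hlinkdemo/target /tmp/hlinkdemo/link1
ln /tmp/hlinkdemo/target /tmp/hlinkdemo/link2

# Link count of the inode: the original name plus two hard links.
stat -c '%h' /tmp/hlinkdemo/target   # prints 3
```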