Network and Storage Protocols

maximum link limit of 100000

scottgelb

We will open a case on this, but I am checking whether anyone has seen it before. The error is below and users are complaining of slow performance. Has anyone hit this, and has anyone submitted a request for more hard links, since this is apparently a fixed limit? In over 8 years of working on NetApp I have never run into it, so it is not a typical issue.

wafl.dir.link.approachingLimitTrap:warning]: The /vol/volname/path1/path2/path3/path4/path5 directory is approaching the maximum link limit of 100000. Reduce the number of links to the existing parent directory.

I was not aware of a maximum link limit.  Two BURTs on NOW help explain this.  I don't suspect they have 100k hard links created directly to path5; however, per the second reference below, every new directory created below path5 adds a hard link to path5 via its ".." entry.  I suspect they have close to 100k directories below path5 (it is an upload directory), but we will confirm with the customer.

ONTAP has a limit of 100k hard links per file (object)… the bug below is not a SnapVault issue, but it is the only place I can find where ONTAP documents a hard 100k limit on hard links.
http://now.netapp.com/NOW/cgi-bin/bol?Type=Detail&Display=292410   

Each directory below the parent directory creates two hard links: “.” to itself and “..” to the parent.
https://now.netapp.com/Knowledgebase/solutionarea.asp?id=ntapcs3039
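If that theory holds, the directory's own link count gives a rough count from an NFS client: on most Unix-style filesystems a directory's link count is 2 plus its number of immediate subdirectories, because each child's ".." entry links back to the parent. A minimal Python sketch; the client-side mount path is a placeholder and will differ per environment:

import os

LIMIT = 100000
# Placeholder client-side path; the volume will be mounted somewhere else in practice.
path = "/mnt/volname/path1/path2/path3/path4/path5"

st = os.stat(path)
# Link count = 1 (entry in the parent) + 1 (".") + one ".." per immediate subdirectory
subdirs = st.st_nlink - 2
print(f"link count: {st.st_nlink} (~{subdirs} immediate subdirectories)")
print(f"headroom before the {LIMIT} limit: {LIMIT - st.st_nlink}")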


pascalduk

I saw the same message in my environment recently. It was caused by a directory with a huge number of subdirectories. In our case it was easy to clean up, because a script had gone out of control.

robarthurn

Have you had a response to this issue? Was your case resolved? Is there a solution? We just had a customer with exactly the same problem, and they are unable to add or write to the file/directory.

garciam99

Has anyone found an alarm within Ops Manager to alert us when a directory approaches this limit?  I had a case open at one time, but there was no resolution.

shaunjurr

Hi,

Is this a CIFS "user" area or some NFS application?  This might be a bit off-topic, but anything creating 100k links seems to be a bit broken.  I can't help but wonder whether alternative mounting methods (DFS/AMD, etc.) might get you out of this situation faster than waiting for a fix in ONTAP, assuming the behaviour of the people/software can't be changed.

Hard and soft links can be a nice shortcut in many situations, but they always add overhead. Even on ordinary UFS filesystems they eat tons of inodes and obscure problem situations terribly, not to mention backup and restore situations where links either don't get backed up or get backed up many times, depending on the configuration or brokenness of the app/user.  File lookups in filesystems with high numbers of files generally benefit from a directory substructure, and using a prime number of subdirectories helps with hash-based distribution of the entries.  Anyway, I digress.
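For what it's worth, a minimal sketch of that kind of fan-out in Python; the base path, bucket count, and file naming are illustrative assumptions, not anything NetApp-specific:

import hashlib
import os

BASE = "/mnt/uploads"   # hypothetical mount point of the upload area
FANOUT = 251            # prime number of buckets, per the hashing argument above

def bucketed_path(filename):
    # Hash the name and pick one of FANOUT subdirectories, so no single
    # directory ever accumulates anywhere near 100k entries.
    digest = hashlib.sha1(filename.encode()).hexdigest()
    bucket = int(digest, 16) % FANOUT
    subdir = os.path.join(BASE, f"{bucket:03d}")
    os.makedirs(subdir, exist_ok=True)
    return os.path.join(subdir, filename)

print(bucketed_path("upload-2011-04-17.dat"))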

I think you are probably going to break things almost anywhere with 100k links, but that's just my 2 cents.

Darkstar

This simply means that you have too many files in your directory. The message about "hard links" is a bit misleading.

Every directory can only contain 100,000 files/subdirectories at most. This has to do with ONTAP having to read in the entire directory file when accessing the directory; with 100,000 files in one directory that file approaches several megabytes, which is a lot if it has to be read and written constantly (e.g. for atime updates).

There's no (documented/supported) way you can override this limit.
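If you want to see which directories are getting close, a minimal client-side sketch in Python; the starting path and warning threshold are assumptions for illustration:

import os

ROOT = "/mnt/volname"   # hypothetical mount point of the volume
WARN_AT = 90000         # arbitrary warning threshold below the 100,000 limit

# Walk the tree and report any directory whose entry count is approaching the
# per-directory limit. Files, subdirectories, and symlinks all count as entries.
for dirpath, dirnames, filenames in os.walk(ROOT):
    entries = len(dirnames) + len(filenames)
    if entries >= WARN_AT:
        print(f"{dirpath}: {entries} entries")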

-Michael

garciam99

I agree, having more than 100k folders is probably not the best setup.  I keep running into this because I'm in the middle of a CIFS migration.  Windows allows more than 100k folders, but as I Robocopy from Windows to NetApp CIFS I occasionally run into this limit, and I'm just looking for a way for Data ONTAP to tell me about it via Ops Mgr.

SUJITHVADDI

Hello all,

I have a follow-up question. One directory on our Linux box has hit 100k subdirectories. Since there is a 100k hard link limit, I am not able to create any new file or subdirectory. That's fine, but will I be able to create a symbolic link in the same directory?

My question is: can we add symbolic links in the same directory after the 100k hard link limit has been reached?

Thanks so much for your time!

SRV.

Darkstar

Again, you're mixing things up. The 100k-entries-per-directory limit has nothing to do with hard vs. soft links. It simply means you cannot have more than 100,000 entries in one directory (minus the two entries that are always there for "." and "..", so in reality you can only have 99,998 of your own entries). It really doesn't matter whether they are hard links, soft links, subdirectories, or anything else: the limit is 99,998 *entries* in the WAFL directory file.

The other limit mentioned above (Bug 292410) is something different: that one is about hard links to a single file (which can reside in different directories), and there, too, the limit is 100,000 hard links.
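To make the distinction concrete, a small Python sketch run against scratch space on a Unix client; everything created below counts as a directory *entry*, while only os.link() raises a file's hard-link count:

import os
import tempfile

root = tempfile.mkdtemp()

# (1) Entries in one directory: files, subdirectories, and symlinks all count.
os.mkdir(os.path.join(root, "subdir"))
open(os.path.join(root, "regular_file"), "w").close()
os.symlink("regular_file", os.path.join(root, "soft_link"))
print("directory entries:", len(os.listdir(root)))              # 3, regardless of type

# (2) Hard links to one file: each extra link raises the file's link count,
#     even when the links live in different directories.
target = os.path.join(root, "regular_file")
os.link(target, os.path.join(root, "subdir", "hard_link"))
print("hard links to regular_file:", os.stat(target).st_nlink)  # 2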

-Michael

garciam99

I wonder if it is on the roadmap for NetApp to get past this error:

wafl.dir.link.approachingLimitTrap:warning]:

It doesn't sound like it's a priority at this time, and Support has nothing interesting to say about it.

PJRINZEMA

Not to kick an old post, but many people, like me, probably end up here after a Google search.

The limitation is lifted in ONTAP 8.1, according to:

https://kb.netapp.com/support/index?page=content&id=3012261

Posted yesterday.
