On my FAS2554 cluster running Data Ontap 8.2.3P9 I reached maxdirsize on one directory of a volume.
For now I have increased the maxdirsize value for this volume and am archiving some old data to relieve the immediate pressure, but I need a definitive solution so the problem does not recur.
The problem is that I cannot predict how many files will be added to this directory: thousands of new static files arrive each day, created automatically as attachments to content generated by our systems. The amount of new content per day is irregular.
I was thinking about organizing these files in a tree of subdirectories, but NetApp Support told me this would not solve the problem, because maxdirsize is calculated on the parent directory and all subdirectories count toward it.
So how can I solve this problem without creating new volumes? I want a scalable approach where the only limit is the volume size, not some other parameter I cannot manage automatically.
Each subdirectory counts as only one entry in its parent directory. If you create one file per subdirectory you will hit the same problem, but if you group many files into a modest number of subdirectories you won't have an issue.
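One common way to keep each directory's entry count bounded is to fan files out into fixed buckets derived from a hash of the filename. This is a minimal sketch, not NetApp tooling — the `bucket_for` and `store` helpers and the two-character fan-out are my own illustration. With a two-hex-character prefix you get at most 256 subdirectories, each holding roughly 1/256 of the files:

```python
import hashlib
import os
import shutil

def bucket_for(filename):
    """Return a stable 2-hex-char bucket name derived from the filename.

    Two hex characters give 256 possible buckets, so a directory that
    would have held N entries ends up with at most 256 subdirectories
    holding roughly N/256 entries each.
    """
    return hashlib.md5(filename.encode("utf-8")).hexdigest()[:2]

def store(root, src_path):
    """Move src_path into root/<bucket>/<filename>, creating the bucket."""
    name = os.path.basename(src_path)
    bucket = os.path.join(root, bucket_for(name))
    os.makedirs(bucket, exist_ok=True)
    dest = os.path.join(bucket, name)
    shutil.move(src_path, dest)
    return dest
```

Because the bucket is derived only from the filename, lookups need no index: recompute `bucket_for(name)` to find where a file lives. If 256 buckets is not enough headroom, use three hex characters for 4096 buckets.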
In the layout below, Dir2 and Dir3 consume directory space in Dir1, but File1-File4 do not: they consume space only in their own parent directory (Dir2 or Dir3). You can verify this with ls -l by adding files and watching the parent's directory size.

Dir1
  Dir2
    File1
    File2
  Dir3
    File3
    File4
drwxr-xr-x 3 username group 34 Jun 6 09:34 testdir
The 34 is the directory size. Create a subdirectory inside testdir, add files to that subdirectory, and you will see that the directory size of testdir does not change.
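You can observe the same behaviour on an ordinary Linux filesystem with a short Python script. The exact sizes differ from ONTAP's WAFL, but the principle being demonstrated is the same: a parent directory's size depends only on its own entries, not on what its subdirectories contain. This is an illustrative sketch, not NetApp tooling:

```python
import os
import tempfile

with tempfile.TemporaryDirectory() as testdir:
    sub = os.path.join(testdir, "subdir")
    os.mkdir(sub)
    size_before = os.stat(testdir).st_size

    # Add many files to the subdirectory only.
    for i in range(1000):
        open(os.path.join(sub, "file%d" % i), "w").close()

    size_after = os.stat(testdir).st_size
    # The parent's size is unchanged: its only entry is still "subdir".
    # The subdirectory's own size grew instead.
    print(size_before == size_after)
```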