I have a FAS 3040 with a PAM card, and I have an NFS volume where my database admin dumps data. He then runs some jobs on this data and then deletes it. He does this for various databases in sequence.
We are currently getting volume full errors. He is thinking that after he deletes the data, the filer takes a while to actually physically free the blocks, so he gets volume full when he tries to copy more data to the volume. Does anyone know a way to verify this, or to see logs of blocks being deleted on the filer? How fast can a filer delete data?
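One way to check this from the client side, without any filer logs, is to sample free space on the NFS mount right after the delete: if block frees are deferred, `df` keeps climbing for a while after `rm` returns. This is just a sketch; the mount point is an assumption you would substitute with your own.

```shell
#!/bin/sh
# Sketch: watch free space on the dump volume after a delete.
# MNT is a hypothetical mount point -- replace with your NFS mount.
MNT="${1:-.}"

free_kb() {
    # POSIX df -P: available space in 1K blocks is column 4.
    df -P -k "$MNT" | awk 'NR==2 {print $4}'
}

echo "free before delete: $(free_kb) KB"
# rm -rf "$MNT/dump"    # the DBA's delete step would go here
# Sample free space once a second; if the filer is still freeing
# blocks, the reported free space keeps growing after rm returns.
for i in 1 2 3; do
    sleep 1
    echo "t=${i}s free: $(free_kb) KB"
done
```

If the numbers keep rising for seconds (or minutes) after the delete finishes, that would confirm the DBA's theory.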
While the files are being deleted you can't use that space; that's the issue. The script deletes files, then tries writing more data, but the space hasn't cleared yet for the new data. I opened a call with NetApp and was told:
"ONTAP is already as optimized as possible for deleting files. So much so that we have a bug where, when there are a very large number of files or very large files being deleted, it takes up all the processing time and other processes are put on hold until the deletes finish, causing the filer to appear unresponsive. See http://now.netapp.com/NOW/products/csb/csb0803-03.shtml
So it looks like this is just the way it is. I will look at creating a separate volume or adjusting the scripts.
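If you go the script-adjustment route, a minimal guard is to block after the delete until the volume actually shows enough free space, with a timeout so a stuck filer doesn't hang the job forever. The mount point, required headroom, and timeout below are all assumptions to tune for your environment:

```shell
#!/bin/sh
# Sketch of a guard for the DBA script: after deleting the previous
# dump, wait until the volume reports enough free space before the
# next copy. MNT, NEED_KB, and TIMEOUT are assumed values.
MNT="${1:-.}"
NEED_KB="${2:-1048576}"   # require 1 GB free by default
TIMEOUT=300               # give up after 5 minutes

wait_for_space() {
    elapsed=0
    while [ "$elapsed" -lt "$TIMEOUT" ]; do
        # POSIX df -P: available 1K blocks are in column 4.
        avail=$(df -P -k "$MNT" | awk 'NR==2 {print $4}')
        [ "$avail" -ge "$NEED_KB" ] && return 0
        sleep 5
        elapsed=$((elapsed + 5))
    done
    echo "timed out waiting for ${NEED_KB} KB free on $MNT" >&2
    return 1
}

# rm -rf "$MNT/olddump"       # delete step
# wait_for_space && cp ...    # only copy once the frees have landed
```

This doesn't make the filer free blocks any faster, but it turns the hard volume-full failure into a bounded wait between the delete and the next copy.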