Most if not all people need to delete files on their system(s) from time to time, and I'm wondering what the most effective and fastest way is to do so. We mostly do this via CIFS or NFS, with NFS plus rm being the fastest, but I was wondering if anyone has an idea to improve on this?
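For reference, this is roughly how we bulk-delete over NFS today (a sketch; the mount path is hypothetical, adjust it to your environment). `find -delete` avoids forking a separate process per file, which is usually the fastest client-side option when there are many files:

```shell
# Hypothetical NFS mount point -- adjust to your environment.
MOUNT=/mnt/filer_vol1/old_data

# Delete every .zip file under the mount in one pass.
# "find -delete" removes each match as it is found, without
# spawning an rm per file the way "rm" with xargs or a loop would.
find "$MOUNT" -type f -name '*.zip' -delete
```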
I've experienced myself that on a FAS3140 system we ran into issues (response times got worse) while someone deleted multiple (30-60) 30GB zip files at once.
I find it pretty strange that there's no internal command to delete files on Data ONTAP yet... maybe one of you knows if this is on the roadmap for the future. At least I haven't read about it anywhere.
Maybe I'm looking at this from the wrong perspective? Maybe there are arguments against deleting files? We clean up from time to time; it saves us space & money in the end.
Thanks & Regards,
It's not clear to me what files you are referring to... user files on volumes, or Ontap-related files in the root volume. I'm guessing you mean files in the root volume, but I can't imagine how you would have 30-60 30GB zip files in there. Anyway, for files in the root volume, we usually clean up by mounting the /etc folder as a CIFS share and deleting files via a Windows host. We have not seen any performance issues doing this, but then again we don't have dozens of 30GB files to delete.
No, I was not referring to the root volume files. Tbh, I wouldn't know what to delete there apart from a few .log files below the /etc directory that might get too big.
I'm talking about user data, and I wanted to see what other people's experience is with deleting files. As I said, maybe that's not something people actually do much with their storage systems, but with how we use our FAS it's very common to remove dated files so we have a clean state.
Ahh gotcha... we have not noticed any performance issues with deleting files from CIFS or NFS, but I don't think we delete on the scale that you do. Do you have a lot of snapshots? deduplication? We find deleting lots of snapshots at a time will slow down our performance, but that's about it.
I don't think Ontap will ever provide a user-level way to delete files as it's not really Ontap's business to "manage" user files.
We regularly delete files over NFS (every month or so) on a scale of multiple hundreds of GB, where data is old and doesn't need to be on the system anymore. Yes, we do have lots of big snapshots, because we run a FlexClone on some volumes for a non-prod environment we recreate every month, and I can see CPU usage going up, but it's not really an issue.
I especially noticed a drop when someone (as already explained) deleted multiple 30GB zip files at once on a FAS3140. Currently we are on a 3250 and this hasn't occurred again, but it would be very useful to be able to remove files on the command line (user-level) so you wouldn't need to go through the network and a protocol in between.
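In the meantime, one client-side workaround we could try is pacing the deletes instead of firing 30-60 huge ones at once, so the filer's background housekeeping can keep up. A minimal sketch (the mount path and pause interval are hypothetical, adjust to your environment):

```shell
# Hypothetical mount point and pause -- adjust as needed.
MOUNT=/mnt/filer_vol1/archive
PAUSE=10   # seconds to wait between deletes

# Remove the big zip files one at a time instead of all at once,
# sleeping in between so the filer's block reclamation has a
# chance to catch up before the next delete lands.
for f in "$MOUNT"/*.zip; do
    [ -e "$f" ] || continue   # skip if the glob matched nothing
    rm -f -- "$f"
    sleep "$PAUSE"
done
```

With PAUSE set high enough, each delete's reclamation work finishes before the next one starts, trading total wall-clock time for steadier response times on the filer.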
It just baffles me that you get some commands like ls and rdfile but not a simple du, rm, mkdir or the like. They'd be very useful. I just wasn't sure if people are on the same page or aren't really dependent on them? Maybe someone can shed some light on why this is?
Users must have NFS or CIFS access, otherwise they would not be able to place files on NetApp. So they can also use the same access to delete them. Unless I misunderstand the problem.
What the OP is saying is that when they delete a large number of huge files (30-60 30GB zip files) the filer's performance goes down and he wanted to know if there was a way to delete the user files using Ontap command instead of host commands.
I understand; so my use case of the filer is in the minority, it seems? I thought lots of people regularly delete files & folders or summarize folder sizes, etc... Probably just not at the regularity & scale we are doing it.
The effect of a Data ONTAP command would be the same. The impact of large deletes comes from block reclamation and housekeeping, not from the protocol used to delete the files.