How do I run the file-list-directory-iter API against the root vol of a cluster node? Every time I try I receive an error such as "Unable to find API: file-list-directory-iter on node vserver". How does one invoke this API against a cluster node server's root volume? I've had no issues invoking it against a data vserver's volumes. Please advise.
I'm not having any trouble running it against any other vserver, but I can't get it to run against a node, that being the "physical" Filer. I need to be able to see into its root volume, but the API won't let me connect. I get the following error:
Unable to find API: file-list-directory-iter on node vserver
I am able to run this against the root volume, or any contained volume for that matter, that resides on a data vserver. Only the node vserver is balking.
WOW! Never thought of that! I'll try it and post the results.
I will be on vacation for a week starting tomorrow, so it'll be over a week before I can test and report back.
Thanks for the tip! If this works, we'll have some happy campers here!
I finally got the chance to test this, and WOW it works! I started testing it a couple of weeks ago and it failed miserably, so I played with apitest this morning, and it took off running. I don't know what I screwed up last time, but if you treat it like a 7-mode vFiler it works fine.
Now for the bummer. With the code working to get the list of files and decide which ones to delete, I find that the following APIs aren't present on the cluster node:
Ugh! Is there ANY consistency in cDOT?
What was your exact test? Running <API X> on a node vserver?
volume-get-root-name, file-delete-file, and file-delete-directory are available on the vserver only, not at the cluster level. Only vservers have access to data.
- Rick -
I tried something like this:
apitest.pl -t filer -v CLUSTER_NODE -s CLUSTER 'USERNAME' 'PASSWORD' file-delete-file path /vol/NODE_ROOTVOL/etc/crash/man.dump
I got this back:
<results reason="Unable to find API: file-delete-file on node vserver CLUSTER_NODE" status="failed" errno="13005"></results>
I can run "file-list-directory-iter-start" in the same fashion, and it succeeds. But file-delete-file, file-delete-directory, and volume-get-root-name all fail with similar error messages to the above. I've since found that I also cannot run 'system-api-list', so I can't even list what APIs I CAN run! Frustrating.
For "completeness" I also tried:
apitest.pl -t filer -s CLUSTER_NODE 'USERNAME' 'PASSWORD' file-delete-file path /vol/NODE_ROOTVOL/etc/crash/man.dump
and got the extremely similar error
<results reason="Unable to find API: file-delete-file" status="failed" errno="13005"></results>
Disappointing but not unexpected
I'm new to the ONTAP API. I want to use file-list-directory start/next/end to list the files in a folder (20+ million files). When I pass the tag from the previous iter-start result into the next/end call, it shows the error below:
Results errno='13001' reason='Unable to open zapi iterator next file: /etc/.zapi/141528826958583814.next, error=No such file or directory' status='failed'
There are a couple of possibilities.
One is that you may be waiting too long between calls to file-list-directory-next; the filer only holds those intermediate results for a limited time before discarding them.
The other is that you are calling file-list-directory-start and then making more than one file-list-directory-next call with the same tag you got from start. Each file-list-directory-next call must use the tag from the previous response, whether that response came from "start" or "next"; every call to file-list-directory-next gives you a new tag for the following call.
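The chaining described above can be sketched in Python. The ZAPI transport is mocked with an in-memory list here so the pattern stands alone; `make_mock_server`, its tag format, and the response dictionary shape are all invented for illustration and are not part of any NetApp SDK. The point is only that each *-next call consumes the tag returned by the previous response, and a stale tag fails (much like errno 13001 above):

```python
def make_mock_server(files, batch=2):
    """Simulate *-iter-start/next/end; each response carries a NEW tag."""
    state = {}  # live tag -> next index into `files`

    def invoke(api, tag=None):
        if api == "file-list-directory-iter-start":
            state["tag-0"] = 0
            return {"tag": "tag-0", "records": []}
        if api == "file-list-directory-iter-next":
            idx = state.pop(tag)            # a tag is consumed when used;
            chunk = files[idx:idx + batch]  # reusing it raises KeyError
            new_tag = f"tag-{idx + batch}" if chunk else None
            if new_tag:
                state[new_tag] = idx + batch
            return {"tag": new_tag, "records": chunk}
        if api == "file-list-directory-iter-end":
            return {}
        raise ValueError(f"unknown API: {api}")

    return invoke

def list_all(invoke):
    """Correct pattern: always pass the tag from the PREVIOUS response."""
    resp = invoke("file-list-directory-iter-start")
    tag, results = resp["tag"], []
    while True:
        resp = invoke("file-list-directory-iter-next", tag=tag)
        results.extend(resp["records"])
        if not resp["records"]:
            break
        tag = resp["tag"]  # cDOT behavior: the tag changes every iteration
    invoke("file-list-directory-iter-end", tag=tag)
    return results
```

Replaying an already-used tag into `invoke` blows up immediately, which is the mock's stand-in for the "Unable to open zapi iterator next file" error.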
Hope this helps,
The "next" file for a *-iter call only persists for "so long" (don't know how long for sure). I've run into this when the script takes too long to process the items from the previous call. In these cases, I had to hang onto all the results, then process them after the last '*-next' call was made. This can suck up a bit of memory, but isn't much of an issue if you are on a Linux host. Can't speak much for Windows (don't get me started!), as its memory management isn't as robust or sensible. 'Nuff said...
It's also worth noting that with 7-mode, the 'tag' value is the same for all '*-next' calls, but that with cDOT its value changes with each iteration and must be retrieved each time in anticipation of the following '*-next' call.
Hope this helps!
The most common reason for this happening is that you waited too long after the previous invocation of the "*-start" or "*-next" call. If you wait too long, the Filer discards the intermediate result files so they aren't left lying around in the event that you don't come back for more.
Please note that iterating through 20+ million files via the API is going to take a LONG time, possibly all day or more. It's not fast. The more objects you ask for per call, the faster it goes, up to a point; ask for too many and the call itself will fail. I've settled on 1024 for almost all "*-iter" type calls. For this call you also need a batch size you can process fast enough that the ZAPI doesn't time out, but not one so small that it makes the scan take even longer.
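For a rough sense of scale (the numbers are assumptions pulled from this thread, not measurements): at the 1024-per-call batch size mentioned above, a 20-million-file scan needs on the order of twenty thousand round trips, so even modest per-call latency adds up to hours before any processing time is counted:

```python
import math

total_files = 20_000_000   # "20+ million" files, from the post above
max_records = 1024         # batch size suggested in this thread

calls = math.ceil(total_files / max_records)
print(calls)               # 19,532 *-next round trips

# At an assumed 0.5 s per round trip (illustrative, not measured):
hours = calls * 0.5 / 3600
print(round(hours, 1))     # roughly 2.7 hours in API latency alone
```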
P.S.: Oops - looks like I already responded to this. My bad! Could be worse...