Software Development Kit (SDK) and API Discussions

How do I run the file-list-directory-iter API against the root vol of a cluster node?

SCOTT_LINDLEY
9,228 Views

How do I run the file-list-directory-iter API against the root vol of a cluster node? Every time I try, I receive an error such as "Unable to find API: file-list-directory-iter on node vserver". How does one invoke this API against a cluster node's root volume? I've had no issues invoking it against a data vserver's volumes. Please advise.


12 REPLIES

rle
NetApp Alumni
9,192 Views

Hi Scott,

The API is file-list-directory-iter, and it is a Vserver (SVM) API, which means that you need to invoke it for each Vserver in your cluster.
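
For what it's worth, a rough sketch of that per-Vserver pattern, assuming the NetApp Manageability SDK Python bindings (NaServer) and the cluster-scoped vserver-get-iter ZAPI; the host, credentials, path, and ONTAPI version below are all placeholders, and the file-list-directory-iter parameters should be checked against your SDK docs:

from NaServer import NaServer

def connect(vserver=None):
    # Placeholder host/credentials; ONTAPI 1.21 is an assumption.
    s = NaServer("CLUSTER_MGMT", 1, 21)
    s.set_transport_type("HTTPS")
    s.set_style("LOGIN")
    s.set_admin_user("USERNAME", "PASSWORD")
    if vserver:
        s.set_vserver(vserver)  # tunnel subsequent calls to this Vserver
    return s

cluster = connect()
res = cluster.invoke("vserver-get-iter")  # first page only; real code would follow the next-tag
if res.results_status() == "failed":
    raise RuntimeError(res.results_reason())
vs_list = res.child_get("attributes-list")
for vs in (vs_list.children_get() if vs_list else []):
    name = vs.child_get_string("vserver-name")
    out = connect(vserver=name).invoke("file-list-directory-iter",
                                       "path", "/vol/rootvol")  # hypothetical path
    print(name, out.results_status())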

Regards,

   - Rick -

SCOTT_LINDLEY
9,191 Views

I'm not having any trouble running it against any other vServer, but I can't get it to run against a node, that being the "physical" Filer. I need to be able to see into its root volume, but the API won't let me connect. I get the following error:

Unable to find API: file-list-directory-iter on node vserver

I am able to run this against the root volume of a data vserver, or any other volume it contains, for that matter. Only the node vserver is balking.

     Scott

zulanch (Accepted Solution)
9,225 Views

Hi Scott,

This is probably not a supported operation, but you can query the node root volumes using the old 7-mode file-list-directory-iter-start/next/end APIs.
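
For example, tunneling the 7-mode call through to the node vserver with apitest looks something like this (NODE_VSERVER, CLUSTER, and the path are placeholders):

apitest.pl -t filer -v NODE_VSERVER -s CLUSTER 'USERNAME' 'PASSWORD' file-list-directory-iter-start path /vol/NODE_ROOTVOL/etc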

-Ben

SCOTT_LINDLEY
9,191 Views

WOW! Never thought of that! I'll try it and post the results.

I will be on vacation for a week starting tomorrow, so it'll be over a week before I can test and report back.

Thanks for the tip! If this works, we'll have some happy campers here!

     Scott

SCOTT_LINDLEY
9,191 Views

I finally got the chance to test this, and WOW, it works! I started testing it a couple of weeks ago and it failed miserably, so I played with apitest this morning, and it took off running. I don't know what I screwed up last time, but if you treat it like a 7-mode vFiler it works fine.

Now for the bummer. Now that I've got the code working to get the list of files and decide which ones to delete, I find that the following APIs aren't present on the cluster node:

volume-get-root-name

file-delete-file

file-delete-directory

Ugh! Is there ANY consistency in cDOT?

rle
NetApp Alumni
9,191 Views

Hi Scott,

What was your exact test?  Running <API X> on a node vserver?

volume-get-root-name, file-delete-file, and file-delete-directory are available on the vserver only, not at the cluster level.  Only vservers have access to data.
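
For instance (DATA_VSERVER and the path are placeholders), the same delete that fails against the node should succeed when tunneled to a data vserver:

apitest.pl -t filer -v DATA_VSERVER -s CLUSTER 'USERNAME' 'PASSWORD' file-delete-file path /vol/DATA_VOL/scratch/old.log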

Regards,

   - Rick -

SCOTT_LINDLEY
9,191 Views

I tried something like this:

apitest.pl  -t filer -v CLUSTER_NODE -s CLUSTER 'USERNAME' 'PASSWORD' file-delete-file path /vol/NODE_ROOTVOL/etc/crash/man.dump

I got this back:

OUTPUT:
<results reason="Unable to find API: file-delete-file on node vserver CLUSTER_NODE" status="failed" errno="13005"></results>

I can run "file-list-directory-iter-start" in the same fashion, and it succeeds. But file-delete-file, file-delete-directory, and volume-get-root-name all fail with error messages similar to the above. I've since found that I also cannot run 'system-api-list', so I can't even list what APIs I CAN run! Frustrating.

SCOTT_LINDLEY
9,191 Views

For "completeness" I also tried:

apitest.pl  -t filer -s CLUSTER_NODE 'USERNAME' 'PASSWORD' file-delete-file path /vol/NODE_ROOTVOL/etc/crash/man.dump

and got this very similar error:

OUTPUT:
<results reason="Unable to find API: file-delete-file" status="failed" errno="13005"></results>

Disappointing, but not unexpected.

Ben-nan
8,132 Views

Hello,

I'm new to the ONTAP API. I want to use file-list-directory-iter-start/next/end to list the files in a folder (20+ million of them). When I pass the tag from the previous iter-start result to -next or -end, it shows the error below:

Results errno='13001' reason='Unable to open zapi iterator next file: /etc/.zapi/141528826958583814.next, error=No such file or directory' status='failed'

 

Any idea? 

SCOTT_LINDLEY
5,775 Views

There are a couple of possibilities.

 

One is that you may be waiting too long between calls to file-list-directory-iter-next, as the NAS will only hold those intermediate results for 'so long'.

 

The other is that you are calling file-list-directory-iter-start, then calling file-list-directory-iter-next with the tag you got from -start, and then calling -next again with that same original tag. You have to make each file-list-directory-iter-next call using the tag from the previous call, whether that call was "start" or "next", because each subsequent call to -next gives you a new tag.
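
To make the chaining concrete, here's a minimal sketch, assuming the NetApp Manageability SDK Python bindings (NaServer) and tunneling to the node vserver as discussed above; the 'maximum' parameter and the 'files'/'name' output element names follow the usual 7-mode iter conventions and are worth verifying against your ONTAP version:

from NaServer import NaServer

s = NaServer("CLUSTER", 1, 21)  # placeholder host and ONTAPI version
s.set_transport_type("HTTPS")
s.set_style("LOGIN")
s.set_admin_user("USERNAME", "PASSWORD")
s.set_vserver("NODE_VSERVER")  # 7-mode-style tunneling to the node

start = s.invoke("file-list-directory-iter-start", "path", "/vol/NODE_ROOTVOL/etc")
if start.results_status() == "failed":
    raise RuntimeError(start.results_reason())
tag = start.child_get_string("tag")  # seed the chain with the tag from -start

while True:
    nxt = s.invoke("file-list-directory-iter-next", "tag", tag, "maximum", "1024")
    if nxt.results_status() == "failed":
        raise RuntimeError(nxt.results_reason())
    if int(nxt.child_get_string("records") or 0) == 0:
        break
    files = nxt.child_get("files")
    for f in (files.children_get() if files else []):
        print(f.child_get_string("name"))
    tag = nxt.child_get_string("tag")  # cDOT hands back a fresh tag on every -next

s.invoke("file-list-directory-iter-end", "tag", tag)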

 

Hope this helps,

    Scott

SCOTT_LINDLEY
5,474 Views

The "next" file for a *-iter call only persists for "so long" (don't know how long for sure). I've run into this when the script takes to long to process the items from the previous call. In these cases, I had to hang into all results, then process them after the last '*-next' call is made. This can suck up a bit of memory, but isn't much of an issue if you are on a Linux host. Can't speak much for Windows (don't get me started!), as its memory management isn't as robust or sensible. 'Nuff said...

 

It's also worth noting that with 7-mode, the 'tag' value is the same for all '*-next' calls, but that with cDOT its value changes with each iteration and must be retrieved each time in anticipation of the following '*-next' call.

 

Hope this helps!

 

    Scott

SCOTT_LINDLEY
5,246 Views

The most common reason for this happening is that you waited too long since the previous "*-start" or "*-next" call. If you wait too long, the Filer will discard the intermediate result file so as not to leave it "laying around" in the event that you don't come back for more.

 

Please note that iterating through 20+ million files via the API is going to take a LONG time, possibly all day or more. It's not fast. Up to a point, the more objects you ask for per call, the faster it will go, though if you ask for too many, that will blow up the call as well. I've settled on 1024 for almost all "*-iter" type calls. For this call, you'll also want a batch size small enough that you can process each batch before the ZAPI times out, but not so small that it makes the scan take even longer.
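
For example, asking for 1024 entries per call with apitest would look something like this (placeholder names again; $TAG is the tag returned by the previous call, and 'maximum' is the usual 7-mode iter-next parameter name):

apitest.pl -t filer -v NODE_VSERVER -s CLUSTER 'USERNAME' 'PASSWORD' file-list-directory-iter-next tag "$TAG" maximum 1024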

 

Good luck!

 

    Scott

 

P.S.: Oops - looks like I already responded to this. My bad! Could be worse...
