I am looking for a way, with the NetApp API and/or PowerShell, to do a quick scan of CIFS and NFS volumes. My objective is to work with the results offline.
The aim is to retrieve the list of files with their creation time, modification time, size, etc.,
and then parse the results for statistics, or to identify files to copy.
On a Windows NTFS file system (via the NTFS Master File Table) or Linux EXTx (inodes), it is possible to access and scan this table of pointers in order to quickly browse and retrieve these items without having to actually scan the file system.
This is very useful for a FS with millions of files.
It still requires access from a client (vs API calls to the filer etc...) but you can also look at NetApp's XCP copy tool - https://xcp.netapp.com. It has some options to scan a file system and report on the metadata.
If I get your question right, you want a NetApp API way to check the file metadata (i.e. the inodes), right?
If so, your (7-Mode) API is "file-list-directory-iter-start", and you need to implement a recursive search yourself, if desired:
<?xml version="1.0" encoding="UTF-8"?>
<netapp xmlns="http://www.netapp.com/filer/admin" version="1.19">
  <!-- the path below is only an example; point it at the directory you want to list -->
  <file-list-directory-iter-start>
    <path>/vol/vol0/home</path>
  </file-list-directory-iter-start>
</netapp>
And if you have access to the filesystems with an NFS and/or SMB client, it's faster and smarter to use host OS tools, as others have stated already...
Thanks for your answer, Anton.
Currently we use the Unix find command in a script,
but for millions of files it is not a quick method.
On an NTFS filesystem it is possible to use scanning tools like TreeSize.
These tools scan the MFT (Master File Table) and are very quick,
but they only work with local or SAN filesystems.
I have not tested your solution yet; I am going to try it.
Great information, you're right, I didn't consider that. Looking at those ZAPIs you can also use the following PowerShell cmdlets, e.g.:
Get-Help Read-NaDirectory -full
Get-Help Get-NaFile -full
These could be combined to list all directories and files recursively. However, keep in mind you would potentially be invoking a lot of ZAPI calls depending on the size of the file system, and I'm not sure this would be any quicker than scanning the file system. Either way it might have a performance impact on the system, so you'd want to ensure you monitor that whilst running the script. It would be interesting to gauge the performance of the directory and file listing by using measure-object on the ZAPI method compared to the other methods mentioned here:
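As a rough illustration, combining the two cmdlets into a recursive listing could look something like the sketch below. This assumes you have already connected with Connect-NaController from the Data ONTAP PowerShell Toolkit; the starting path is an example, and the property names on the returned objects (Name, FileType, FileSize, the timestamp fields) are from memory and may differ slightly on your toolkit version, so check them with Get-Member first.

```powershell
# Sketch only: recursively list file metadata via the 7-Mode ZAPI
# (Data ONTAP PowerShell Toolkit). Assumes Connect-NaController has
# already been run; /vol/vol0/home is an example path.
function Get-NaFilesRecursive {
    param([string]$Path)
    foreach ($entry in (Read-NaDirectory -Path $Path)) {
        # Skip the '.' and '..' entries returned for each directory
        if ($entry.Name -eq '.' -or $entry.Name -eq '..') { continue }
        $full = "$Path/$($entry.Name)"
        if ($entry.FileType -eq 'directory') {
            Get-NaFilesRecursive -Path $full
        } else {
            # Emit only the metadata we care about for offline analysis
            [pscustomobject]@{
                Path         = $full
                Size         = $entry.FileSize
                CreationTime = $entry.CreationTimestampDT
                ModifiedTime = $entry.ModifiedTimestampDT
            }
        }
    }
}

# Export the results so they can be parsed offline later
Get-NaFilesRecursive -Path '/vol/vol0/home' |
    Export-Csv -NoTypeInformation files.csv
```

Note that each directory costs at least one ZAPI round trip, which is why the performance caveat above matters on deep trees.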
Sure, it's possible to write a script to recursively enumerate a file system and log the results, assuming you run the script as a user who has administrative access throughout the entire file system - if such a user exists. Often NTFS permission inheritance is removed, and users can remove administrative access where they have been granted full control permissions, so the process you are suggesting is prone to error and your script would need to implement a high degree of error checking to account for access-denied errors. Depending on the number of files in your file system it could take a long time to run. I don't think there is any method available within WAFL to directly/quickly enumerate this information; you will need to perform a file system scan.
However, if your objective is to perform a migration of files to a destination based on file age or date last accessed/modified, then have you considered using robocopy with the following parameters?
/CREATE :: CREATE directory tree and zero-length files only.
/MAXAGE:n :: MAXimum file AGE - exclude files older than n days/date.
/MINAGE:n :: MINimum file AGE - exclude files newer than n days/date.
/MAXLAD:n :: MAXimum Last Access Date - exclude files unused since n.
/MINLAD:n :: MINimum Last Access Date - exclude files used since n.
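For example (the share, destination path and the 180-day cutoff below are made up for illustration), this would report all files under the share that have not been accessed in the last 180 days, without copying anything because of /L:

```
robocopy \\filer\share\users D:\staging /E /L /MINLAD:180 /NP /LOG:oldfiles.log
```

The log file then gives you the candidate list to feed into a real copy run.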
Robocopy is a good way to do it, but I'd recommend using the /L parameter as well, so it only lists the matching files rather than copying them.
Another option I have been investigating is PowerShell, e.g.:
Get-ChildItem -Path H:\usergarbage -Recurse | Select-Object Name,FullName,Length,CreationTime,LastAccessTime,LastWriteTime | Export-Csv test.csv
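Since the original goal was to work with the results offline, the exported CSV can then be parsed separately, e.g. for statistics or to pick the files to copy. A small sketch (test.csv is the file produced above; the 365-day cutoff is an arbitrary example):

```powershell
# Offline analysis of the CSV produced by the Get-ChildItem export above.
# The file name and the one-year cutoff are just examples.
$files = Import-Csv test.csv
$old   = $files | Where-Object { [datetime]$_.LastWriteTime -lt (Get-Date).AddDays(-365) }

# Simple statistics: count and total size of files not modified in the last year
$stats = $old | Measure-Object -Property Length -Sum
"{0} files, {1:N0} bytes not modified in the last year" -f $stats.Count, $stats.Sum

# Or identify the candidate files to copy
$old | Select-Object FullName, Length, LastWriteTime |
    Export-Csv old-files.csv -NoTypeInformation
```

Because everything runs against the CSV, the analysis can be repeated without rescanning the volume.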