If you're just looking for the number of files in a directory, you can use XCP to get a fast count:
xcp scan -l -stats NFS-server:/volume/folder/path
XCP is free and can be found at http://xcp.netapp.com.
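If you only want the count itself rather than the full report, you can parse it out of the stats output. This is a hypothetical sketch that assumes the output format shown below; it runs against a canned sample here so it's self-contained, but in practice you'd pipe the live `xcp scan -l -stats` output through the same awk filter:

```shell
# Hypothetical helper: pull just the regular-file count out of xcp's
# stats output. The sample text mirrors the report shown below; against
# a live system you would pipe the xcp command into awk instead:
#   xcp scan -l -stats NFS-server:/volume/folder/path | awk -F': *' '/^Regular files/ {print $2}'
printf 'Total count: 500,005\nDirectories: 2\nRegular files: 500,003\n' \
  | awk -F': *' '/^Regular files/ {print $2}'
# prints: 500,003
```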
Here's a comparison of XCP's speed vs. find.
XCP:
Total count: 500,005
Directories: 2
Regular files: 500,003
Symbolic links: None
Special files: None
Hard links: None, multilink files: None
Space Saved by Hard links (KB): 0
Sparse data: N/A
Dedupe estimate: N/A
Total space for regular files: size: 50.4 GiB, used: 52.4 GiB
Total space for symlinks: size: 0, used: 0
Total space for directories: size: 48.8 MiB, used: 49.0 MiB
Total space used: 52.4 GiB
Xcp command : xcp scan -l -stats 10.193.67.219:/flexgroup_16/files
Stats : 500,005 scanned
Speed : 90.0 MiB in (1.12 MiB/s), 444 KiB out (5.53 KiB/s)
Total Time : 1m20s.
STATUS : PASSED
find:
# time find /flexgroup/files -type f | wc -l
500003
real 13m57.454s
user 0m1.886s
sys 0m34.219s
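As an aside, if you're counting locally with GNU find, printing one character per file and counting bytes is a common trick that avoids piping every pathname through `wc -l` and miscounting filenames that contain newlines. A sketch on a throwaway directory (assumes GNU find's `-printf`):

```shell
# Count files by emitting one byte per file instead of one line per file.
# Demonstrated on a temporary directory with three known files:
tmpdir=$(mktemp -d)
touch "$tmpdir/a.dat" "$tmpdir/b.dat" "$tmpdir/c.dat"
find "$tmpdir" -type f -printf '.' | wc -c
# prints: 3
rm -rf "$tmpdir"
```

This only changes how the counting is done client-side; the directory still has to be walked over NFS, so it won't close the gap with XCP's parallelized scan.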
XCP scan also gives you file size information, file ages, and more. For example:
== Maximum Values ==
Size Used Depth Namelen Dirsize
4.63 GiB 4.65 GiB 2 15 500,002
== Average Values ==
Namelen Size Depth Dirsize
14 106 KiB 2 250,002
== Top Space Users ==
root
52.4 GiB
== Top File Owners ==
root
500,005
== Top File Extensions ==
.dat .log .iso .out
500,000 1 1 1
== Number of files ==
empty      &lt;8KiB    8-64KiB    64KiB-1MiB    1-10MiB    10-100MiB    >100MiB
    1                             500,000                         1          1
== Space used ==
empty      &lt;8KiB    8-64KiB    64KiB-1MiB    1-10MiB    10-100MiB    >100MiB
                                 47.7 GiB                  42.3 MiB   4.65 GiB
== Directory entries ==
empty    1-10    10-100    100-1K    1K-10K    >10K
            1                                     1
== Depth ==
0-5        6-10    11-15    16-20    21-100    >100
500,005
== Accessed ==
>1 year >1 month 1-31 days 1-24 hrs <1 hour <15 mins future
1 100,596 312,452 86,954
== Modified ==
>1 year >1 month 1-31 days 1-24 hrs <1 hour <15 mins future
1 115,870 312,452 71,680
== Changed ==
>1 year >1 month 1-31 days 1-24 hrs <1 hour <15 mins future
1 115,870 312,452 71,680
And if you use XCP 1.6 or later, you can use File System Analytics, which keeps a running tally of directories, sizes, and file counts.
In a future release, the file analytics will be available natively in System Manager.
Keep in mind that local processing of these operations will always be faster, because there's no network contention to deal with. Any network-based protocol involves a back-and-forth conversation, which adds some amount of latency depending on network health, and processing on the NFS server side adds latency to each request as well. Local operations win simply because there's far less round-trip time involved.
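To put rough numbers on that (purely illustrative arithmetic, not a measurement): if each serialized metadata round trip cost half a millisecond of network latency, the wait time alone would add up quickly across half a million files:

```shell
# Back-of-the-envelope sketch with made-up numbers: 500,000 serialized
# round trips at 0.5 ms of network latency each.
awk 'BEGIN { files = 500000; rtt_ms = 0.5; printf "%.0f seconds of pure round-trip wait\n", files * rtt_ms / 1000 }'
# prints: 250 seconds of pure round-trip wait
```

That's one reason parallelizing requests, as XCP does, helps so much over NFS: the round trips overlap instead of stacking up serially.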
The benefit of NFS is performance when *many* clients need to run operations against the same data. That can't be done locally against the same dataset, and if clients connect to other clients to run processes, the clients will bottleneck much sooner than a storage system will.