Hi folks -- I have an IBM NAS 6210 that I've recently deployed for testing purposes, to use as an NFS server. My current NFS solution is a RHEL 5.7 Linux server attached to a SAN, which presents the SAN-attached file-system as an NFS export.
What I don't understand is this: the throughput rates for sequentially writing and reading a large file (8 GB -- 4x the size of the client's physical memory) are about what I'd expect on the local disk, the old NFS client file-system, and the new (NAS) NFS client file-system.

However, the throughput rates for the random workload (iozone's "-i 8" option) are very surprising. I'd expect them to be slower than the sequential read/write measurements, and on the local disk and the old NFS export they are dramatically slower -- as a generic example, dropping from 100 MB/s to 10 MB/s. But on the NAS, the random workload stays quite close to the sequential workload -- say, only dropping from 100 MB/s to 80 MB/s.

Is there some sort of caching architecture on the NAS that could explain this amazing random performance? I'd be interested in a high-level discussion of what might be going on. Thanks!
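For reference, the runs look roughly like the commands below. The record size, mount point, and the -e/-c flags are illustrative placeholders rather than my exact invocation; the fixed points are the 8 GB file size and the "-i 8" random-mix test ("-i 0" has to be included alongside it so iozone has a test file to work on):

    # sequential write/read pass (record size and test-file path are just examples)
    iozone -i 0 -i 1 -s 8g -r 64k -e -c -f /mnt/nas_export/iozone.tmp

    # random-mix pass; "-i 0" creates the file that "-i 8" then exercises
    iozone -i 0 -i 8 -s 8g -r 64k -e -c -f /mnt/nas_export/iozone.tmp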