Network and Storage Protocols

Benchmark questions -- iozone, NAS 6210, and amazing random workload performance...

STEVEK123

Hi folks -- I have an IBM NAS 6210 that I've recently deployed for testing purposes, to use as an NFS server. My current NFS solution is a RHEL 5.7 Linux server attached to a SAN, presenting the SAN-attached file system as an NFS export.
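For concreteness, the old setup is just a standard kernel NFS export of the SAN-attached file system -- something along these lines (paths and export options here are illustrative, not my exact config):

    # /etc/exports on the RHEL 5.7 server
    /san/vol0    *(rw,sync,no_root_squash)

    # on the test client
    mount -t nfs rhelserver:/san/vol0 /mnt/nfs-old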

What I don't understand is this: the throughput rates for sequentially writing and reading a large file (8 GB -- 4x the size of the client's physical memory) are about what I'd expect on the local disk, the old NFS client file system, and the new (NAS) NFS client file system. However, the throughput rates for the random workload (the "-i 8" iozone option) are very surprising. I'd expect them to be slower than the sequential read/write measurements, and on the local disk and the old NFS file system they are dramatically slower -- as a generic example, dropping from 100 MB/s to 10 MB/s. But on the NAS, the random workload is quite close to the sequential workload -- say, only dropping to 80 MB/s from 100 MB/s. Is there any sort of caching architecture on the NAS that could explain this amazing performance? I'd be interested in a high-level discussion of what might be going on. Thanks!
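For reference, the runs look roughly like this (record size, paths, and mount points are illustrative -- the key points are the 8 GB file size and test 8 for the random mix):

    # sequential write + read of an 8 GB file in 64 KB records;
    # -e/-c fold fsync() and close() time into the reported numbers
    iozone -i 0 -i 1 -s 8g -r 64k -e -c -f /mnt/nas/iozone.tmp

    # same file size, but iozone's random mix of reads and writes (test 8);
    # test 0 runs first so the file exists for the random pass
    iozone -i 0 -i 8 -s 8g -r 64k -e -c -f /mnt/nas/iozone.tmp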

1 REPLY

STEVEK123

On further investigation, I think this is simply a design feature of the NAS. I believe what's happening is that the random writes are going to NVRAM on the device. Is there any way, with Data ONTAP, to monitor or discover the size of this NVRAM, if that's what's happening?
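In case anyone lands here later: this is a 7-Mode Data ONTAP box (the IBM N series units are rebranded NetApp FAS controllers), and my understanding -- please correct me if I'm wrong -- is that the hardware inventory and write-log behavior can be watched from the console, roughly like this (prompt is a placeholder):

    nas6210> sysconfig -a     # hardware inventory, including memory and NVRAM/NVMEM size
    nas6210> sysstat -x 1     # 1-second samples; the CP columns show the NVRAM log
                              # filling and being flushed to disk (consistency points)

If random writes really are being acknowledged out of NVRAM and then destaged as large sequential consistency-point writes, that would explain why the random numbers stay so close to the sequential ones.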
