
NFS 10Gb Performance


Hi All

We have a 2 x FAS3240 HA pair running Data ONTAP 8.1.2 in 7-Mode, both controllers with 10GbE ports, from which we have created a multimode VIF over e1a and e1b. MTU has been set to 9000 on the NetApp, the Cisco 10GbE switches, and the VMware hosts (following the vendor deployment guides). I have presented the NFS mounts to the hosts using VSC and, within VMware, created some IO Analyzer VMs to test IO performance and network throughput. We are testing a single NIC on the NetApp for maximum-throughput testing.
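For anyone verifying the same setup, jumbo frames can be checked end to end along these lines (a sketch for ESXi and 7-Mode; substitute your own interface names and IP addresses):

    # From an ESXi host: 8972-byte payload with the don't-fragment flag
    # (9000 MTU minus 28 bytes of IP/ICMP header); replies prove the
    # whole path passes jumbo frames
    vmkping -d -s 8972 <netapp-nfs-ip>

    # On the 7-Mode controller: confirm MTU 9000 and VIF membership
    ifconfig e1a
    ifgrp status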

Using VMware IO Analyzer

Maximum network bandwidth to the NFS storage, sustained for 10 minutes: 817 MB/s (~6.5 Gb/s)

Using PerfMon, transferring from VM to VM

2 vCPUs and 4 GB memory (CPU @ 88%)

Socket size: 384 KB

Message size: 64 KB

Achieved: 9.56 Gb/s

Note: this test runs between two VMs on the same ESXi host.

Using PerfMon across different VM hosts (transferring via the Nexus switches)

2 vCPUs and 4 GB memory (CPU @ 70%)

Socket size: 384 KB

Message size: 64 KB

Achieved: 9.0 Gb/s

Note: This is from two different hosts using the Nexus switches for transport.

This test attempts to max out a single NIC on the host.
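For anyone without IO Analyzer or PerfMon, an equivalent VM-to-VM test can be run with iperf2 (a sketch; the 384 KB window and 64 KB writes mirror the settings above):

    # Receiver VM
    iperf -s -w 384k

    # Sender VM: 64 KB writes, 384 KB socket buffer, 10-minute run
    iperf -c <receiver-ip> -w 384k -l 64k -t 600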

Am I getting the maximum throughput (~800 MB/s) out of these ports, or should I be getting much higher?

Images attached.

Thank you


krejkrejkrej

I wonder whether you're possibly confused between bytes (B) and bits (b)? The NetApp and Cisco gear use 10 gigabit/s interfaces. Transporting 9 gigabit/s over 10G interfaces is quite good, and you can't expect much more.
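To spell out the arithmetic (decimal units, ignoring protocol overhead):

    817 MB/s x 8 bits/byte = ~6.5 Gb/s of the 10 Gb/s line rate
    10 Gb/s / 8 bits/byte  = 1.25 GB/s raw; with jumbo-frame TCP/IP
                             overhead, usable payload is roughly 1.2 GB/s

So the NFS result is well below the wire limit, while your host-to-host tests are already close to it.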

At more than 800 MB/s over NFS, it may very well be the storage system that limits transfers. Run a "sysstat -x 5" on the 3240 while the tests run, and check CPU and disk utilization.
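Something along these lines on the 7-Mode CLI (a sketch; statit needs advanced privilege):

    sysstat -x 5      # watch the CPU, Net kB/s, Disk kB/s and Disk util columns
    priv set advanced
    statit -b         # begin detailed statistics collection
    # ... run the IO Analyzer test ...
    statit -e         # end collection and print the report, incl. per-disk utilization
    priv set admin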

officeworks

Has anyone done a similar test with 10Gb on Cluster-Mode? I have yet to see a single LIF do more than 5.4 Gb/s, which may make you question the benefit of having more than a single 10Gb link hosting a LIF.
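One thing worth ruling out before blaming the LIF: a single TCP flow often tops out well below line rate. Between two test hosts on the same VLAN (a sketch, assuming iperf2), compare one stream against four; if one stream stalls near 5.4 Gb/s but four streams fill the link, the ceiling is per-flow, not the port:

    iperf -c <peer-ip> -w 384k -t 60         # single stream
    iperf -c <peer-ip> -w 384k -t 60 -P 4    # four parallel streams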
