
nfs performance

ANTON_GLUSHAKOV

Hi all. This is my first post.

We have an HP DL380 G7 with an SFP+ 10Gb adapter (Intel Ethernet Server Adapter X520-DA2) connected directly to a FAS3240.

We created an aggregate with 23 disks, created a volume and exported it over NFS.

After that we installed ESXi 4.1 on the server, mounted the datastore from the FAS, installed Windows 2008 as a guest OS and started storage performance testing.

We ran IOMeter with different block size options and got about 4000-4500 IOPS.

We ran the same test on the server's local disks (4 disks in RAID 10) and got about 8000 IOPS.

We also installed Virtual Storage Console, which checked our configuration and tuned the NFS settings, but the results stayed the same.

We are disappointed. Are these results normal?

Thanks!


radek_kubka

Hi Anton and welcome to the Community!

I would say your performance results are absolutely fine:

- 23 drives in an aggregate means at best 21 'active' spindles (2 go to parity if this is a single RAID-DP group); at roughly 200 random IOPS per disk that gives ~4200 total IOPS expected (quick sketch below)
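
For what it's worth, a back-of-the-envelope version of that estimate in Python (the 200 IOPS/spindle figure is a rule of thumb for 10k/15k RPM drives and the single RAID group is an assumption, not measured data):

    # Rough sanity check of the estimate above (rules of thumb, not measurements)
    disks_in_aggregate = 23
    parity_disks = 2            # assumed single RAID-DP group: one P and one DP disk
    iops_per_spindle = 200      # typical figure for a 10k/15k RPM drive

    data_spindles = disks_in_aggregate - parity_disks
    expected_iops = data_spindles * iops_per_spindle
    print(f"{data_spindles} data spindles x {iops_per_spindle} IOPS/spindle ~= {expected_iops} IOPS")
    # -> 21 data spindles x 200 IOPS/spindle ~= 4200 IOPS, right where the IOMeter result lands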

No idea how you were able to squeeze 8k IOPS out of 2 local spindles (the other two are just a mirror)! Some caching must have kicked in; otherwise it is not technically possible (unless the local disks are SSDs).

Regards,

Radek

MATTBAKER

RAID 10 also has a much smaller I/O penalty than RAID-DP. With RAID-DP you generate roughly 3x the I/O operations you would with RAID 10, because the array has to update two parity blocks in each stripe, whereas RAID 10 has no parity penalty at all. I would recommend setting up a second test on your server using RAID ADG, HP's dual-parity (RAID 6) counterpart to RAID-DP.
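
To put rough numbers on that, a toy comparison using the textbook write-penalty figures (2 back-end ops per host write for RAID 10, 6 for a classic RAID 6 read-modify-write); the spindle count, per-spindle IOPS and penalties are generic assumptions that ignore controller caching entirely, not measurements of either array:

    # Toy write-penalty comparison (textbook figures, all caching ignored)
    spindles = 4                # matches the 4-disk local test
    spindle_iops = 200          # assumed random IOPS per disk

    def frontend_write_iops(spindles, spindle_iops, write_penalty):
        # back-end IOPS available divided by back-end ops per host write
        return spindles * spindle_iops / write_penalty

    print("RAID 10      :", frontend_write_iops(spindles, spindle_iops, 2))  # 400.0
    print("RAID 6 / ADG :", frontend_write_iops(spindles, spindle_iops, 6))  # ~133.3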

Matt

radek_kubka

Matt,

This is not (exactly) how NetApp RAID-DP works. ONTAP will try to cache multiple small writes in NVRAM until a full stripe can be written to multiple disks at once, so the performance penalty is minimised.
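
A toy model of why coalescing into full-stripe writes helps (the 16+2 RAID group size and the 6-operation read-modify-write cost are illustrative assumptions, not how WAFL actually schedules I/O):

    # Back-end disk operations needed to commit a batch of small host writes
    data_disks, parity_disks = 16, 2    # assumed RAID group layout
    host_writes = 16                    # enough small writes to fill one stripe

    # Per-write read-modify-write: read old data + P + Q, write new data + P + Q
    rmw_ops = host_writes * 6

    # One coalesced full-stripe write: write all data blocks plus both parity blocks,
    # no reads of old data or old parity needed
    full_stripe_ops = data_disks + parity_disks

    print("per-write parity updates :", rmw_ops)          # 96 disk ops
    print("one coalesced full stripe:", full_stripe_ops)  # 18 disk ops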

Regards,
Radek

MATTBAKER

It may be minimized, but there's still a large penalty vs. RAID 10. Comparing HP RAID ADG vs. NetApp RAID-DP is a much closer approximation, even though NetApp makes far more efficient use of the cache.

radek_kubka

Define 'large'?

Incidentally, NetApp always uses RAID-DP in their performance benchmarks (whilst most other vendors do indeed use RAID 10), including the ones that can be found here:

http://www.storageperformance.org/results/benchmark_results_spc1

Jeff_Yao

What I want to ask is: did you consider the average operation size?

You quoted IOPS, i.e. I/Os per second, but how big is each I/O? 2k, 4k, 8k?

For NFS through the filer it might be 32k (the default on the VM side)...

But I don't know what size your local server test used; it might be only 1k or even less.

So the total amount of data written to the disks is what we should really compare.
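
In other words, compare the data rate rather than raw IOPS. A rough conversion, using hypothetical block sizes rather than Anton's actual IOMeter settings:

    # IOPS alone says little without the I/O size; compare the data rate instead
    def mb_per_sec(iops, io_size_kb):
        return iops * io_size_kb / 1024

    # Hypothetical combinations, not the measured configurations:
    print("4500 IOPS @ 32k:", round(mb_per_sec(4500, 32), 1), "MB/s")  # ~140.6 MB/s
    print("8000 IOPS @ 4k :", round(mb_per_sec(8000, 4), 1), "MB/s")   # ~31.2 MB/s

The same IOPS figure can therefore correspond to very different amounts of data actually moved.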
