
FAS 2040 disappointing sequential throughput and very high CPU usage


With “only” 220 MB/s sequential throughput measured with Iometer and file copy over 2 x 1 Gbit (3 x 1 Gbit) iSCSI with MPIO (iSCSI, ONTAP DSM, MCS), we are somewhat disappointed with the performance.

(We used 128 KB sequential, 100% read/write, 0% random, 4 workers, and up to 4/8/16 outstanding I/Os.) Small random workloads also performed quite disappointingly, while small sequential workloads (4-8 KB) were very good.
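As a rough sanity check (my arithmetic, not from the original post), the measured throughput can be restated as an I/O rate at the 128 KB request size used in the Iometer runs:

```python
# Rough sanity-check math for the Iometer results above.
# Assumption: 128 KB means binary KB (1024 bytes), as Iometer uses.

def iops_from_throughput(mb_per_s: float, request_kb: float) -> float:
    """IOPS needed to sustain a given MB/s at a fixed request size."""
    return mb_per_s * 1024 / request_kb

# 220 MB/s of 128 KB sequential I/O is only ~1760 IOPS -- easy work for
# 48 spindles, which fits the observation that the disks sat at 20-40%
# busy while the CPU was pegged.
print(iops_from_throughput(220, 128))  # -> 1760.0
```

This is why the bottleneck shows up on the controller CPU rather than on the disks.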

We used a 48-disk 64-bit aggregate (2 RAID groups of 24 disks, 600 GB 10k RPM, in a 2246 shelf), with ONTAP 8.0.2 and Windows 2008 R2 as the testing environment.

We could clearly establish that the main bottleneck is poor CPU performance. Although the load was spread quite well across the two cores of the FAS 2040 CPU, the CPU was always 90-95% and fully busy; the remaining 10% is reserved for the Kahuna domain (RAID, etc.). Checking with sysstat -x 1, sysstat -m 1, etc., and, most usefully, the performance monitoring from DPM, we could see that the individual disks in the plex were never utilized more than ~20-40% (the RAID-DP parity disks a little more).

The most disappointing part is that with VMware ESXi 4.1 or 5.0, when using VM clone, vMotion, etc. (even with VAAI active), the CPU load very quickly goes to 85-90% on all cores, “disturbing” all other NetApp services. We observed all of these problems with NFS as well.

So the point is that a cheap NAS box (a QNAP, for example) does not perform much worse than a 2040A, which costs much more. Likewise, with SAN storage from other manufacturers you get far better performance, mainly for sequential workloads; some achieve 800 MB/s. Yes, we know the 2040 is entry-level, but it still costs >50K in an active/active configuration.

If NetApp used a more powerful CPU, the 2040 could get at least 2-3 times better performance.

What are your experiences with the 2040, or maybe with the 3270?

Re: FAS 2040 disappointing sequential throughput and very high CPU usage

You can get in touch with NetApp Global Support and open a performance case. They will analyze the system for proper configuration and determine whether your performance can be tuned in any way. There are several things, depending on configuration and layout, that could have gone wrong.

Re: FAS 2040 disappointing sequential throughput and very high CPU usage

Very funny if someone expects to get more performance out of 2 GbE channels.

1 Gbit/s is 125 MB/s gross.

2 channels cannot transport more than 250 MB/s,

so 220 MB/s is quite good.
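The arithmetic above can be restated explicitly (nothing here beyond the numbers already quoted in the thread):

```python
# Gross line-rate math for bonded GbE links.
GBIT = 1_000_000_000        # 1 Gbit/s in bits per second
BYTES_PER_MB = 1_000_000    # decimal MB, matching the 125 MB/s figure

per_link_mb_s = GBIT / 8 / BYTES_PER_MB   # 125.0 MB/s gross per link
two_links_mb_s = 2 * per_link_mb_s        # 250.0 MB/s gross ceiling

efficiency = 220 / two_links_mb_s         # measured vs. theoretical
print(per_link_mb_s, two_links_mb_s, round(efficiency, 2))  # 125.0 250.0 0.88
```

At 88% of the gross two-link ceiling, the measured 220 MB/s leaves very little on the table.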

And yes, CPU is often a limiting factor in deployments of NetApp solutions.

Re: FAS 2040 disappointing sequential throughput and very high CPU usage

You can try enabling jumbo frames to reduce CPU usage: fewer packets, less overhead.
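To put rough numbers on “fewer packets, less overhead” (a sketch using the common default MTU values, which are my assumption, not from the post):

```python
# Frames needed to move 1 MB of payload at standard vs. jumbo MTU.
# Assumes typical IPv4 TCP: 40 bytes of IP+TCP headers per frame.

def frames_per_mb(mtu: int, ip_tcp_headers: int = 40) -> int:
    payload_per_frame = mtu - ip_tcp_headers
    total = 1_000_000
    return -(-total // payload_per_frame)   # ceiling division

std = frames_per_mb(1500)    # frames per MB at the standard MTU of 1500
jumbo = frames_per_mb(9000)  # frames per MB at a jumbo MTU of 9000
print(std, jumbo, std / jumbo)  # roughly a 6x reduction in per-packet work
```

Since much of the iSCSI CPU cost is per-packet (interrupts, checksums, protocol processing), a ~6x cut in packet count can noticeably lower controller CPU load.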

Regards

Re: FAS 2040 disappointing sequential throughput and very high CPU usage

Hello

We have a similar problem: a FAS2040 connected to a fabric SAN, with VMware ESXi 4.1.

When we move or clone VMs, the CPU on the FAS is around 90%, and the average copying speed is ~100 MB/s.

We have 4 Gbit/s HBA cards in the ESXi hosts. On the FAS we have 12 x 1 TB SATA and 24 x 300 GB 15k SAS.
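For comparison (my figures, not from the post: 4 Gb FC runs at a 4.25 Gbaud line rate with 8b/10b encoding, giving roughly 400-425 MB/s of usable bandwidth per direction), the ~100 MB/s clone speed is nowhere near the fabric limit, which is consistent with a controller CPU bottleneck rather than a link bottleneck:

```python
# 4GFC: 4.25 Gbaud line rate, 8b/10b encoding -> 80% of raw bits are data.
line_rate_gbaud = 4.25
usable_mb_s = line_rate_gbaud * 1e9 * 0.8 / 8 / 1e6   # ~425 MB/s per direction

measured_mb_s = 100.0
link_utilization = measured_mb_s / usable_mb_s
print(round(usable_mb_s), round(link_utilization, 2))  # -> 425 0.24
```

The clone traffic is using only about a quarter of one 4 Gb FC link while the CPU sits at ~90%.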

This problem occurs on both controllers.

We are running 8.1RC3 7-Mode.

Any ideas how to decrease high cpu usage ?

Regards

Marcin

Re: FAS 2040 disappointing sequential throughput and very high CPU usage

Well, it's no big secret that NetApp systems are not the best storage if all you need is sequential streaming I/O. You can get faster streaming from a cheap RAID array. But I wouldn't recommend running VMs on that, because the few Storage vMotions/clones you'll do will not outweigh the typical day-to-day usage, where random I/O is predominant. NetApp filers excel at random I/O, where nothing else can match their performance.

I have to agree with stemmer here: 220 MB/s through 2 links is pretty good (110 MB/s is the practical cap through one GbE link, not 125).
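The gap between 125 MB/s gross and the practical ~110 MB/s can be roughly derived by subtracting protocol overhead (a sketch with standard Ethernet and TCP/IP header sizes; the iSCSI PDU overhead is only approximated):

```python
# Why ~110-118 MB/s, not 125 MB/s, is the realistic per-GbE-link cap.
# Standard Ethernet: 1500-byte MTU plus 38 bytes of on-wire framing
# (14 header + 4 FCS + 8 preamble + 12 inter-frame gap); 40 bytes IP+TCP.

WIRE_PER_FRAME = 1500 + 38
TCP_PAYLOAD = 1500 - 40

efficiency = TCP_PAYLOAD / WIRE_PER_FRAME   # ~0.949 payload efficiency
gbe_gross_mb_s = 125.0                      # 1 Gbit/s in decimal MB/s
tcp_cap = gbe_gross_mb_s * efficiency       # ~118.7 MB/s of TCP payload
print(round(tcp_cap, 1))  # iSCSI PDU headers and ACK traffic eat the rest,
                          # landing real transfers around 110 MB/s per link
```

So 220 MB/s across two links is essentially wire speed once protocol overhead is accounted for.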

-Michael

Re: FAS 2040 disappointing sequential throughput and very high CPU usage

This sounds like a support-related question. If you have an active NetApp Support login, there are subject matter experts in the NetApp Support Community who may help answer your questions.

Regards,

Christine