2011-02-24 03:36 AM
Thanks for your input, but why is it then that if I set up a normal server with Windows 2008 (a physical one), connect from another server via \\servername\c$, and read or write a file, I get 100-120 MB/s consistently? I often get that question from my customers, and I can't really explain that behavior.
Do you have a valid explanation for that? I would appreciate it - maybe I would understand this better.
2011-02-24 04:17 AM
This might be a matter of the TCP window size or packet size of your LAN connection, or of the NIC itself. Test your network connection with iperf using smaller and larger window sizes (parameters -w4K and -w128K). With 4K you get ~300 Mbps, while with 128K you get almost the full bandwidth. Moreover, non-server (e.g. laptop) 1 Gbps NICs usually can't do better than 400 Mbps. BTW, it's interesting that you achieve 100 MB/s with Windows 2008 - we can't get that much with 2003.
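To see why the window size caps throughput: TCP can push at most one window of data per round trip, so throughput is bounded by window / RTT. A minimal sketch of that arithmetic - the 0.1 ms LAN round-trip time is an assumed figure, and the hostname in the iperf comments is a placeholder:

```shell
# Hypothetical iperf run ("fileserver" is a placeholder hostname):
#   on the server:  iperf -s -w 128K
#   on the client:  iperf -c fileserver -w 4K    (then repeat with -w 128K)
#
# TCP throughput ceiling is roughly window_size / RTT. Assuming a
# 100 us (0.1 ms) LAN round-trip time:
rtt_us=100
for win in 4096 131072; do
    # bytes * 8 bits / microseconds conveniently comes out in Mbit/s
    echo "$win-byte window -> $(( win * 8 / rtt_us )) Mbit/s ceiling"
done
# The 4 KB ceiling (~327 Mbit/s) matches the ~300 Mbps observation;
# the 128 KB ceiling exceeds 1 Gbps, so the wire itself becomes the limit.
```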
2011-02-24 04:54 AM
I'm glad you're starting to understand the NetApp systems.
2011-02-24 04:32 PM
A couple of things. We've been playing around with the 6280s for 3 or 4 months, and I have to say we've seen pretty spectacular NFS performance with them (peak 2.5-3 GB/s off disk, much higher off cache) and about 1 GB/s straight onto disk (144 spindles).
Firstly, sysstat -M 1 doesn't really work anymore; I'm assuming it's because a lot of the subsystems now reside in BSD - things like networking, RAID and WAFL (I believe, don't quote me on that).
Effectively, with the way NVRAM is deployed, you'll only get the full benefit if you're running a single system. I've been told that with clustering you get 2 GB per head because of the mirroring, which will stay the same with the upgrade; you'll have 4 GB to play with on a single node.
With 1 TB of PAM in there, access to another 48 GB of memory probably isn't going to make a major difference to throughput.
A statit will give you much better and more accurate information about how the cores are loaded up.
We personally haven't been able to break it (other than a few software bugs we found in testing); we find the performance is pretty linear even with a couple of thousand workstations hammering a single node (deliberately trying to kill it). Obviously the total single-node throughput decreases, but the response times were very good.
A perfstat and/or a statit would help.
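For reference, a typical way to capture a statit sample on the Data ONTAP console (7-mode; statit lives in the advanced command set, so this is a sketch rather than an exact transcript):

```shell
priv set advanced     # statit requires advanced privilege
statit -b             # begin collecting CPU/disk statistics
# ... run the workload for 30-60 seconds ...
statit -e             # end collection and print the report
sysstat -x 1          # 1-second view of ops, throughput and CP activity
```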
2011-02-24 04:52 PM
Re the TOE thing, I heard there were driver dramas - again, it's all just hearsay. We've been using TOE on our 6080s for quite a while; for us it was the difference between bottoming out at about 400 MB/s and sustained 700 MB/s with line-speed peaks. The biggest pain with TOE was the lack of trunk support; we used round-robin DNS to work around it, which wasn't too bad.
We're still trying to see what's going on with 8.0.1 and stateless offload. It seems interesting, but we can't really measure the improvement. To be honest we haven't tried, but we probably should one day.
2011-02-25 02:52 AM
Well, the dd command is pretty common:
dd if=/dev/zero of=/tmp/testfile bs=1M count=2000
which creates a 2 GB file.
The NFS mount is from an ESX host, so basically the default one, but we also ran the same test on the following NFS mount (on a physical Linux server):
netapp:/vol/nfs1 /vhosts nfs defaults,noatime,hard,intr 0 0
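For a quick write-then-read check, a smaller self-contained variant of the same dd test can look like this (the /tmp path and the 16 MB size are arbitrary choices for illustration; on an NFS mount you would point of= at the mount, and on the read side you'd want to drop the page cache first so the read actually hits the filer):

```shell
# Write a test file, then read it back. Sizes are deliberately small
# so the sketch runs anywhere; scale bs/count up for a real benchmark.
dd if=/dev/zero of=/tmp/ddtest bs=1M count=16 2>/dev/null
# On Linux, flush the cache before the read, or it is served from RAM:
#   sync; echo 3 > /proc/sys/vm/drop_caches   (needs root)
dd if=/tmp/ddtest of=/dev/null bs=1M 2>/dev/null
bytes=$(wc -c < /tmp/ddtest)
echo "read back $bytes bytes"
rm -f /tmp/ddtest
```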
2011-02-25 12:01 PM
Hi Marek, I've launched 17 streams of dd from 17 different servers against the 6280, and I've reached 408 MB/s writing and 621 MB/s reading. I have to say that our FAS is not optimized for throughput, just for random I/O, and doesn't have many disk loops (the two controllers are installed in different places, so we need to reduce fiber patches). This caused a "B" in the sysstat CP field.
Probably with a higher number of loops, performance could be increased.
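A multi-stream run like the 17-server test above can be sketched as follows; here the writers run locally against /tmp just to show the shape (in the real test each dd ran on a separate server against the same NFS export, with far larger counts):

```shell
# Launch several dd writers in parallel and wait for all of them.
for i in 1 2 3 4; do
    dd if=/dev/zero of=/tmp/stream$i bs=1M count=8 2>/dev/null &
done
wait
total=$(cat /tmp/stream1 /tmp/stream2 /tmp/stream3 /tmp/stream4 | wc -c)
echo "wrote $total bytes across 4 parallel streams"
rm -f /tmp/stream1 /tmp/stream2 /tmp/stream3 /tmp/stream4
```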