Network and Storage Protocols
Is an average of 54.6 MB/s a normal speed for Gigabit Ethernet? (My setup is just FAS2020 -> Enterasys C3 switch (Gigabit Ethernet) -> Linux host (Broadcom Gigabit Ethernet).)
When I used dd to force writes to the FAS2020, the processor went to 100% usage. Is this a bottleneck, or is that normal?
iSCSI
Same setup as above.
[root@oraclesrv mnt]# dd if=/dev/zero of=/dev/sda bs=5M count=1000 conv=notrunc
1000+0 records in
1000+0 records out
5242880000 bytes (5.2 GB) copied, 95.9385 seconds, 54.6 MB/s
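(One caveat worth noting: a plain dd from /dev/zero to the block device still goes through the Linux page cache, so the reported MB/s can mix cached and on-the-wire throughput. A rough sketch of a variant using direct I/O gives a cleaner number; like the test above, it overwrites whatever is on /dev/sda:)
[root@oraclesrv mnt]# dd if=/dev/zero of=/dev/sda bs=5M count=1000 oflag=direct   # write test, bypassing the page cache
[root@oraclesrv mnt]# dd if=/dev/sda of=/dev/null bs=5M count=1000 iflag=direct   # read test against the same LUN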
filer2> sysstat -i -s 2
CPU NFS CIFS iSCSI Net kB/s Disk kB/s iSCSI kB/s Cache
in out read write in out age
4% 8 0 0 227 24 22 0 0 0 10
1% 19 0 0 5 82 576 16 0 0 10
4% 0 0 0 2 4 12 12 0 0 10
74% 316 0 82 43072 1042 1130 39734 39213 0 7
94% 0 0 117 61546 1371 666 74028 58655 0 5
99% 124 0 104 56693 1362 728 65542 53117 0 5
99% 0 0 112 62428 1400 744 74212 59507 0 5
99% 0 0 112 64355 1435 768 79680 61213 0 5
96% 9 0 112 62747 1419 1016 71034 59441 0 5
99% 0 0 110 58443 1324 414 64812 55640 0 5
99% 0 0 122 64568 1439 1002 77548 61506 0 5
95% 10 0 95 55751 1275 1386 67932 52691 0 5
100% 0 0 104 52392 1176 782 59652 49807 0 5
Hi umonteiro
I may have the same issue as you, but I would love to get 55 MB/s; I only get 15-20 MB/s.
My config is a FAS2050 (ONTAP 7.2.6.1) with 2 controllers and 2 aggregates: aggregate 1 has only 3 disks, while aggregate 2 has 16 disks plus 1 spare.
All my production is on aggregate 2, with only 3 ESX servers connected to it.
Aggregate 2 has 11 volumes; as you will see, only 3 of them are online, and they are NFS volumes.
My NAS/SAN switch is a Dell 2724 with 1 Gb ports.
I believe I have a performance issue. I'm evaluating a backup solution named Veeam Backup 4.0, and the performance I get is low, 15-20 MB/s.
db01      | offline,raid_dp    | aggr0 | - |      - |   - |      - |      - |      -
dstore01  | online,raid_dp,sis | aggr0 | - | 276 GB | 51% | 560 GB |    716 | 24.2 m
dstore02  | online,raid_dp,sis | aggr0 | - | 492 GB | 12% | 560 GB |    376 | 24.2 m
exchdb    | offline,raid_dp    | aggr0 | - |      - |   - |      - |      - |      -
exchlog   | offline,raid_dp    | aggr0 | - |      - |   - |      - |      - |      -
fs01      | online,raid_dp,sis | aggr0 | - | 180 GB | 64% | 500 GB |    139 |   19 m
sqldb01   | offline,raid_dp    | aggr0 | - |      - |   - |      - |      - |      -
sqldb02   | offline,raid_dp    | aggr0 | - |      - |   - |      - |      - |      -
sqllogs01 | offline,raid_dp    | aggr0 | - |      - |   - |      - |      - |      -
sqllogs02 | offline,raid_dp    | aggr0 | - |      - |   - |      - |      - |      -
vol0      | online,raid_dp     | aggr0 | - | 172 GB |  0% | 172 GB | 5.11 k | 7.45 m
I am also wondering whether I should get rid of the iSCSI volume I created a long time ago and then expand my NFS volume; would that help my performance?
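(For what it's worth, a rough sketch of the 7-Mode commands involved, assuming purely as an example that the old iSCSI volume is db01 and the NFS volume to grow is dstore01; the aggregate needs enough free space for the grow:)
sto-fas02> vol offline db01          # take the old iSCSI volume offline, if it is not already
sto-fas02> vol destroy db01          # destroy it and return its space to aggr0
sto-fas02> df -A aggr0               # check the free space now available in the aggregate
sto-fas02> vol size dstore01 +200g   # grow the NFS datastore volume, here by 200 GB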
During a backup I see the following with sysstat. It looks pretty odd to me; throughput ramps up slowly, like a diesel engine:
sto-fas02*> sysstat
CPU NFS CIFS HTTP Net kB/s Disk kB/s Tape kB/s Cache
in out read write read write age
3% 69 0 0 240 356 666 402 0 0 2
3% 73 0 0 278 116 607 827 0 0 2
2% 62 0 0 325 100 374 414 0 0 3
6% 437 0 0 2066 492 713 773 0 0 3
6% 185 0 0 814 74 514 2470 0 0 3
3% 117 0 0 458 38 282 491 0 0 3
5% 311 0 1 277 156 430 814 0 0 4
6% 354 0 0 550 1456 1681 595 0 0 4
19% 586 0 0 513 17383 18385 876 0 0 4
18% 846 0 0 747 14889 12753 405 0 0 4
15% 517 0 0 512 13114 13486 672 0 0 5
24% 864 0 0 712 20726 21227 1203 0 0 5
24% 695 0 0 1238 27682 12851 1314 0 0 5
31% 681 0 0 639 42056 7107 797 0 0 21s
18% 414 0 0 458 20463 20154 440 0 0 33s
18% 415 0 0 395 19792 19593 771 0 0 45s
18% 409 0 0 427 21262 20330 357 0 0 57s
17% 405 0 0 352 20404 19746 358 0 0 1
19% 561 0 0 475 20125 20248 639 0 0 1
18% 572 0 0 620 22212 20805 576 0 0 1
16% 365 0 0 426 21039 20251 813 0 0 1
12% 281 0 0 286 16311 15486 345 0 0 2
12% 279 0 0 323 15853 15336 650 0 0 2
16% 376 0 0 403 21498 20395 366 0 0 2
13% 300 0 0 355 16603 16013 664 0 0 2
12% 292 0 0 276 16612 15895 311 0 0 2
19% 455 0 0 515 25375 22111 444 0 0 3
32% 734 0 0 784 44725 42458 714 0 0 2
31% 742 0 0 702 45766 43340 602 0 0 2
31% 738 0 0 644 45724 43410 710 0 0 4
32% 775 0 0 746 46959 44521 398 0 0 4
32% 758 0 0 724 46655 44337 843 0 0 3
30% 734 0 0 664 45463 42988 332 0 0 3
30% 695 0 0 582 43532 41256 614 0 0 3
30% 696 0 0 703 42801 40308 338 0 0 3
15% 383 0 0 847 14154 14238 778 0 0 5
3% 209 0 0 334 146 467 1044 0 0 5
2% 41 0 0 173 63 379 430 0 0 5
6% 524 0 0 541 1011 1278 837 0 0 3
What is going on?
What can I do to troubleshoot this and, of course, increase my NAS performance?
Thanks
Backup jobs tend to do a lot of sequential reads, so I'd recommend running the reallocate command to a) check the level of volume fragmentation, and b) do the defrag if required.
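A minimal sketch of what that looks like in 7-Mode, assuming dstore01 as the example volume (check the reallocate man pages on NOW for the exact options in your ONTAP release):
sto-fas02> reallocate on                       # enable reallocation jobs on the filer
sto-fas02> reallocate measure /vol/dstore01    # start a measure-only job to rate the current layout
sto-fas02> reallocate status /vol/dstore01     # read back the optimization rating once it completes
sto-fas02> reallocate start /vol/dstore01      # if the rating is poor, run the actual reallocation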
Regards,
Radek
The CPU is incredibly busy, which makes me ask this question: are jumbo frames enabled end to end?
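As a rough sketch, this is one way to verify the MTU on both ends (interface names and the 9000-byte value are just examples; every device in the path, switch ports included, has to carry the larger frames):
# On a Linux client:
ip link show eth0 | grep mtu          # or: ifconfig eth0
ping -M do -s 8972 <filer-ip>         # 8972 bytes of payload + 28 bytes of headers = 9000; must not fragment
# On the filer (7-Mode):
sto-fas02> ifconfig e0a               # look at the mtusize value
sto-fas02> ifconfig e0a mtusize 9000  # set jumbo frames (add it to /etc/rc to make it persistent)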
Here are some previous discussions around this topic:
http://communities.netapp.com/thread/6052
http://communities.netapp.com/thread/6000
Regards,
Radek
Radek,
a) check the level of volume fragmentation and defrag
Are you speaking about the NetApp volume? If so, can you tell me how to check and defragment the volume? Can it be done during work hours?
a) check the level of volume fragmentation and defrag
Are you speaking about the NetApp volume?
Yes, exactly.
Have you got access to SE communities? If so, check this out:
http://communities.netapp.com/thread/4431?tstart=0
Also some interesting reading here:
http://communities.netapp.com/message/12535#12535
And obviously there is further reading on the NOW site:
Regards,
Radek