ONTAP Discussions
Hi,
I need some advice. We have a new NetApp FAS2552 with Data ONTAP 8.3.1P1 which has two LUNs, one on SAS and the second on SSD disks, both on RAID-DP (RAID 6-like) aggregates.
Both LUNs are connected via multipath to a pair of XenServer 6.5 hosts over 8 Gbps FC.
Everything is working great except the performance of the filesystems on the SSD LUN. I tested the performance of the virtual filesystems from both Linux and Windows and the results were almost the same, so I guess the issue is not related to the VM.
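(For reference, path health on each XenServer host can be verified from dom0 with multipath -ll, which should list both FC paths to each LUN as active.)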
Google is full of claims that anything other than RAID 1 or RAID 10 on SSD is a performance disaster, but Data ONTAP only lets me create RAID 4 and RAID-DP (RAID 6-like) aggregates.
Do you have any advice on what I missed? Thanks a lot.
Solved! See the solution below.
How big is your SSD Aggr?
Hi @JGPSHNTAP
the SSD aggr is built from 7x 400GB SSDs in one RAID-DP group, plus 1x spare. All the space is dedicated to XenServer.
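(If it helps, the layout can be double-checked from the cluster shell with something like storage aggregate show -fields raidtype,raidsize,diskcount; a sketch from memory, but the field names should match clustered Data ONTAP 8.3.)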
Hi @asulliva
OK, I'll describe it in detail. I moved our huge DB and Solr apps to SSD and we didn't notice any dramatic improvement, so I went on to test SAS vs. SSD directly.
We use CentOS 6.7 as a VM on XenServer 6.5. I attached one 20GB filesystem from the SAS LUN and a second 20GB filesystem from the SSD LUN. First of all I dropped all caches on the VM with:
sync; echo 3 > /proc/sys/vm/drop_caches
I tested both filesystems with iozone using the O_DIRECT feature. I tried different file sizes, but I will paste the statistics for a 1GB file across different record sizes.
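In short, each run boiled down to the following (the exact command lines are also echoed in the iozone headers below):

sync; echo 3 > /proc/sys/vm/drop_caches
/opt/iozone/bin/iozone -a -I -s 1g -f /mnt/sas/test
/opt/iozone/bin/iozone -a -I -s 1g -f /mnt/ssd/test

Here -a is auto mode (it sweeps the record size), -I enables O_DIRECT, and -s 1g sets the 1GB test file.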
Results for the Xen FS mounted as /mnt/sas, created on the LUN from the SAS aggr:
Iozone: Performance Test of File I/O
        Version $Revision: 3.434 $
        Compiled for 64 bit mode.
        Build: linux-AMD64

Run began: Thu Jan 28 10:54:20 2016

        Auto Mode
        O_DIRECT feature enabled
        File size set to 1048576 kB
        Command line used: /opt/iozone/bin/iozone -a -I -s 1g -f /mnt/sas/test
        Output is in kBytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 kBytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.

                                                              random   random     bkwd   record   stride
      kB  reclen    write  rewrite     read   reread     read    write     read  rewrite     read   fwrite frewrite    fread  freread
 1048576       4     2407     4524    11770    11247     1602     4528    11187     4637    11335   824989  1019893  2097269  2108950
 1048576       8     4808     8333    21866    21902     1970     8387    19980     8477    20423  1172515  1478819  3058757  3066505
 1048576      16     8794    14918    37052    35695     4193    15944    39035    17266    34981  1447991  1887540  3914044  3914904
 1048576      32    16438    27473    64588    62208     8524    28241    67426    30727    69550  1643718  2308494  4630722  4677574
 1048576      64    28821    44122    90370    99425    14530    46141   115955    52210   112972  1795544  2434902  5072888  5110288
 1048576     128    49998    68656   127044   132931    25200    66784   137641    86565   127569  1867787  2500110  4890767  4895084
 1048576     256    59295    95655   173110   171419    35374    87807   198956   117810   188044  1964890  2719159  4957199  5036557
 1048576     512    95510   102086   252159   269248    53485   129259   225975   179361   354612  1939308  2636070  5082203  5035882
 1048576    1024   158335   150673   229718   297767    86007   148578   290356   225694   353143  1628236  2116098  5114627  5188426
 1048576    2048   157959   162465   322907   333238   124457   159807   188848   230751   421981  1492250  1846718  4934241  5028564
 1048576    4096   165384   172769   351697   385456   216458   178818   217808   268873   312890  1367384  1768435  4740158  4742244
 1048576    8192   167972   186220   394493   359762   261051   189302   250701   274369   349293  1306122  1557074  3454252  3441125
 1048576   16384   163538   188526   367646   414351   319475   194663   270279   317466   378682  1288690  1527761  3208824  3250348

iozone test complete.
Results for the Xen FS mounted as /mnt/ssd, created on the LUN from the SSD aggr:
Iozone: Performance Test of File I/O
        Version $Revision: 3.434 $
        Compiled for 64 bit mode.
        Build: linux-AMD64

Run began: Thu Jan 28 13:33:25 2016

        Auto Mode
        O_DIRECT feature enabled
        File size set to 1048576 kB
        Command line used: /opt/iozone/bin/iozone -a -I -s 1g -f /mnt/ssd/test
        Output is in kBytes/sec
        Time Resolution = 0.000001 seconds.
        Processor cache size set to 1024 kBytes.
        Processor cache line size set to 32 bytes.
        File stride size set to 17 * record size.

                                                              random   random     bkwd   record   stride
      kB  reclen    write  rewrite     read   reread     read    write     read  rewrite     read   fwrite frewrite    fread  freread
 1048576       4     2631     4498    11065    12442     6420     4564    12368     5107    12343   814619   988754  2075438  2074979
 1048576       8     5523     9443    22041    21073     9591     8604    19590     9367    21633  1164260  1485054  3068433  3054464
 1048576      16     9472    16488    39491    41895    18754    16578    43030    17681    39783  1426672  1928628  3863494  3884751
 1048576      32    18086    28772    68651    73725    33310    30043    76374    33323    69213  1632403  2170232  4489943  4550418
 1048576      64    30982    48023   113187   118352    53527    48088   115848    55982   115540  1801001  2455337  4914541  4958038
 1048576     128    53914    70535   123458   135770    56458    66182   144178    85769   133518  1905804  2579421  4993718  4991077
 1048576     256    74765    93640   170640   182817    78229    91154   187381   119750   194858  1954551  2654528  4990765  4928259
 1048576     512   114930   118205   308444   324404   110998   125608   260175   180883   339721  1959691  2615960  4889218  4985860
 1048576    1024   146171   144075   334040   364039   151489   153638   225434   238507   470527  1645504  2123944  5090049  5207390
 1048576    2048   141683   160714   407527   476160   250148   169753   280901   299889   578804  1452419  1806565  4974530  5005921
 1048576    4096   156249   172249   446444   497289   350113   183895   391211   267520   553456  1380032  1717465  4747347  4733317
 1048576    8192   139301   177699   534872   497975   429912   162182   427456   277298   500148  1280176  1611310  3511535  3484741
 1048576   16384   149426   186364   523130   518921   458423   170016   463315   215153   499715  1299448  1640304  3274978  3256131

iozone test complete.
And yes, there are no QoS policies.
These are the perf statistics from the FAS2552 during my iozone tests:
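For context, these are one-second samples from the node; the columns below correspond to the extended sysstat view, i.e. something like this from the nodeshell (the exact flags are from memory):

sysstat -x 1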
SAS:
Write:

 CPU   NFS  CIFS  HTTP  Total   Net kB/s   HDD kB/s    SSD kB/s   Tape kB/s  Cache Cache  CP   CP  HDD  SSD OTHER  FCP iSCSI   FCP kB/s  iSCSI kB/s
                                 in   out  read write  read write  read write  age   hit time   ty util util                   in   out    in   out
47% 3748 0 0 6037 6775 4256 1530 0 0 0 0 0 9 100% 0% - 3% 0% 421 1868 0 721 112025 0 0
37% 233 0 0 2306 8049 3555 10713 19485 0 0 0 0 9 100% 43% Tf 6% 0% 3 2070 0 2192 118387 0 0
42% 223 0 0 1779 7624 4776 315 48495 0 0 0 0 9 100% 100% :f 9% 0% 16 1540 0 162497 32699 0 0
80% 226 0 0 1350 4797 3024 4282 152687 0 0 0 0 9 100% 98% Fn 30% 0% 3 1121 0 200273 31 0 0
83% 410 0 0 2004 14560 9097 3767 217687 0 0 0 0 9 100% 100% :v 40% 0% 187 1407 0 207806 79 0 0
83% 237 0 0 1284 7265 7143 2833 231896 0 0 0 0 9 100% 100% Bs 42% 0% 9 1038 0 186358 26 0 0
81% 115 0 0 1314 6083 4617 8577 265737 0 0 0 0 9 100% 100% Bn 49% 0% 27 1172 0 189370 42 0 0
82% 86 0 0 1820 7985 5975 3690 178050 0 0 0 0 9 100% 98% : 35% 0% 416 1318 0 170606 67 0 0
85% 175 0 0 1479 8616 5866 4670 166644 0 0 0 0 9 100% 81% Fs 29% 0% 225 1079 0 192430 28 0 0
80% 103 0 0 1484 7257 6307 5883 266931 0 0 0 0 9 100% 95% Fn 50% 0% 6 1375 0 199028 85 0 0
80% 136 0 0 1436 11509 6769 5085 235628 0 0 0 0 9 100% 100% :f 42% 0% 0 1300 0 219511 168 0 0
87% 202 0 0 1282 8967 8620 3722 267325 0 0 0 0 9 100% 98% Fs 47% 0% 25 1055 0 181347 30 0 0
78% 141 0 0 1957 7024 5457 5320 192260 0 0 0 0 9 100% 92% Fn 37% 0% 8 1808 0 200654 111 0 0
55% 134 0 0 580 8763 4802 4234 216544 0 0 0 0 9 100% 100% :v 41% 0% 6 440 0 68804 12 0 0
27% 156 0 0 1475 8363 6353 880 24 0 0 0 0 8s 99% 1% : 4% 0% 0 1319 0 62081 89 0 0
27% 174 0 0 1161 7818 7051 28 0 0 0 0 0 8s 98% 0% - 1% 0% 1 986 0 62071 25 0 0
39% 322 0 0 1150 7788 6832 5239 33593 0 0 0 0 10 99% 23% Fn 9% 0% 19 809 0 47708 42 0 0
39% 408 0 0 1343 6613 6550 3968 106272 0 0 0 0 10 99% 100% :f 20% 0% 10 925 0 49993 34 0 0
30% 274 0 0 1168 7972 5482 1604 94928 0 0 0 0 10 98% 100% :f 17% 0% 0 894 0 53900 22 0 0
35% 321 0 0 1777 13812 6242 1616 93044 0 0 0 0 0s 98% 100% : 21% 0% 190 1266 0 58353 100 0 0

Read:

 CPU   NFS  CIFS  HTTP  Total   Net kB/s   HDD kB/s    SSD kB/s   Tape kB/s  Cache Cache  CP   CP  HDD  SSD OTHER  FCP iSCSI   FCP kB/s  iSCSI kB/s
                                 in   out  read write  read write  read write  age   hit time   ty util util                   in   out    in   out
62% 176 0 0 683 6216 6101 8980 111696 0 0 0 0 0s 99% 98% Ff 38% 0% 16 491 0 58606 47 0 0
35% 165 0 0 835 7451 6249 5476 102152 0 0 0 0 0s 100% 100% :f 22% 0% 17 653 0 75593 21 0 0
41% 172 0 0 1298 6037 5980 6536 102144 0 0 0 0 0s 100% 100% :f 20% 0% 0 1126 0 88117 77 0 0
32% 166 0 0 804 6719 5228 6935 26130 0 0 0 0 0s 100% 31% Fn 10% 0% 2 636 0 76174 15 0 0
62% 197 0 0 1042 6831 4903 8219 101157 0 0 0 0 9 99% 100% :f 28% 0% 0 845 0 60438 90 0 0
38% 353 0 0 988 10004 11262 5112 97280 0 0 0 0 9 100% 100% :f 19% 0% 0 635 0 76632 17 0 0
35% 166 0 0 865 5134 5823 6170 97215 0 0 0 0 9 100% 100% :f 23% 0% 15 684 0 81042 23 0 0
33% 126 0 0 1015 5457 5378 5083 35690 0 0 0 0 0s 100% 42% F0 12% 0% 0 889 0 76976 51 0 0
61% 207 0 0 691 5564 5714 10151 100946 0 0 0 0 9 99% 100% :f 22% 0% 0 484 0 55476 16 0 0
40% 254 0 0 1553 4469 4423 7372 92440 0 0 0 0 9 99% 100% :f 21% 0% 191 1108 0 82704 83 0 0
31% 203 0 0 1276 5732 5530 98792 111872 0 0 0 0 0s 100% 100% :f 27% 0% 0 1073 0 13735 103953 0 0
27% 191 0 0 1272 8825 8657 124240 33664 0 0 0 0 0s 100% 39% : 18% 0% 15 1066 0 293 126289 0 0
25% 241 0 0 1595 7257 5907 123184 24 0 0 0 0 0s 100% 0% - 8% 0% 0 1354 0 1992 125350 0 0
39% 1785 0 0 2827 8669 6782 118617 0 0 0 0 0 0s 100% 0% - 10% 0% 0 1042 0 2355 121188 0 0
37% 1452 0 0 3273 9170 10465 124776 0 0 0 0 0 0s 100% 0% - 15% 0% 9 1812 0 14839 124659 0 0
35% 3738 0 0 4951 5580 5724 132680 24 0 0 0 0 0s 100% 0% - 10% 0% 79 1134 0 295 135268 0 0
31% 2564 0 0 3718 7515 6129 132616 8 0 0 0 0 0s 100% 0% - 8% 0% 16 1138 0 407 135596 0 0
21% 125 0 0 1451 6044 6250 52376 0 0 0 0 0 0s 100% 0% - 4% 0% 0 1326 0 962 136017 0 0
27% 1922 0 0 3187 6475 6293 79280 24 0 0 0 0 0s 100% 0% - 8% 0% 0 1265 0 880 150734 0 0
40% 2544 0 0 4390 9231 6985 138880 0 0 0 0 0 0s 100% 0% - 14% 0% 244 1602 0 20385 141347 0 0
34% 1833 0 0 3646 6005 6013 146948 0 0 0 0 0 0s 100% 0% - 10% 0% 535 1278 0 384 150474 0 0
55% 2728 0 0 3773 10638 7492 129697 29174 0 0 0 0 0s 100% 59% Tf 18% 0% 0 1045 0 664 123045 0 0
33% 360 0 0 1705 9206 7494 133492 58368 0 0 0 0 0s 100% 100% :f 19% 0% 1 1344 0 1674 136263 0 0
36% 359 0 0 1914 12162 9656 124752 53504 0 0 0 0 0s 99% 100% :f 21% 0% 463 1092 0 2190 126027 0 0
38% 252 0 0 2725 7131 9339 131296 50208 0 0 0 0 1s 99% 100% :f 30% 0% 1144 1329 0 4662 131385 0 0
47% 125 0 0 5808 5930 5845 98350 41678 0 0 0 0 1s 100% 81% : 14% 0% 4504 1179 0 271 140109 0 0
15% 443 0 0 956 4751 5373 37752 8 0 0 0 0 1s 98% 0% - 8% 0% 1 512 0 381 58198 0 0
SSD:
Write:

 CPU   NFS  CIFS  HTTP  Total   Net kB/s   HDD kB/s    SSD kB/s   Tape kB/s  Cache Cache  CP   CP  HDD  SSD OTHER  FCP iSCSI   FCP kB/s  iSCSI kB/s
                                 in   out  read write  read write  read write  age   hit time   ty util util                   in   out    in   out
28% 859 0 0 2568 4116 410 488 0 70688 0 0 0 0s 100% 0% - 7% 1% 0 1709 0 512 191005 0 0
25% 783 0 0 2604 3749 344 2526 32 62833 16 0 0 0s 99% 6% Ts 5% 0% 66 1755 0 422 190958 0 0
56% 707 0 0 2596 3231 250 6506 57857 56669 8697 0 0 13 99% 100% :f 17% 0% 0 1889 0 240290 48244 0 0
84% 417 0 0 1688 2226 166 1592 3654 85369 214066 0 0 13 99% 93% Fs 7% 3% 0 1271 0 174707 30179 0 0
79% 336 0 0 1797 1772 142 2591 5256 91402 262846 0 0 13 100% 87% Fn 8% 3% 17 1444 0 202384 30323 0 0
88% 316 0 0 1431 1743 198 59 0 99925 389161 0 0 13 100% 100% :s 1% 5% 0 1115 0 154381 23787 0 0
79% 293 0 0 1540 1528 97 2233 5277 90747 190336 0 0 13 100% 69% Fn 6% 5% 0 1247 0 167812 26585 0 0
82% 393 0 0 2494 1939 134 2762 3164 77067 256426 0 0 12 100% 100% :v 17% 3% 791 1310 0 171449 32858 0 0
87% 204 0 0 2010 1246 65 4469 2118 92698 277680 0 0 12 99% 73% Fs 20% 5% 685 1121 0 147883 27113 0 0
72% 381 0 0 2018 1744 220 2185 3507 80181 155440 0 0 12 99% 79% F0 7% 2% 22 1615 0 223444 44284 0 0
90% 365 0 0 1449 1583 121 1340 2502 102751 326020 0 0 12 100% 100% :s 3% 6% 1 1083 0 144782 24588 0 0
79% 407 0 0 1801 2449 129 3357 6130 93909 237775 0 0 12 100% 92% Fn 9% 5% 0 1394 0 186086 38411 0 0
85% 957 0 0 2225 1732 260 76 32 87881 296044 0 0 11 99% 100% :s 3% 5% 0 1268 0 174379 24952 0 0
86% 4308 0 0 5463 2710 1001 2771 5672 84818 243209 0 0 11 99% 100% Bn 9% 5% 0 1155 0 146436 25579 0 0
50% 862 0 0 1803 4362 281 2653 2849 82542 204681 0 0 11 97% 57% : 7% 2% 198 743 0 45774 38361 0 0
33% 1215 0 0 2774 5444 470 104 24 60184 0 0 0 10 96% 0% - 3% 0% 0 1559 0 88835 50827 0 0
31% 881 0 0 2277 6904 279 100 0 60388 0 0 0 9 99% 0% - 3% 0% 2 1394 0 83070 50832 0 0
61% 757 0 0 1794 5353 329 3108 23520 108192 219232 0 0 0s 97% 97% Ff 8% 4% 2 1035 0 58698 37141 0 0
39% 1007 0 0 2420 5410 329 16 528 86868 107372 0 0 0s 95% 100% :f 4% 1% 0 1413 0 82615 55316 0 0

Read:

 CPU   NFS  CIFS  HTTP  Total   Net kB/s   HDD kB/s    SSD kB/s   Tape kB/s  Cache Cache  CP   CP  HDD  SSD OTHER  FCP iSCSI   FCP kB/s  iSCSI kB/s
                                 in   out  read write  read write  read write  age   hit time   ty util util                   in   out    in   out
66% 807 0 0 2326 3995 292 1751 3013 232677 200414 0 0 0s 99% 94% : 7% 4% 1 1518 0 44625 210992 0 0
42% 950 0 0 2900 4356 404 140 0 331561 0 0 0 0s 100% 0% - 5% 2% 13 1937 0 811 332558 0 0
44% 930 0 0 3029 3761 329 176 32 357872 0 0 0 0s 99% 0% - 5% 2% 0 2099 0 970 355814 0 0
42% 821 0 0 3117 3543 259 168 0 284936 0 0 0 0s 99% 0% - 3% 1% 1 2295 0 720 391858 0 0
43% 524 0 0 2646 2399 210 120 0 363424 0 0 0 0s 100% 0% - 1% 2% 1 2121 0 1165 363169 0 0
42% 590 0 0 2704 2636 199 48 24 348304 0 0 0 0s 99% 0% - 3% 2% 0 2114 0 959 356259 0 0
34% 697 0 0 2136 4043 219 4 0 157688 0 0 0 0s 94% 0% - 1% 2% 18 1421 0 599 241312 0 0
32% 948 0 0 1906 4212 409 164 8 162585 0 0 0 2 90% 0% - 2% 2% 2 956 0 686 161018 0 0
28% 1196 0 0 2007 6040 415 156 24 126458 0 0 0 2 91% 0% - 3% 2% 1 810 0 412 133832 0 0
26% 706 0 0 1722 3285 283 556 0 124224 0 0 0 2 91% 0% - 7% 2% 186 830 0 1048 134212 0 0
31% 796 0 0 1700 4301 251 1741 8 136458 16 0 0 2 91% 5% Tn 6% 2% 2 902 0 638 149497 0 0
44% 817 0 0 1797 3817 301 7200 51292 129568 28828 0 0 2 92% 100% :f 18% 2% 13 967 0 904 154334 0 0
34% 279 0 0 1509 1613 188 72 4748 161136 64736 0 0 0s 92% 100% :f 7% 2% 1 1229 0 1500 199148 0 0
41% 3008 0 0 4138 3624 698 184 64 30012 47640 0 0 0s 98% 70% : 4% 1% 2 1128 0 163421 37240 0 0
61% 788 0 0 1831 3511 295 2010 11127 28032 65137 0 0 2 100% 42% Fn 7% 2% 19 1024 0 166074 76 0 0
62% 889 0 0 1898 4140 289 2211 6394 68489 250258 0 0 2 100% 100% :f 7% 3% 2 1007 0 164940 485 0 0
77% 648 0 0 1348 2815 275 1860 4530 72202 322932 0 0 2 100% 87% Fn 8% 6% 19 681 0 114332 16 0 0
59% 602 0 0 1593 3087 282 4842 6582 38518 215484 0 0 0s 100% 87% : 11% 2% 0 991 0 169396 25 0 0
71% 503 0 0 1254 3163 173 866 3937 59692 227103 0 0 0s 100% 57% Fn 5% 6% 4 747 0 127448 18 0 0
I checked the SAN as well, to see the tests from the other side. We have two redundant Brocade 5450 FC switches on the latest FOS, 7.3.1b. I did some performance monitoring of all ports on both switches and ran the tests against the SAS LUN and the SSD LUN.
I ran both iozone tests from the CentOS 6.7 VM on XenServer 6.5, which is connected on port P3. The NetApp FAS2552 is on ports P21 and P22.
When I ran the 1st test against SAS (the LUN on aggr1, owned by node1), you can see the traffic going through P21, which is correct:
When I ran the 2nd test against SSD (the LUN on aggr3, owned by node2), you can see the traffic going through P22. The results are a little better, but not the values you would expect from SSD:
https://drive.google.com/file/d/0B-MNK1FCwbctUWZQTzRXanVTbGs/view?usp=sharing
I don't know why, but either XenServer or the VM can only generate around 1 Gbps of traffic out of the 8 Gbps available, or the NetApp can only accept around 1 Gbps of the 8 Gbps. The servers and storage are located in one rack. I checked the BB credits and all ports have a value of 8. There are no Frames Busied, no Frames Rejected, and zero Total Errors on any port.
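For anyone repeating these switch-side checks, the FOS commands involved are roughly the following (names from memory; verify against the FOS 7.3 command reference):

portbuffershow      (per-port BB credit allocation)
porterrshow         (error summary for all ports, incl. frames busied/rejected)
portstatsshow 21    (detailed counters for a single port, here P21)
portperfshow        (live throughput for all ports)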
At this point, I would suggest capturing a perfstat and opening a case with NetApp Support.
Hello @pahl,
Can you clarify what metric you are using to gauge performance? Throughput, IOPS, latency?
You said you tested performance of "virtual file systems from Linux and Windows and the results were almost the same". Can you describe the testing you did? This will help to determine what bottleneck might be occurring.
Lastly, it's safe to assume there are no QoS policies in place?
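(A quick way to confirm from the cluster shell is qos policy-group show; if it returns no entries, no policy groups are defined.)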
Thanks!
Andrew
Hi,
Any news on this? We see a similar thing in our environment.
Thanks!
Gert
Similar issue here; case opened, awaiting more information.
Hi, NetApp sells the 2554 with SSDs, and when you complain about SSD performance they tell you the 2554 is a low-end model and that you should buy a higher model with more CPU. The 2554 is CPU-limited, so it cannot deliver optimal performance with SSD. That is advice directly from NetApp.