ONTAP Hardware

CPU utilization is up to 90%, how to solve/reduce the situation?

fenghui
7,178 Views

Hello Experts,

ONTAP: 9.6

Protocol: NFS

Storage: FAS8040/8060

I ran 'sysstat -x' to check the system, and CPU utilization has been around 90% for a long time. Volume access is very slow: 'qos statistics latency show' reports roughly 190 ms of latency. 'sysstat -M' shows all 16 CPU cores at up to 90%, while disk utilization stays below 70%. Please advise how to reduce CPU utilization to around 50%.
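
For reference, these counters can be gathered from the clustershell roughly like this; the node name, interval, and iteration count shown here are placeholders rather than values from the original post:

    cluster1::> node run -node node-01 -command sysstat -x 1
    cluster1::> node run -node node-01 -command sysstat -M 1
    cluster1::> qos statistics latency show -iterations 10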

 

Francis

 

 

5 REPLIES

fenghui
7,131 Views

I collected some logs; I hope they are helpful for the analysis.

CPU NFS CIFS HTTP Total Net kB/s Disk kB/s Tape kB/s Cache Cache CP CP_Ty CP_Ph Disk OTHER FCP iSCSI FCP kB/s iSCSI kB/s NVMF kB/s kB/s
in out read write read write age hit time [T--H--F--N--B--O--#--:] [n--v--p--f] util in out in out in out
97% 45794 0 0 45800 1607550 951021 688419 1102440 0 0 29s 99% 100% 0--0--0--0--1--0--0--0 1--0--0--0 68% 6 0 0 0 0 0 0 0 0 0
100% 41906 0 0 41907 1469841 905452 869731 1125529 0 0 29s 98% 100% 0--0--0--0--0--0--0--1 1--0--0--0 75% 1 0 0 0 0 0 0 0 0 0
97% 47771 0 0 47771 1108234 590052 632573 1088814 0 0 29s 99% 100% 0--0--0--0--1--0--0--0 1--0--0--0 65% 0 0 0 0 0 0 0 0 0 0
100% 39958 0 0 40017 1260668 579370 871061 1045921 0 0 29s 98% 100% 0--0--0--0--0--0--0--1 1--0--0--0 67% 59 0 0 0 0 0 0 0 0 0
97% 41784 0 0 41800 1222726 529305 729755 1253559 0 0 29s 99% 100% 0--0--0--0--0--0--1--0 0--0--0--1 69% 16 0 0 0 0 0 0 0 0 0
98% 40047 0 0 40047 1006285 771472 778180 917472 0 0 29s 98% 100% 0--0--0--0--1--0--0--0 1--0--0--0 59% 0 0 0 0 0 0 0 0 0 0
96% 36756 0 0 36756 830556 495753 836575 1430014 0 0 29s 99% 100% 0--0--0--0--0--0--0--1 0--0--0--1 86% 0 0 0 0 0 0 0 0 0 0
76% 54370 0 0 54392 983572 377186 3080 0 0 0 29s 100% 1% 0--0--0--0--0--0--0--0 0--0--0--0 7% 22 0 0 0 0 0 0 0 0 0
73% 42481 0 0 42483 861597 1026608 281932 300824 0 0 29s 100% 27% 0--0--1--0--0--0--0--0 1--0--0--0 25% 2 0 0 0 0 0 0 0 0 0
95% 28778 0 0 28792 501035 410278 1081926 1387461 0 0 29s 97% 100% 0--0--0--0--0--0--0--1 1--0--0--0 88% 14 0 0 0 0 0 0 0 0 0
76% 32588 0 0 32777 860143 533471 348740 644872 0 0 41s 98% 90% 0--0--0--0--0--0--0--0 0--0--0--0 35% 189 0 0 0 0 0 0 0 0 0
70% 37486 0 0 37486 1542578 814396 31136 36 0 0 41s 99% 0% 0--0--0--0--0--0--0--0 0--0--0--0 13% 0 0 0 0 0 0 0 0 0 0
90% 38520 0 0 38525 841470 490096 753672 870113 0 0 37s 99% 69% 0--0--1--0--0--0--0--0 1--0--0--0 56% 5 0 0 0 0 0 0 0 0 0
95% 49138 0 0 49145 988982 1150310 843920 967964 0 0 37s 99% 100% 0--0--0--0--0--0--0--1 0--0--0--1 59% 7 0 0 0 0 0 0 0 0 0
76% 54925 0 0 54940 854260 1147340 181744 238236 0 0 37s 100% 100% 0--0--0--0--0--0--0--1 0--0--0--1 21% 15 0 0 0 0 0 0 0 0 0
64% 47177 0 0 47177 580922 848503 77072 266916 0 0 45s 100% 77% 0--0--0--0--0--0--0--0 0--0--0--0 18% 0 0 0 0 0 0 0 0 0 0
66% 48238 0 0 48241 930949 1125224 51068 24 0 0 45s 100% 0% 0--0--0--0--0--0--0--0 0--0--0--0 13% 3 0 0 0 0 0 0 0 0 0
67% 35069 0 0 35070 912849 829035 27320 0 0 0 1 100% 0% 0--0--0--0--0--0--0--0 0--0--0--0 15% 1 0 0 0 0 0 0 0 0 0

 

ANY1+ ANY2+ ANY3+ ANY4+ ANY5+ ANY6+ ANY7+ ANY8+ ANY9+ ANY10+ ANY11+ ANY12+ ANY13+ ANY14+ ANY15+ ANY16+ AVG CPU0 CPU1 CPU2 CPU3 CPU4 CPU5 CPU6 CPU7 CPU8 CPU9 CPU10 CPU11 CPU12 CPU13 CPU14 CPU15 Nwk_Lg Nwk_Exmpt Protocol Storage Raid Raid_Ex Xor_Ex Target Kahuna WAFL_Ex(Kahu) WAFL_MPClean SM_Exempt Exempt SSAN_Ex Intr Host Ops/s CP
100% 100% 100% 100% 100% 100% 99% 98% 96% 93% 88% 80% 71% 60% 49% 38% 86% 87% 86% 86% 87% 86% 86% 87% 86% 84% 85% 85% 85% 86% 85% 86% 85% 0% 672% 0% 0% 0% 29% 5% 0% 0% 554%( 36%) 14% 0% 66% 0% 24% 6% 45960 25%
100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 99% 99% 98% 97% 100% 100% 100% 100% 100% 100% 100% 99% 100% 100% 99% 100% 100% 100% 100% 100% 100% 0% 526% 0% 0% 0% 220% 113% 0% 0% 430%( 28%) 186% 0% 89% 0% 19% 9% 31370 100%
100% 100% 100% 100% 100% 99% 98% 96% 93% 89% 83% 75% 66% 57% 48% 39% 84% 77% 85% 85% 85% 86% 86% 86% 83% 84% 85% 84% 85% 85% 84% 85% 84% 0% 623% 0% 2% 0% 82% 39% 0% 0% 417%( 27%) 42% 0% 97% 0% 26% 15% 38616 100%
100% 100% 100% 100% 100% 100% 99% 98% 95% 90% 83% 72% 59% 47% 36% 27% 82% 73% 83% 83% 83% 83% 83% 84% 83% 81% 81% 81% 82% 83% 82% 82% 83% 0% 741% 0% 0% 0% 80% 32% 0% 0% 338%( 22%) 0% 0% 82% 0% 28% 4% 43081 100%
100% 100% 100% 100% 100% 99% 98% 96% 93% 88% 81% 72% 62% 51% 40% 30% 82% 73% 83% 83% 84% 83% 84% 83% 83% 82% 82% 82% 83% 83% 82% 83% 82% 0% 686% 0% 0% 0% 81% 34% 0% 0% 388%( 25%) 0% 0% 87% 0% 27% 4% 40917 79%
100% 100% 100% 100% 99% 97% 94% 89% 82% 72% 61% 49% 37% 27% 18% 11% 71% 59% 73% 73% 74% 74% 73% 73% 73% 72% 71% 69% 70% 72% 72% 72% 72% 0% 667% 0% 0% 0% 24% 0% 0% 0% 346%( 23%) 0% 0% 61% 0% 27% 13% 37519 0%
100% 100% 100% 100% 99% 97% 92% 85% 74% 61% 46% 31% 20% 14% 10% 7% 65% 51% 48% 70% 70% 69% 70% 70% 69% 65% 66% 64% 65% 66% 68% 68% 66% 0% 688% 0% 0% 0% 19% 0% 0% 0% 256%( 17%) 0% 0% 46% 0% 27% 5% 39857 0%
100% 100% 100% 100% 100% 100% 100% 100% 99% 98% 97% 95% 92% 88% 84% 79% 96% 94% 95% 96% 96% 96% 96% 96% 96% 96% 96% 96% 96% 96% 96% 96% 96% 0% 556% 0% 0% 0% 173% 82% 0% 0% 421%( 28%) 181% 0% 90% 0% 22% 4% 33755 100%
100% 100% 100% 100% 100% 99% 99% 97% 95% 90% 84% 75% 65% 54% 44% 33% 84% 75% 75% 85% 86% 86% 86% 86% 86% 84% 84% 85% 84% 85% 85% 84% 85% 0% 691% 0% 0% 1% 58% 26% 0% 0% 381%( 25%) 65% 0% 75% 0% 26% 12% 38179 100%
100% 100% 100% 100% 100% 99% 97% 94% 89% 82% 73% 63% 51% 41% 33% 25% 78% 69% 69% 80% 80% 81% 80% 81% 80% 80% 78% 79% 80% 80% 80% 78% 79% 0% 677% 0% 0% 0% 81% 28% 0% 0% 344%( 22%) 3% 0% 81% 0% 26% 11% 46719 100%
100% 100% 100% 100% 100% 99% 99% 98% 96% 93% 90% 85% 79% 73% 67% 58% 90% 85% 85% 91% 91% 91% 91% 91% 91% 90% 90% 90% 90% 90% 91% 91% 91% 0% 606% 0% 0% 0% 118% 58% 0% 0% 538%( 35%) 0% 0% 90% 0% 22% 4% 46064 100%
100% 100% 100% 100% 100% 100% 99% 99% 98% 96% 92% 88% 83% 76% 69% 62% 91% 87% 86% 92% 93% 92% 93% 93% 92% 91% 92% 92% 92% 92% 92% 92% 92% 0% 666% 0% 2% 0% 115% 54% 0% 0% 470%( 31%) 40% 0% 77% 0% 24% 12% 37165 100%
100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 100% 99% 98% 97% 96% 95% 99% 99% 99% 99% 99% 99% 99% 99% 99% 99% 99% 99% 99% 99% 99% 99% 99% 0% 589% 0% 0% 0% 246% 126% 0% 0% 312%( 20%) 169% 0% 105% 0% 23% 12% 27353 100%
100% 100% 100% 100% 100% 100% 99% 98% 96% 93% 88% 81% 73% 63% 53% 44% 87% 81% 81% 89% 88% 89% 88% 89% 88% 87% 88% 87% 87% 88% 88% 88% 88% 0% 693% 0% 0% 0% 100% 48% 0% 0% 385%( 25%) 48% 0% 83% 0% 28% 3% 41898 100%
100% 100% 100% 100% 100% 99% 98% 95% 90% 84% 75% 64% 53% 42% 31% 22% 79% 68% 68% 80% 80% 81% 82% 82% 80% 79% 80% 80% 78% 80% 79% 79% 80% 1% 583% 0% 0% 0% 29% 16% 0% 0% 521%( 34%) 0% 0% 72% 0% 21% 11% 49313 70%

fenghui
6,978 Views

In the 'qos statistics latency show' output, the latency in the 'Data' column is very high.
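
To see which volume is driving that Data-center latency, the per-volume and per-workload breakdowns can be pulled roughly like this (the iteration and row counts are placeholders):

    cluster1::> qos statistics volume latency show -iterations 10 -rows 5
    cluster1::> qos statistics workload latency show -iterations 10 -rows 5

The busiest objects appear at the top, with latency split across the Network, Cluster, Data, and Disk service centers.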

paul_stejskal
6,951 Views

OK, first start with the commands in that KB article and read through it to figure out what is causing the load. If a particular volume is listed, you will need to reduce, modify, or move that workload, or throttle it with QoS.
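
A minimal sketch of the throttle and move options, assuming a hypothetical busy volume "busy_vol" on SVM "svm1" (all names, the IOPS limit, and the destination aggregate are placeholders):

    cluster1::> qos policy-group create -policy-group pg-throttle -vserver svm1 -max-throughput 5000iops
    cluster1::> volume modify -vserver svm1 -volume busy_vol -qos-policy-group pg-throttle
    cluster1::> volume move start -vserver svm1 -volume busy_vol -destination-aggregate aggr2_node02

A QoS cap limits the offending workload immediately, while a volume move shifts its load to another aggregate (or to the partner node) non-disruptively.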

SpindleNinja
7,097 Views

Looks like you're spindle bound. 
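
To confirm that, per-disk utilization can be sampled from the node shell roughly like this (the node name is a placeholder; statit collects between the begin and end calls):

    cluster1::> node run -node node-01 -command statit -b
    (let the workload run for 30-60 seconds)
    cluster1::> node run -node node-01 -command statit -e

The statit report lists ut% per disk, which shows whether the data disks in the busy aggregate are saturated.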
