ONTAP Discussions
It is possible, but it could cause a slowdown in performance depending on how many disks are in the aggregate. You need at least 1 spare (2 is better), so adding only 1 or 2 drives isn't typical best practice; we prefer to add a complete raid group at a time when growing an aggregate. Adding fewer drives can cause those new drives to run hot (higher utilization, since they are written to first). There is a "reallocate" command that can re-lay-out volumes, but it takes some time and has some limitations (if running dedup you need to be on 8.1 to run reallocate with dedup, and no reallocate at all if running compression). It really depends on how many drives you have in the aggr now, the layout, and how many you plan to add. Also, it looks like you have 500GB drives in the aggregate now and are going to add 1TB spares. ONTAP supports mixed sizes in an aggr or raid group, but it isn't something I like to do: the bigger drive will swap with one of the smaller parity drives, so you gain nothing from the first larger drive added; additional larger drives can then be added as data, so there are diminishing returns on usable space on top of the possible performance hit.
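For reference, a rough sketch of what that reallocation could look like on 7-mode (the volume name /vol/vol1 is just a placeholder; check the reallocate man page on your release before running it):

filer001> reallocate measure /vol/vol1     # measure how optimized the volume layout currently is
filer001> reallocate start -f /vol/vol1    # one-time full reallocation pass over the volume
filer001> reallocate status -v             # check progress of the running job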
If you share the output of "sysconfig -r" and "sysconfig -V" (the second can be worked out from the first, but it's easier to see the raid group layout this way), the community will give several opinions on layout. Some may differ, but it's good to see the different opinions and best practices used by others.
Hi Scott,
Appreciated. Here is the config of the filer:
filer001> sysconfig -r
Aggregate aggr0 (online, raid_dp) (block checksums)
Plex /aggr0/plex0 (online, normal, active)
RAID group /aggr0/plex0/rg0 (normal)
RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
dparity 0b.16 0b 1 0 FC:B - ATA 7200 423111/866531584 423889/868126304
parity 0a.32 0a 2 0 FC:A - ATA 7200 423111/866531584 423889/868126304
data 0a.19 0a 1 3 FC:A - ATA 7200 423111/866531584 423889/868126304
data 0a.33 0a 2 1 FC:A - ATA 7200 423111/866531584 423889/868126304
data 0a.18 0a 1 2 FC:A - ATA 7200 423111/866531584 423889/868126304
data 0b.34 0b 2 2 FC:B - ATA 7200 423111/866531584 423889/868126304
data 0a.42 0a 2 10 FC:A - ATA 7200 423111/866531584 423889/868126304
data 0b.27 0b 1 11 FC:B - ATA 7200 423111/866531584 423889/868126304
data 0a.43 0a 2 11 FC:A - ATA 7200 423111/866531584 423889/868126304
data 0b.28 0b 1 12 FC:B - ATA 7200 423111/866531584 423889/868126304
data 0a.44 0a 2 12 FC:A - ATA 7200 423111/866531584 423889/868126304
Aggregate aggr1 (online, raid_dp) (block checksums)
Plex /aggr1/plex0 (online, normal, active)
RAID group /aggr1/plex0/rg0 (normal)
RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
dparity 0a.35 0a 2 3 FC:A - ATA 7200 423111/866531584 423889/868126304
parity 0a.17 0a 1 1 FC:A - ATA 7200 423111/866531584 423889/868126304
data 0b.36 0b 2 4 FC:B - ATA 7200 423111/866531584 423889/868126304
data 0a.20 0a 1 4 FC:A - ATA 7200 423111/866531584 423889/868126304
data 0b.37 0b 2 5 FC:B - ATA 7200 423111/866531584 423889/868126304
data 0a.21 0a 1 5 FC:A - ATA 7200 423111/866531584 423889/868126304
data 0a.22 0a 1 6 FC:A - ATA 7200 423111/866531584 423889/868126304
data 0a.38 0a 2 6 FC:A - ATA 7200 423111/866531584 423889/868126304
data 0a.23 0a 1 7 FC:A - ATA 7200 423111/866531584 423889/868126304
data 0b.39 0b 2 7 FC:B - ATA 7200 423111/866531584 423889/868126304
data 0b.24 0b 1 8 FC:B - ATA 7200 423111/866531584 423889/868126304
data 0b.40 0b 2 8 FC:B - ATA 7200 423111/866531584 423889/868126304
data 0b.25 0b 1 9 FC:B - ATA 7200 423111/866531584 423889/868126304
data 0b.41 0b 2 9 FC:B - ATA 7200 423111/866531584 423889/868126304
data 0b.26 0b 1 10 FC:B - ATA 7200 423111/866531584 423889/868126304
Aggregate aggr2 (online, raid_dp) (block checksums)
Plex /aggr2/plex0 (online, normal, active)
RAID group /aggr2/plex0/rg0 (normal)
RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
dparity 0c.50 0c 3 2 FC:A - ATA 7200 847555/1735794176 847827/1736350304
parity 0c.58 0c 3 10 FC:A - ATA 7200 847555/1735794176 847827/1736350304
data 0c.57 0c 3 9 FC:A - ATA 7200 847555/1735794176 847827/1736350304
data 0c.56 0c 3 8 FC:A - ATA 7200 847555/1735794176 847827/1736350304
data 0c.55 0c 3 7 FC:A - ATA 7200 847555/1735794176 847827/1736350304
data 0c.53 0c 3 5 FC:A - ATA 7200 847555/1735794176 847827/1736350304
data 0c.52 0c 3 4 FC:A - ATA 7200 847555/1735794176 847827/1736350304
data 0c.51 0c 3 3 FC:A - ATA 7200 847555/1735794176 847827/1736350304
data 0c.59 0c 3 11 FC:A - ATA 7200 847555/1735794176 847827/1736350304
data 0c.49 0c 3 1 FC:A - ATA 7200 847555/1735794176 847827/1736350304
data 0c.48 0c 3 0 FC:A - ATA 7200 847555/1735794176 847827/1736350304
data 0c.54 0c 3 6 FC:A - ATA 7200 847555/1735794176 847827/1736350304
data 0c.60 0c 3 12 FC:A - ATA 7200 847555/1735794176 847827/1736350304
Spare disks
RAID Disk Device HA SHELF BAY CHAN Pool Type RPM Used (MB/blks) Phys (MB/blks)
--------- ------ ------------- ---- ---- ---- ----- -------------- --------------
Spare disks for block or zoned checksum traditional volumes or aggregates
spare 0b.29 0b 1 13 FC:B - ATA 7200 423111/866531584 423889/868126304
spare 0b.45 0b 2 13 FC:B - ATA 7200 423111/866531584 423889/868126304
spare 0c.61 0c 3 13 FC:A - ATA 7200 847555/1735794176 847827/1736350304
filer001>
======
filer001> sysconfig -V
volume aggr0 (1 RAID group):
group 0: 11 disks
volume aggr1 (1 RAID group):
group 0: 15 disks
volume aggr2 (1 RAID group):
group 0: 13 disks
filer001>
Forgot to also ask: what ONTAP version and controller model? Also, are these all 32-bit aggrs? I will go over the layout in a bit.
These are 32-bit aggregates, on a FAS3020 running Data ONTAP 7.3.7.
With 32-bit aggregates this is a reasonable setup, although it might have made sense to combine aggr0 and aggr1, since they use the same 500GB drives and one larger aggregate would give more spindle I/O. Once created, though, aggregates can't be combined without destroying them, so that's likely not an option. Keeping the 1TB drives in a separate aggr2 makes sense, so the new aggr has all the same size drives.
For spares, I prefer 2 of each drive type; that way Maintenance Center is used (where a failed drive is tested and put back in the spares pool if it passes diagnostics), but on a smaller system going with 1 does make sense. For 1TB you only have one spare, so you should not use that one. For 500GB you have 2 spares and I'd leave those alone too, although you could use one of them for aggr0 or aggr1; you would then have a single-disk bottleneck once you add it, which may affect performance depending on current I/O. A perfstat or statit run over time ("priv set advanced ; statit -b", then wait a while, then "priv set advanced ; statit -enr") will show current disk utilization, and from that you can judge what may happen with a single-drive add. The old best practice was at least 3 drives at a time, and now we follow adding a full raid group at a time. If you need to grow an aggr, it would be best to add a full raid group and not a single drive. I would keep the current layout as is, but it depends on whether you can get more disks and how desperate the situation is for space.
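To spell that out, the collection sequence would be something like this (the flags match what was used further down in this thread; how long you wait is up to you, and a busy period gives a more representative sample):

filer001> priv set advanced
filer001*> statit -b          # begin collecting statistics
   ... wait through a representative period of load ...
filer001*> statit -enr        # end the sample and print the report
filer001*> priv set admin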
If I really need to add a spare disk, let's say 500GB, how much space will be added to the current aggr1? Sorry, I'm no NetApp expert.
By the way, here is the statit result:
filer001*> statit -enr
Hostname: filer001 ID: 0101202867 Memory: 2048 MB
NetApp Release 7.3.7: Thu May 3 03:56:11 PDT 2012
Start time: Fri Sep 14 06:09:53 PHT 2012
CPU Statistics
369.083347 time (seconds) 100 %
285.440750 system time 77 %
8.451408 rupt time 2 % (4835644 rupts x 2 usec/rupt)
276.989342 non-rupt system time 75 %
452.725942 idle time 123 %
309.793971 time in CP 84 % 100 %
6.925683 rupt time in CP 2 % (3846202 rupts x 2 usec/rupt)
Multiprocessor Statistics
cpu0 cpu1 total
sk switches 16045076 4415501 20460577
hard switches 9774735 2591846 12366581
domain switches 30750 18492 49242
CP rupts 3533860 312342 3846202
nonCP rupts 929695 59747 989442
IPI rupts 1642 2930 4572
grab kahuna 14 7 21
grab w_xcleaner 58393 29891 88284
grab kahuna usec 2529 3110 5639
grab w_xcleaner usec 16247514 13569582 29817096
CP rupt usec 5930721 994962 6925683
nonCP rupt usec 1366298 159427 1525725
idle 191120184 261605757 452725942
kahuna 67861337 43114777 110976115
storage 18308425 9829725 28138150
exempt 12950985 16071951 29022937
raid 30240509 20510934 50751443
target 5426 4894 10321
netcache 0 0 0
netcache2 0 0 0
cifs 55722 51037 106760
wafl_exempt 0 0 0
wafl_xcleaner 0 0 0
sm_exempt 12253 13624 25878
cluster 0 0 0
protocol 0 0 0
nwk_exclusive 0 0 0
nwk_exempt 0 0 0
nwk_legacy 41231482 16726254 57957736
nwk_ctx1 0 0 0
nwk_ctx2 0 0 0
nwk_ctx3 0 0 0
nwk_ctx4 0 0 0
204.958101 seconds with one or more CPUs active ( 56%)
129.425114 seconds with one CPU active ( 35%)
75.532987 seconds with both CPUs active ( 20%)
Domain Utilization of Shared Domains
0 idle 0 kahuna
0 storage 0 exempt
0 raid 0 target
0 netcache 0 netcache2
0 cifs 0 wafl_exempt
0 wafl_xcleaner 0 sm_exempt
0 cluster 0 protocol
0 nwk_exclusive 0 nwk_exempt
0 nwk_legacy 0 nwk_ctx1
0 nwk_ctx2 0 nwk_ctx3
0 nwk_ctx4
CSMP Domain Switches
From\To idle kahuna storage exempt raid target netcache netcache2 cifs wafl_exempt wafl_xcleaner sm_exempt cluster protocol nwk_exclusive nwk_exempt nwk_legacy nwk_ctx1 nwk_ctx2 nwk_ctx3 nwk_ctx4
idle 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
kahuna 0 0 598 0 1047 136 0 0 3143 0 0 0 0 0 0 0 15077 0 0 0 0
storage 0 598 0 0 4620 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
exempt 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
raid 0 1047 4620 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
target 0 136 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
netcache 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
netcache2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
cifs 0 3143 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
wafl_exempt 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
wafl_xcleaner 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
sm_exempt 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
cluster 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
protocol 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
nwk_exclusive 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
nwk_exempt 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
nwk_legacy 0 15077 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
nwk_ctx1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
nwk_ctx2 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
nwk_ctx3 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
nwk_ctx4 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
Miscellaneous Statistics
12366581 hard context switches 525658 NFS operations
410 CIFS operations 0 HTTP operations
0 NetCache URLs 0 streaming packets
7435651 network KB received 7322367 network KB transmitted
11942136 disk KB read 10947460 disk KB written
6919387 NVRAM KB written 0 nolog KB written
1691881 WAFL bufs given to clients 0 checksum cache hits
1691542 no checksum - partial buffer 0 FCP operations
0 iSCSI operations
WAFL Statistics
4704 name cache hits 523 name cache misses
42386334 buf hash hits 12066605 buf hash misses
697938 inode cache hits 4 inode cache misses
8263392 buf cache hits 349463 buf cache misses
83295 blocks read 1865643 blocks read-ahead
213863 chains read-ahead 33087 dummy reads
1606364 blocks speculative read-ahead 2088468 blocks written
11654 stripes written 0 blocks over-written
0 wafl_timer generated CP 0 snapshot generated CP
0 wafl_avail_bufs generated CP 76 dirty_blk_cnt generated CP
0 full NV-log generated CP 2 back-to-back CP
0 flush generated CP 0 sync generated CP
0 wafl_avail_vbufs generated CP 0 deferred back-to-back CP
0 container-indirect-pin CP 0 low mbufs generated CP
15 low datavecs generated CP 1103373 non-restart messages
24297 IOWAIT suspends 10988 next nvlog nearly full msecs
18253 dirty buffer susp msecs 0 nvlog full susp msecs
391458 buffers
RAID Statistics
546668 xors 0 long dispatches [0]
0 long consumed [0] 0 long consumed hipri [0]
0 long low priority [0] 0 long high priority [0]
0 long monitor tics [0] 0 long monitor clears [0]
0 long dispatches [1] 0 long consumed [1]
0 long consumed hipri [1] 0 long low priority [1]
0 long high priority [1] 0 long monitor tics [1]
0 long monitor clears [1] 18 max batch
7872 blocked mode xor 126415 timed mode xor
1406 fast adjustments 826 slow adjustments
0 avg batch start 0 avg stripe/msec
12174 tetrises written 0 master tetrises
0 slave tetrises 326597 stripes written
219536 partial stripes 107061 full stripes
2080066 blocks written 898151 blocks read
1077 1 blocks per stripe size 9 479 2 blocks per stripe size 9
480 3 blocks per stripe size 9 666 4 blocks per stripe size 9
866 5 blocks per stripe size 9 1482 6 blocks per stripe size 9
3050 7 blocks per stripe size 9 11035 8 blocks per stripe size 9
99177 9 blocks per stripe size 9 24090 1 blocks per stripe size 11
22872 2 blocks per stripe size 11 23535 3 blocks per stripe size 11
23579 4 blocks per stripe size 11 22079 5 blocks per stripe size 11
20151 6 blocks per stripe size 11 18496 7 blocks per stripe size 11
15880 8 blocks per stripe size 11 13439 9 blocks per stripe size 11
11257 10 blocks per stripe size 11 7882 11 blocks per stripe size 11
1898 1 blocks per stripe size 13 805 2 blocks per stripe size 13
647 3 blocks per stripe size 13 469 4 blocks per stripe size 13
307 5 blocks per stripe size 13 265 6 blocks per stripe size 13
224 7 blocks per stripe size 13 184 8 blocks per stripe size 13
115 9 blocks per stripe size 13 68 10 blocks per stripe size 13
32 11 blocks per stripe size 13 9 12 blocks per stripe size 13
2 13 blocks per stripe size 13
Network Interface Statistics
iface side bytes packets multicasts errors collisions pkt drops
e0a recv 6966 84 0 0 0
xmit 2604 62 62 0 0
e0b recv 7664611 10846 0 0 0
xmit 1985894 5769 62 0 0
e0c recv 7606431912 7715430 0 0 0
xmit 7496113306 7481754 63 0 0
e0d recv 3968 62 0 0 0
xmit 2604 62 62 0 0
vh recv 0 0 0 0 0
xmit 0 0 0 0 0
Single recv 7670159 10925 5272 0 0
xmit 1988072 5828 124 0 0
vif1 recv 7593873116 7709940 127 0 0
xmit 7506890328 7483923 125 0 0
Disk Statistics
ut% is the percent of time the disk was busy.
xfers is the number of data-transfer commands issued.
xfers = ureads + writes + cpreads + greads + gwrites
chain is the average number of 4K blocks per command.
usecs is the average disk round-trip time per 4K block.
disk ut% xfers ureads--chain-usecs writes--chain-usecs cpreads-chain-usecs greads--chain-usecs gwrites-chain-usecs
/aggr0/plex0/rg0:
0b.16 14 8685 201 1.18 23996 7953 14.90 1063 531 4.43 1451 0 .... . 0 .... .
0a.32 15 8865 191 1.19 53040 8150 14.59 1125 524 4.25 1533 0 .... . 0 .... .
0a.19 50 23117 13787 1.01 35692 8190 13.86 2515 1140 4.10 6521 0 .... . 0 .... .
0a.33 49 22447 13408 1.02 35952 7985 14.30 2496 1054 4.25 6622 0 .... . 0 .... .
0a.18 49 22746 13608 1.02 35173 8022 14.22 2523 1116 4.14 7054 0 .... . 0 .... .
0b.34 48 22387 13333 1.02 34592 8007 14.27 2447 1047 3.93 6935 0 .... . 0 .... .
0a.42 48 22375 13301 1.03 34217 7968 14.26 2478 1106 4.28 6021 0 .... . 0 .... .
0b.27 49 22686 13619 1.02 35421 7967 14.27 2518 1100 4.35 6251 0 .... . 0 .... .
0a.43 49 22394 13207 1.02 35615 8007 14.12 2566 1180 4.72 5970 0 .... . 0 .... .
0b.28 49 22583 13445 1.02 34425 8000 14.23 2425 1138 4.04 7097 0 .... . 0 .... .
0a.44 48 22269 13081 1.02 34675 7935 14.22 2486 1253 4.77 5796 0 .... . 0 .... .
/aggr1/plex0/rg0:
0a.35 4 2067 194 1.00 28278 1120 4.49 3283 753 6.26 1139 0 .... . 0 .... .
0a.17 6 2205 186 1.00 66086 1305 4.13 3402 714 6.25 1407 0 .... . 0 .... .
0b.36 3 1356 454 1.02 14931 561 2.49 5256 341 4.21 1310 0 .... . 0 .... .
0a.20 2 902 88 1.10 23021 417 3.29 6563 397 4.30 1214 0 .... . 0 .... .
0b.37 2 853 98 1.00 24582 369 3.26 6017 386 3.81 1496 0 .... . 0 .... .
0a.21 2 840 97 1.04 22950 367 3.28 6217 376 4.06 1353 0 .... . 0 .... .
0a.22 2 851 87 1.02 14831 376 3.09 6469 388 3.92 1338 0 .... . 0 .... .
0a.38 2 825 98 1.00 19224 382 3.37 5568 345 4.47 1086 0 .... . 0 .... .
0a.23 2 881 110 1.06 20017 382 2.88 6701 389 4.23 1268 0 .... . 0 .... .
0b.39 2 841 92 1.04 20104 380 2.83 7223 369 4.12 1478 0 .... . 0 .... .
0b.24 3 831 90 1.00 22011 369 3.20 6405 372 3.96 1322 0 .... . 0 .... .
0b.40 2 846 80 1.14 21110 381 3.20 6303 385 4.49 1231 0 .... . 0 .... .
0b.25 2 787 85 1.00 21788 335 3.29 6798 367 3.86 1571 0 .... . 0 .... .
0b.41 2 863 91 1.04 25411 381 3.28 6665 391 4.13 1353 0 .... . 0 .... .
0b.26 2 850 94 1.04 25051 380 3.01 7076 376 4.46 1073 0 .... . 0 .... .
/aggr2/plex0/rg0:
0c.50 31 26838 188 1.00 32287 16252 12.51 1221 10398 7.53 869 0 .... . 0 .... .
0c.58 33 27042 186 1.00 75973 16469 12.37 1264 10387 7.52 1010 0 .... . 0 .... .
0c.57 84 48507 26439 6.39 6247 11314 8.23 4289 10754 6.50 3552 0 .... . 0 .... .
0c.56 83 48576 26433 6.43 6231 11342 8.34 4297 10801 6.46 3559 0 .... . 0 .... .
0c.55 84 49043 26843 6.37 6276 11402 8.43 4288 10798 6.47 3462 0 .... . 0 .... .
0c.53 83 48260 26330 6.43 6173 11211 8.49 4209 10719 6.52 3585 0 .... . 0 .... .
0c.52 83 48871 26607 6.38 6192 11558 8.34 4237 10706 6.44 3661 0 .... . 0 .... .
0c.51 83 48663 26735 6.39 6166 11150 8.43 4244 10778 6.53 3545 0 .... . 0 .... .
0c.59 84 48707 26626 6.42 6191 11290 8.30 4372 10791 6.45 3643 0 .... . 0 .... .
0c.49 83 48373 26429 6.40 6145 11214 8.49 4188 10730 6.48 3585 0 .... . 0 .... .
0c.48 83 48232 26037 6.41 6188 11477 8.41 4273 10718 6.38 3663 0 .... . 0 .... .
0c.54 82 48038 25990 6.28 6357 11208 8.34 4218 10840 6.49 3484 0 .... . 0 .... .
0c.60 84 48801 26611 6.34 6303 11297 8.39 4379 10893 6.48 3668 0 .... . 0 .... .
Aggregate statistics:
Minimum 2 787 80 335 341 0 0
Mean 38 21135 10630 6483 4021 0 0
Maximum 84 49043 26843 16469 10893 0 0
Spares and other disks:
0c.61 0 0 0 .... . 0 .... . 0 .... . 0 .... . 0 .... .
Spares and other disks:
0b.29 0 0 0 .... . 0 .... . 0 .... . 0 .... . 0 .... .
Spares and other disks:
0b.45 0 0 0 .... . 0 .... . 0 .... . 0 .... . 0 .... .
FCP Statistics
0 FCP Bytes recv 0 FCP Bytes sent
0 FCP ops
iSCSI Statistics
0 iSCSI Bytes recv 0 iSCSI Bytes xmit
0 iSCSI ops
Interrupt Statistics
738305 Clock (IRQ 0) 50 Uart (IRQ 4)
84945 PCA Intr (IRQ 11) 3224863 Gigabit Ethernet (IRQ 48)
126 Gigabit Ethernet (IRQ 49) 558064 FCAL (IRQ 52)
2394 Gigabit Ethernet (IRQ 97) 9790 Gigabit Ethernet (IRQ 98)
135882 FCAL (IRQ 101) 76653 FCAL (IRQ 102)
0 RTC 4572 IPI
4835644 total
NVRAM Statistics
8771809 total dma transfer KB 6856088 wafl write req data KB
222006 dma transactions 1129392 dma destriptors
5243328 waitdone preempts 956614 waitdone delays
0 transactions not queued 222006 transactions queued
222006 transactions done 39766 total waittime (MS)
269491 completion wakeups 257624 nvdma completion wakeups
140203 nvdma completion waitdone 6920256 total nvlog KB
0 nvlog shadow header array full 0 channel1 dma transfer KB
0 channel1 dma transactions 0 channel1 dma descriptors
NFS Detail Statistics
Server rpc:
TCP:
calls badcalls nullrecv badlen xdrcall
525693 0 0 0 0
UDP:
calls badcalls nullrecv badlen xdrcall
0 0 0 0 0
IPv4:
calls badcalls nullrecv badlen xdrcall
525693 0 0 0 0
IPv6:
calls badcalls nullrecv badlen xdrcall
0 0 0 0 0
Server nfs:
calls badcalls
525659 0
Server nfs V2: (0 calls)
null getattr setattr root lookup readlink read
0 0% 0 0% 0 0% 0 0% 0 0% 0 0% 0 0%
wrcache write create remove rename link symlink
0 0% 0 0% 0 0% 0 0% 0 0% 0 0% 0 0%
mkdir rmdir readdir statfs
0 0% 0 0% 0 0% 0 0%
Read request stats (version 2)
0-511 512-1023 1K-2047 2K-4095 4K-8191 8K-16383 16K-32767 32K-65535 64K-131071 > 131071
0 0 0 0 0 0 0 0 0 0
Write request stats (version 2)
0-511 512-1023 1K-2047 2K-4095 4K-8191 8K-16383 16K-32767 32K-65535 64K-131071 > 131071
49 76 29639 71491 130643 0 0 0 0 0
Server nfs V3: (525659 calls)
null getattr setattr lookup access readlink read
0 0% 50168 10% 145 0% 2509 0% 32848 6% 0 0% 218476 42%
write create mkdir symlink mknod remove rmdir
221222 42% 4 0% 0 0% 0 0% 0 0% 0 0% 0 0%
rename link readdir readdir+ fsstat fsinfo pathconf
0 0% 0 0% 0 0% 0 0% 287 0% 0 0% 0 0%
commit
0 0%
Read request stats (version 3)
0-511 512-1023 1K-2047 2K-4095 4K-8191 8K-16383 16K-32767 32K-65535 64K-131071 > 131071
11635 1638 18372 11719 51868171 3294236 4712579 2537897507 2286 0
Write request stats (version 3)
0-511 512-1023 1K-2047 2K-4095 4K-8191 8K-16383 16K-32767 32K-65535 64K-131071 > 131071
331807 5005226 36121655 17556380 7282066 9080609 52320807 1264075760 5054 0
Misaligned Read request stats
BIN-0 BIN-1 BIN-2 BIN-3 BIN-4 BIN-5 BIN-6 BIN-7
2597516934 0 0 0 0 0 0 0
Misaligned Write request stats
BIN-0 BIN-1 BIN-2 BIN-3 BIN-4 BIN-5 BIN-6 BIN-7
1289760205 204268 228737 206016 206647 203642 209687 208103
NFS V2 non-blocking request statistics:
null getattr setattr root lookup readlink read
0 0% 0 0% 0 0% 0 0% 0 0% 0 0% 0 0%
wrcache write create remove rename link symlink
0 0% 0 0% 0 0% 0 0% 0 0% 0 0% 0 0%
mkdir rmdir readdir statfs
0 0% 0 0% 0 0% 0 0%
NFS V3 non-blocking request statistics:
null getattr setattr lookup access readlink read
0 0% 0 0% 0 0% 0 0% 0 0% 0 0% 0 0%
write create mkdir symlink mknod remove rmdir
0 0% 0 0% 0 0% 0 0% 0 0% 0 0% 0 0%
rename link readdir readdir+ fsstat fsinfo pathconf
0 0% 0 0% 0 0% 0 0% 0 0% 0 0% 0 0%
NFS reply cache statistics:
TCP:
InProg hits Misses Cache hits False hits
0 221371 21 2
UDP:
In progress Misses Cache hits False hits
0 0 0 0
filer001*>
About 360GB if you have 5% aggr reserve.
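Rough math behind that, taking the right-sized capacity from the sysconfig -r output above (423111 MB per 500GB data disk) and assuming the default 10% WAFL reserve on top of the 5% aggregate snapshot reserve:

423111 MB x 0.90 (WAFL reserve)       = ~380,800 MB
380,800 MB x 0.95 (aggr snap reserve) = ~361,760 MB, i.e. roughly 360GB of usable space per added data disk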
Your disk utilization is really low. At least during this sample.
How can I see that the disk utilization is really low from the statit output?
The ut% column in the Disk Statistics section of the output.
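For example, from the sample above:

aggr0 data disks: ~48-50% busy
aggr1 data disks: ~2-6% busy
aggr2 data disks: ~82-84% busy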
thanks.
If I need to add this disk, "spare 0b.45 0b 2 13 FC:B - ATA 7200 423111/866531584 423889/868126304",
the command should be this one, right?
aggr add aggr1 -d 0b.45
Correct. It's not ideal to add a single disk, but that will add it.
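If you do go ahead, something along these lines would confirm the result (the df flags here are just one convenient way to check, not the only one):

filer001> aggr add aggr1 -d 0b.45     # add the named spare to aggr1's existing raid group
filer001> aggr status -r aggr1        # confirm the disk now shows up as data in rg0
filer001> df -Ag aggr1                # check the new aggregate capacity in GB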
Norman,
I would say that when you add a single disk to an existing aggr, do it over a weekend or when you have a large change window, so that you have enough time to do the reallocation.
If I were in your place, in this case I would usually migrate the volumes that are eating up space in aggr1, one by one, to a new aggr or to another aggr with low usage (a rough sketch of that is below).
Think about it this way: if you are adding a 300GB or 450GB disk to aggr1, think about how many days it will take to fill up that 300GB of space (maybe sooner, and then you need to add a few more disks again). So as a best practice, if you are adding disks to an existing aggr, add them in a batch; otherwise create a new aggr with 'n' disks so that you won't hit any perf issues in the future.
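If you go the migration route, a minimal vol copy sketch for 7-mode might look like the following (volume names and size are placeholders; the destination has to be created at least as large as the source and restricted before the copy, and clients repointed or volumes renamed afterwards):

filer001> vol create vol1_new aggr2 200g    # placeholder size; match or exceed the source volume
filer001> vol restrict vol1_new             # vol copy requires a restricted destination
filer001> vol copy start vol1 vol1_new      # block-level copy of the source volume to the new aggr
filer001> vol online vol1_new               # bring the copy online once the copy completes

SnapMirror or ndmpcopy would also work, depending on licensing and how much downtime is acceptable.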
Thanks Vijay. We didn't push through with this activity; instead we migrated the data to another volume.
Hi Norman,
I am fairly certain at this point that you are running out of disk I/O capacity on aggr1 and aggr2 (you need more spindles).
This would be showing up as increased latency, which I believe you are experiencing.
With that said, I agree with Scott and Vijay: you can add the disk, just reallocate at the volume level afterwards.
Thanks all. We didn't push through with this activity; instead we migrated the data to another volume.