ONTAP Discussions

Aggregate performance issue after adding disks

mkopenski

After adding 4 disks to an aggregate, one of them is at a constant 100% utilization. I'm currently running a reallocate.

The question is: why is only that one disk at 100%, while the other disks that were added are at ~25%, in line with the rest of the disks in the RAID group?

The current status of the aggregate is redirect, which I guess is expected.

Any suggestions?
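
For context, here's a rough sketch of how to watch the redirect/reallocate state in 7-Mode; aggr1 is a placeholder name, not taken from this thread:

    aggr status aggr1 -v     # aggregate state/status flags, e.g. redirect
    reallocate status -v     # progress of any running reallocate scans

The redirect flag should clear on its own once the scans finish.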


mkopenski

Here are the stats for that aggregate from statit. 1b.89 is the disk at 100% utilization (the ut% column, the first number after the disk name); 1b.88, 2c.29, and 2b.59 were the other disks that were added:

disk             ut%  xfers  ureads--chain-usecs writes--chain-usecs cpreads-chain-usecs greads--chain-usecs gwrites-chain-usecs
/aggr1/plex0/rg0:
4c.60              4  11.76    0.33   1.00  9722   5.44  26.41   243   5.99   8.93   451   0.00   ....     .   0.00   ....     .
4c.61              4  12.08    0.33   1.00 22556   5.81  24.85   220   5.94   8.68   486   0.00   ....     .   0.00   ....     .
4c.45             26  55.48   44.41   2.92  5229   4.01  13.32  1123   7.07   9.08  1141   0.00   ....     .   0.00   ....     .
4c.49             22  50.59   39.62   2.97  4436   3.51  13.93   931   7.46   9.40   848   0.00   ....     .   0.00   ....     .
4c.50             24  54.13   43.24   2.93  4607   3.49  14.91   872   7.40   9.33  1164   0.00   ....     .   0.00   ....     .
2c.51             24  53.83   42.98   2.87  4695   3.32  13.49   812   7.53  10.01   996   0.00   ....     .   0.00   ....     .
4c.52             24  55.43   44.26   2.73  5159   3.49  13.96   859   7.68   9.03   990   0.00   ....     .   0.00   ....     .
4c.53             24  53.39   42.22   3.01  4516   3.60  14.21   911   7.57   9.28  1014   0.00   ....     .   0.00   ....     .
4c.54             23  50.81   39.66   2.93  4677   3.58  13.87   947   7.57   9.73   993   0.00   ....     .   0.00   ....     .
2c.55             23  52.12   41.37   3.04  4380   3.43  13.41   885   7.33   9.89  1066   0.00   ....     .   0.00   ....     .
2c.56             24  54.28   43.05   2.97  4684   3.97  14.16   917   7.25   9.33   944   0.00   ....     .   0.00   ....     .
4c.57             25  56.02   44.82   2.96  4536   3.75  13.78   959   7.46   9.40   974   0.00   ....     .   0.00   ....     .
2c.58             23  52.35   41.92   2.89  4758   3.30  13.37   929   7.12  10.37   906   0.00   ....     .   0.00   ....     .
2c.59             25  55.65   44.45   3.10  4551   3.73  15.05   925   7.48   8.68  1190   0.00   ....     .   0.00   ....     .
2c.48             23  50.90   40.10   2.51  5662   3.71  16.82   683   7.09   8.96  1147   0.00   ....     .   0.00   ....     .
2b.87             23  48.75   38.95   2.17  6092   4.34  28.68   338   5.45   8.01   561   0.00   ....     .   0.00   ....     .
2b.59             21  47.78   38.14   2.10  5910   4.60  29.73   368   5.05   9.32   721   0.00   ....     .   0.00   ....     .
2c.29             22  47.01   37.34   2.22  5791   4.64  29.49   384   5.03   9.47  1143   0.00   ....     .   0.00   ....     .
1b.88             22  47.84   38.23   2.09  6850   4.56  29.97   296   5.05   9.27   495   0.00   ....     .   0.00   ....     .
1b.89            100  47.17   37.53   1.93 79596   4.60  29.81  5836   5.05   9.14 23635   0.00   ....     .   0.00   ....     .

cedric_renauld

Hi Mark

So the 1b.89 disk isn't the parity drive?

Check with sysconfig -r, maybe?

mkopenski

It is a data disk, and none of the other disks are showing any issues like this:

data 1b.89 1b 5 9 FC:B - FCAL 15000 272000/557056000 280104/573653840

mkopenski

Support had me fail the disk, which leveled the I/O across the RAID group.
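
Worth noting why that worked: in the statit output above, 1b.89 is doing about the same xfers as its neighbors (47.17) but with huge per-block service times (79596 usecs on user reads versus ~5000-7000 on the other new disks), which suggests a slow or failing drive rather than a layout problem. The fail-and-rebuild step is roughly this in 7-Mode (aggr1 is a placeholder name):

    disk fail 1b.89          # pre-fail the hot disk; RAID copies/reconstructs its data to a spare
    aggr status -r aggr1     # watch the copy/reconstruction progress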

amiller_1
3,882 Views

That's definitely one approach. If it happens in the future, I'd look at the reallocate command as well (at the physical level); see the sketch below.
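
A minimal sketch of that, assuming a 7-Mode volume named vol1 on the affected aggregate (the name is a placeholder):

    reallocate measure /vol/vol1        # check how fragmented the layout is
    reallocate start -f -p /vol/vol1    # one-time full reallocate; -p works at the physical level

The -p flag moves blocks physically without changing their logical layout, so it spreads data across the new disks without blowing up snapshot space usage.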

mkopenski

I did start a reallocate after adding the disks, but it was going very slowly.

The suggestion to prevent this in the future was, simplified (see the command sketch after this list):

1. Try to free up space so the aggregate is under 90% full, and clean up snapshots. This one was at 91.3%.

2. Run reallocate before adding disks.

3. Add fewer disks at a time.

4. Run reallocate after adding disks.
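
Put together as commands, that workflow looks roughly like this in 7-Mode; aggr1, vol1, and the disk count are placeholders:

    df -A aggr1                         # 1. check aggregate usage; aim for under ~90%
    snap list -A aggr1                  #    review (and delete) aggregate snapshots if needed
    reallocate start -f -p /vol/vol1    # 2. reallocate before adding disks
    aggr add aggr1 2                    # 3. add a smaller batch of disks at a time
    reallocate start -f -p /vol/vol1    # 4. reallocate again to spread data onto the new disks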

amiller_1

Very helpful to know what worked in real life ... thanks for posting back.
