ONTAP Hardware

RAID4 to RAID_DP question

SUPORTE_MAMIRAUA

Hi everyone, my name is Gustavo.

I'm new to NetApp administration, so I'm still learning some things here... thanks in advance to everyone!

Well, I have a FAS2220. One of the controllers was in RAID 4 mode, so I had to convert it to RAID_DP. That was done, but now I have a degraded RAID_DP aggregate on both controllers.

I guess it is because of the parity disks. I need some help configuring the parity disks, but I can't find out how.

Can anyone give me some help?

Here is some information:

[matfesto0102:monitor.raiddp.vol.singleDegraded:warning]: dparity disk in RAID group /aggr0/plex0/rg0 is broken.

[matfesto0103:monitor.raiddp.vol.singleDegraded:warning]: dparity disk in RAID group /aggr0/plex0/rg0 is broken.

matfesto0102> aggr status -r

Aggregate aggr0 (online, raid_dp, degraded) (block checksums)

  Plex /aggr0/plex0 (online, normal, active, pool0)

    RAID group /aggr0/plex0/rg0 (degraded, block checksums)

      RAID Disk Device          HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)    Phys (MB/blks)

      --------- ------          ------------- ---- ---- ---- ----- --------------    --------------

      dparity   FAILED                  N/A                        560000/ -

      parity    0a.00.2         0a    0   2   SA:A   0   SAS 10000 560000/1146880000 572325/1172123568

      data      0a.00.4         0a    0   4   SA:A   0   SAS 10000 560000/1146880000 572325/1172123568

      data      0a.00.6         0a    0   6   SA:A   0   SAS 10000 560000/1146880000 572325/1172123568

      data      0a.00.8         0a    0   8   SA:A   0   SAS 10000 560000/1146880000 572325/1172123568

      data      0a.00.0         0a    0   0   SA:A   0   SAS 10000 560000/1146880000 572325/1172123568

      data      0a.00.10        0a    0   10  SA:A   0   SAS 10000 560000/1146880000 572325/1172123568

Pool1 spare disks (empty)

Pool0 spare disks (empty)

Partner disks

RAID Disk       Device          HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)    Phys (MB/blks)

---------       ------          ------------- ---- ---- ---- ----- --------------    --------------

partner         0b.00.5         0b    0   5   SA:B   0   SAS 10000 0/0               572325/1172123568

partner         0b.00.3         0b    0   3   SA:B   0   SAS 10000 0/0               572325/1172123568

partner         0b.00.9         0b    0   9   SA:B   0   SAS 10000 0/0               572325/1172123568

partner         0b.00.11        0b    0   11  SA:B   0   SAS 10000 0/0               572325/1172123568

partner         0b.00.1         0b    0   1   SA:B   0   SAS 10000 0/0               572325/1172123568

partner         0b.00.7         0b    0   7   SA:B   0   SAS 10000 0/0               572325/1172123568

matfesto0103> aggr status -r

Aggregate aggr0 (online, raid_dp, degraded) (block checksums)

  Plex /aggr0/plex0 (online, normal, active, pool0)

    RAID group /aggr0/plex0/rg0 (degraded, block checksums)

      RAID Disk Device          HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)    Phys (MB/blks)

      --------- ------          ------------- ---- ---- ---- ----- --------------    --------------

      dparity   FAILED                  N/A                        560000/ -

      parity    0a.00.3         0a    0   3   SA:B   0   SAS 10000 560000/1146880000 572325/1172123568

      data      0a.00.5         0a    0   5   SA:B   0   SAS 10000 560000/1146880000 572325/1172123568

      data      0a.00.7         0a    0   7   SA:B   0   SAS 10000 560000/1146880000 572325/1172123568

      data      0a.00.9         0a    0   9   SA:B   0   SAS 10000 560000/1146880000 572325/1172123568

      data      0a.00.1         0a    0   1   SA:B   0   SAS 10000 560000/1146880000 572325/1172123568

      data      0a.00.11        0a    0   11  SA:B   0   SAS 10000 560000/1146880000 572325/1172123568

Pool1 spare disks (empty)

Pool0 spare disks (empty)

Partner disks

RAID Disk       Device          HA  SHELF BAY CHAN Pool Type  RPM  Used (MB/blks)    Phys (MB/blks)

---------       ------          ------------- ---- ---- ---- ----- --------------    --------------

partner         0b.00.6         0b    0   6   SA:A   0   SAS 10000 0/0               572325/1172123568

partner         0b.00.2         0b    0   2   SA:A   0   SAS 10000 0/0               572325/1172123568

partner         0b.00.8         0b    0   8   SA:A   0   SAS 10000 0/0               572325/1172123568

partner         0b.00.0         0b    0   0   SA:A   0   SAS 10000 0/0               572325/1172123568

partner         0b.00.4         0b    0   4   SA:A   0   SAS 10000 0/0               572325/1172123568

partner         0b.00.10        0b    0   10  SA:A   0   SAS 10000 0/0               572325/1172123568

7 REPLIES

JGPSHNTAP

Dude, you have no spares.

SUPORTE_MAMIRAUA

Do you have any docs that can show me how to do it?

JGPSHNTAP

What are you trying to do?

It looks like you have two failed dparity disks and no spares. Do you have any unowned disks?

disk show -n

aggr status -s

aggr status -f
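
If those turn up an unowned disk, assigning it should give the RAID group its dparity back once reconstruction runs. Something like this, where 0a.00.12 is a made-up disk ID (use whatever disk show -n actually reports):

disk assign 0a.00.12

aggr status -r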

You don't need docs to understand failed drives. Are you managing live data on that cluster?

SUPORTE_MAMIRAUA

There is no data on this cluster anymore.

I'm trying to reconfigure it to use raid_dp; it was configured as RAID4.

I tried to destroy the aggr and rebuild it, but it says that I have flexible volumes on it... and when trying to destroy the volume, it says that I cannot destroy vol0 because it is a root vol.

matfesto0102> aggr status -s

Pool1 spare disks (empty)

Pool0 spare disks (empty)

matfesto0102> aggr status -f

Broken disks (empty)

JGPSHNTAP

OK, you need to offline all the volumes, destroy all the aggregates, and then zero the spares and start over.
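
Something like this (volume and aggregate names here are placeholders; vol0 is the root volume, so it can't be destroyed from a running system, but the boot menu wipe takes care of that one):

vol offline some_vol

vol destroy some_vol

aggr offline some_aggr

aggr destroy some_aggr

disk zero spares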

Boot into maintenance mode and start over.

But you need to learn the basics of NetApp, and it doesn't sound like you have. Start with the WBT.

SUPORTE_MAMIRAUA

No, I've never worked with NetApp before, just Dell storage.

Thanks for now, man! I'll take a look at the WBT.

DOMINIC_WYSS

I think you added the spare disks to the aggregates and switched to raid_dp afterwards,

so it ended up without spares and with failed (non-existent) dparity disks.

The system was configured with raid4 because it has a low disk count (only 12).

You will need two spares (one per controller), and as you want raid_dp, you lose two more disks per controller.

So basically you end up with 6 lost disks and only 3 data disks left per controller. This will be slow as hell!

I recommend using the second controller only for HA failover, with raid4 and one spare (or even zero spares, but then you need to set options raid.min_spare_count 0).
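
On that failover-only head it would look something like this (assuming its root aggregate is the default aggr0):

options raid.min_spare_count 0

aggr options aggr0 raidtype raid4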

Then give all the other disks to the first controller, with one spare, so you'll have at least 6 data disks on that head, which means double the performance of the old config.

Disk spindle count is more important than CPU in this situation.

The easiest way to reset is to hit Ctrl-C during boot to get into the boot menu and select option 4, which will zero all disks and create an aggr with only three disks (do it on both heads).

After that, you can distribute the disks as you like (with "disk remove_ownership" and "disk assign") and size the aggregates (aggr add -d).
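
A rough sketch of that last part, with made-up disk IDs (check disk show -v for the real names; disk remove_ownership needs maintenance mode or priv set advanced):

priv set advanced

disk remove_ownership 0b.00.7    (on the head giving the disk away)

disk assign 0b.00.7    (on the head that should own it)

aggr add aggr0 -d 0b.00.7    (grow that head's aggregate)

aggr options aggr0 raidtype raid_dp    (convert to raid_dp once there are enough disks)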

