Raid group

On the FAS2650, RAID-DP has a limitation of 14 disks per RAID group.

 

 

The professional installer created one aggregate with 20 disks split across two RAID groups:

 

rg0 has 14 disks.

rg1 has 6 disks.

 

Would it be better if I adjusted both groups to be equal, say 10 disks each per RAID group? If I made that adjustment, would data be lost?

 

 

My other (2nd) aggregate has 13 disks, which belong to one RAID group.

 

 

Thanks,

SVO

Re: Raid group

Hi,

 

Please note the following limitations with respect to RAID groups:

You change the size of RAID groups on a per-aggregate basis. You cannot change the size of individual RAID groups. The following list outlines some facts about changing the RAID group size for an aggregate:

  • If you increase the RAID group size, more disks or array LUNs will be added to the most recently created RAID group until it reaches the new size.
  • All other existing RAID groups in that aggregate remain the same size, unless you explicitly add disks to them.
  • You cannot decrease the size of already created RAID groups.
  • The new size applies to all subsequently created RAID groups in that aggregate.

Ref: https://library.netapp.com/ecm/ecm_download_file/ECMP1141781 page 116
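To illustrate the "grow only" behaviour described above, here is a minimal sketch (a hypothetical model for illustration only, not a real ONTAP API):

```python
# Illustrative model of ONTAP's raidsize behaviour: increasing the
# RAID group size only grows the most recently created RAID group;
# existing groups keep their size, and the size can never be decreased.

def add_disks(raid_groups, new_disks, raidsize):
    """Append new_disks, filling the last RAID group up to raidsize,
    then starting new groups as needed."""
    groups = [list(g) for g in raid_groups] or [[]]
    for disk in new_disks:
        if len(groups[-1]) >= raidsize:
            groups.append([])
        groups[-1].append(disk)
    return groups

# The aggregate as posted: rg0 has 14 disks, rg1 has 6.
aggr = [[f"d{i}" for i in range(14)], [f"d{i}" for i in range(14, 20)]]

# Raising raidsize to 17 and adding 3 disks grows only rg1:
aggr = add_disks(aggr, ["d20", "d21", "d22"], raidsize=17)
print([len(g) for g in aggr])  # -> [14, 9]
```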

 

Since you cannot decrease the size of a RAID group that has already been created, I recommend you continue to use the system as is.

Storage system performance is optimized when all RAID groups are full. You can add more disks in the future to bring the rg1 RAID group up to size.

Please refer to "Considerations for sizing RAID groups for disks" on page 107 of the link above.

Re: Raid group

Hi SVO,

 

I'm guessing from your statement that the FAS2650 is limited to 14-disk RAID-DP that you are using 6TB or 8TB drives...

 

[Image attachment: FAS6250 RAID Sizes.JPG]

Why not go for a RAID-TEC aggregate with a single RAID group of 20 disks, giving you 17 data drives? This also falls in line with best practice, which says to avoid having any RAID group that is less than one half the size of the other RAID groups in the same aggregate.
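As background for the data-drive count above: each RAID type reserves a fixed number of parity disks per RAID group (RAID4: 1, RAID-DP: 2, RAID-TEC: 3). A quick sketch of the comparison:

```python
# Parity disks per RAID group for each NetApp RAID type.
PARITY = {"raid4": 1, "raid_dp": 2, "raid_tec": 3}

def data_disks(group_size, raidtype):
    """Data drives left in a RAID group after parity is reserved."""
    return group_size - PARITY[raidtype]

# One 20-disk RAID-TEC group vs the posted 14 + 6 RAID-DP split:
print(data_disks(20, "raid_tec"))                            # -> 17
print(data_disks(14, "raid_dp") + data_disks(6, "raid_dp"))  # -> 16
```

So a single RAID-TEC group gives one more data drive than the current split, with triple instead of double parity.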

 

As Sahana states, this is disruptive (i.e. you'll need to destroy the aggregate to reconfigure the RAID groups); however, all data could first be moved to the second aggregate (assuming you have the space), which can be completed nondisruptively using the volume move command (https://library.netapp.com/ecm/ecm_download_file/ECMLP2496251).

 

Hope this helps.

 

Cheers,

Grant.

Re: Raid group

You are correct about the 8TB drives.

 

 

Here are the current drives of the NAS.

 

12 x 900GB, 10K RPM drives (OS for now, 2 aggregates)

36 x 8TB, 7.2K RPM drives (data storage, 2 aggregates)

 

 

In terms of performance, is RAID-TEC the same as RAID-DP?

 

I am leaning towards having one large RAID-TEC aggregate instead of two aggregates, for more spindles. I have already moved all the volumes to the 2nd aggregate. Here is my plan:

 

1) already moved volumes to 2nd aggregate (aggr2)

 

2) wipe out 1st aggregate (aggr1)

 

3) create new aggregate (aggr1) with RAID-TEC and set the RAID group size to 29 disks (even though only 22 are available at this time)

 

4) move volumes back to aggr1

 

5) wipe out 2nd aggregate (aggr2)

 

6) add more disks to 1st aggregate (aggr1)
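A rough capacity check for step 3 of the plan above (illustrative arithmetic only; RAID-TEC reserves 3 parity disks per RAID group):

```python
# Rough arithmetic for a 29-disk RAID-TEC group, with disk counts
# taken from the plan above.
raidsize = 29       # target RAID group size
available_now = 22  # disks available today
parity = 3          # RAID-TEC parity disks per group

data_now = available_now - parity
data_full = raidsize - parity
print(data_now, data_full)  # -> 19 26
```

So the group would start with 19 data drives and grow to 26 once fully populated (raw counts, before right-sizing and WAFL reserve).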

Re: Raid group

Hi, sorry for the delay in replying.

 

In terms of performance there should be no appreciable difference between the two. With larger RAID groups you can write in larger stripes, which is always a benefit; however, the reason RAID-TEC was introduced is to help with the risk introduced by the longer time these larger drives take to reconstruct. If a second disk were to fail in the same RAID group while the first was still being reconstructed, you could be in a situation where the multi-disk failure takes the aggregate offline. This risk used to be minimised by making the RAID groups smaller, so there was less chance of a second disk failing, but fewer data disks mean performance may suffer. RAID-TEC allows for 3 disks to fail, while at the same time allowing larger RAID groups to be used.

 

It is also best practice to have 2 spare drives for each disk type (excluding SSDs) to ensure Maintenance Centre (MC) is in operation. MC will proactively fail a drive if errors are being reported and put it through the manufacturer's diagnostics to confirm whether it is indeed about to fail. If it passes, it is put back into the spares pool; otherwise it is failed. Without MC the drive would be immediately failed and you would need to go through the reconstruct process. Therefore, with these large drives I would suggest you ensure you have 2 spares to help minimise disk failures.

 

Bearing these in mind, for your 36 x 8TB drives I would actually aim for two RAID-DP groups of 17 disks each. At the end of the process this gives you 2 RAID groups of exactly the same size, not so many spindles as to expose you to multi-disk failures, and the recommended 2 spares.
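The arithmetic behind this layout can be checked with a quick sketch (raw numbers only; actual usable capacity is lower once right-sizing and WAFL reserve are taken into account):

```python
# Sketch of the suggested layout for the 36 x 8TB drives:
# two 17-disk RAID-DP groups plus 2 spares (RAID-DP reserves
# 2 parity disks per group).
total_drives = 36
spares = 2
groups = [17, 17]
parity_per_group = 2

assert sum(groups) + spares == total_drives  # all 36 drives accounted for
data_disks = sum(g - parity_per_group for g in groups)
print(data_disks)      # -> 30 data drives
print(data_disks * 8)  # -> 240 (TB raw, before right-sizing
                       #    and WAFL reserve reduce usable space)
```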

 

The only other thought is that, with the movement of the volumes between the aggregates, the first RAID group of the 8TB drives will initially contain all your data and the second RAID group will be empty. This will not be too much of a problem if you are expecting a fair amount of data to be written or changed, since all new blocks will be written to the new RAID group. Over a period of time the data will then start to level out across the RAID groups. There are two ways the layout of your data can be measured:

 

  • To view how the data layout is optimised use the volume reallocation measure command.
  • To view how the free space is optimised use the storage aggregate reallocation start command

Both are detailed in the ONTAP 9 Documentation command man pages (https://docs.netapp.com/ontap-9/index.jsp). Schedules can also be set to automate the optimisation of the layout.

 

The reason for not using RAID-TEC in this situation is that your second RAID group would have significantly fewer data disks than the first, and since this is where all the writes would initially go, it would likely adversely affect performance.

 

Otherwise your plan would make the best use of the disks, both for capacity and performance.

 

Hope this helps.

 

Thanks,

Grant.