ONTAP Hardware

FAS2552 drive expansion

IlirS

We have one FAS2552 dual-controller storage system with 24 x 900GB 10K SAS hard disks, plus one DS2246 disk shelf containing 12 x 400GB SSDs.

From 23 of the SAS drives (one kept as a spare) we created one aggregate. On this aggregate we also use a Flash Pool with 4 of the SSDs.

With the remaining 8 SSDs we created a second aggregate.

 

So in total we have two aggregates: one with SAS drives (plus the Flash Pool) and another with 8 SSDs.

 

We recently added another RAID group (RAID-DP) with three 1.8TB disks to the first aggregate.

So that aggregate now contains disks of different sizes.

 

Now we want to add 5 x 1.8TB disks.

 

Which would be the better solution in terms of performance:

- Creating a new aggregate with these 5 x 1.8TB disks, or adding them to the existing RAID group in the first aggregate?

 

- If we create another aggregate, can the existing RAID group with the 3 x 1.8TB disks be moved to the new aggregate?

 

Our concern is performance. Does an aggregate with mixed disk sizes perform worse than an aggregate where all disks are the same size?

 

These 5 x 1.8TB disks will be used for file server service.

 

Thank you 

Ilir

 

1 ACCEPTED SOLUTION

GidonMarcus

Hi

 

A few statements first:

https://www.netapp.com/us/media/tr-3838.pdf

 

[image: gewneral.png]

 

 

RAID group sizes (disk count difference):

[image: raid size diffrent.png]

 

https://www.netapp.com/us/media/tr-3437.pdf

[image: drive speed.png]

 

Another thing that is not well documented is mixing drive sizes, but it is easy to explain: the larger drives hold roughly double the amount of data, so they receive more read requests (though I'm not sure how writes are distributed).

 

 

 

As you can see, your environment does not comply with these recommendations.

As for how to resolve all of this: I don't think you can without destroying your main aggregate, and I don't know whether you have spare capacity to move the data to.

 

The somewhat funny thing is that I don't think you will make things worse by just adding the 5 drives to the existing aggregate - you would likely actually make them better, since you remove what I believe is the current bottleneck: the hot RAID group. You had better validate with statit -b and statit -e at peak time to confirm it is indeed the bottleneck.
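A rough sketch of that validation from the clustered ONTAP node shell (the node name is a placeholder): statit -b begins collecting per-disk statistics, statit -e ends the sample and prints them.

```shell
# Assumed node name "node-01"; run during peak load.
::> system node run -node node-01 -command "statit -b"   # begin sampling
# ...let it run for a few minutes under load...
::> system node run -node node-01 -command "statit -e"   # end sampling and print the report
```

In the disk section of the output, look at the per-disk utilization: a RAID group whose disks sit near 100% busy while the rest of the aggregate is much lower is the hot spot.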

 

Gidi

 

 

Gidi Marcus (Linkedin) - Storage and Microsoft technologies consultant - Hydro IT LTD - UK


9 Replies


AlexDawson

Great answer from Gidon - this is indeed outside of what we would typically suggest. But you're here now..

 

Just a few minor points: as of ONTAP 9, we now say that the smallest RAID group can be as small as half the size of the other RAID groups (https://library.netapp.com/ecm/ecm_download_file/ECMLP2496263 - page 72). Also, as 15K drives are no longer available, we have had to supply 10K 600GB drives in LFF carriers as replacements for some systems and have not seen performance issues from it - so don't design to mix RPMs of similar disks, but it's not the end of the world if you have to.

 

If this were my system, I would also lean toward adding the additional 1.8TB SAS drives to the existing RAID group - run the command with the -simulate option first to ensure it adds to the correct RAID group and aggregate. Mostly because this is a smaller FAS2552 system and you are likely using it for general-purpose NAS, aiming for capacity over performance. In any other scenario I would create a new aggregate, move the data around, destroy the improper aggregate, and move its 1.8TB disks into the newer one.
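For example, assuming clustered ONTAP, the dry run could look like this (the aggregate and RAID-group names here are placeholders):

```shell
# Show the proposed layout without committing anything.
::> storage aggregate add-disks -aggregate aggr1_sas -raidgroup rg2 -diskcount 5 -simulate true
```

With -simulate true the command only reports which disks would go to which RAID group; rerun it without the flag once the layout looks right.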

 

The concerns with mismatched-capacity disks come down to IO density - you have essentially (but not exactly) the same number of IOPS supporting calls to double the amount of data, so streaming data in or out may be less efficient. But since I'm guessing the first three 1.8TB drives were added for capacity, existing data will likely have a strong affinity to the smaller drives and newer data to the larger-drive RAID group. Separating them into different aggregates would still be best.

 

 

IlirS

Hello,

 

Thank you for your replies guys. They have been very helpful.

I have another question:

 

- Can I delete the existing RAID group with these 3 x 1.8TB drives from the existing aggregate, then create a new aggregate with a new RAID group using those three disks plus the new 5 x 1.8TB disks?

- If yes, will it have any impact on the existing aggregate other than the data deletion?

 

These 5 x 1.8TB new disks will be used for a file server with the CIFS protocol.

 

Best regards,

Ilir

GidonMarcus

Hi

 

Disks cannot be removed from an aggregate once they are assigned to it (regardless of which RAID group they are in); a disk can only be replaced with an equal-sized or larger one.

https://kb.netapp.com/app/answers/answer_view/a_id/1035381
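If the goal were only to swap a disk for an equal or larger one, the replacement (not removal) path in clustered ONTAP looks roughly like this (the disk names are placeholders):

```shell
# Copy the contents of disk 1.0.12 to spare 1.1.3, then retire 1.0.12.
::> storage disk replace -disk 1.0.12 -replacement 1.1.3 -action start
```

This is a background copy; the original disk stays in service until the replacement completes.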

 

If you are happy to lose the data, or can migrate it somewhere, destroy the aggregate - then you can create the new layout however you like.

 

Gidi

Gidi Marcus (Linkedin) - Storage and Microsoft technologies consultant - Hydro IT LTD - UK

IlirS

Hello guys,

 

Thank you again for your help.

 

We plan to add these 5 x 1.8TB disks to the existing RG with the 3 x 1.8TB disks. The data in this RG will be used for a VMware datastore and for the CIFS service on the NetApp storage system.

 

- Will there be any significant performance impact with this configuration?

- Does anybody have a guide on how to add these 5 disks to the existing RG?

 

BR,

Ilir

GidonMarcus

Hi

 

I don't think we've had the ONTAP version and mode you are on in this thread.

In either one, though, the OnCommand System Manager "Add Disks" wizard will show you exactly what it's going to do.

On the command line, use the -g switch of "aggr add" to specify the RAID group in 7-Mode, and the -raidgroup parameter of "storage aggregate add-disks" in cDOT.

After adding the disks, run reallocation to make sure the data is spread across all the spindles.
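A sketch of both variants (the aggregate and RAID-group names are placeholders):

```shell
# 7-Mode: add 5 disks to raid group rg1 of aggr1
aggr add aggr1 -g rg1 5

# Clustered ONTAP: same operation, with a dry run first
::> storage aggregate add-disks -aggregate aggr1 -raidgroup rg1 -diskcount 5 -simulate true
::> storage aggregate add-disks -aggregate aggr1 -raidgroup rg1 -diskcount 5
```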

 

As for performance, there are no guarantees, but from what we have seen in this thread we believe this change by itself will improve it.

 

Gidi

Gidi Marcus (Linkedin) - Storage and Microsoft technologies consultant - Hydro IT LTD - UK

IlirS

Hello Guys,

 

Thank you for your helpful information provided.

 

Regards,

Ilir

IlirS

Hello guys,

 

I recently added these 5 x 1.8TB disks to the existing RAID group on the existing aggregate.

Now, my question is:

 

- Is it possible to tie a new volume, which will be used for the CIFS service, to the existing RAID group where the new disks were added?

We would like the new CIFS data to be stored and written on these new disks - or is that not possible due to the write-anywhere file system?

 

Is there any way to know on which disks new data will be written and stored?

 

Best regards,

Ilir

AlexDawson

The "Write Anywhere" part of WAFL is literal.

 

New files will tend to be written to emptier disks, but it is not guaranteed. If you feel a given volume's performance is not improving, you can run a volume reallocate to ensure it is spread equally over all drives - but this is a slow, low-priority process that may take several weeks (or more!).
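In clustered ONTAP, that per-volume reallocation scan can be started and monitored roughly like this (the SVM and volume path are placeholders):

```shell
::> volume reallocation start -vserver svm1 -path /vol/cifs_vol1
::> volume reallocation show
```

Because it runs at low priority, expect it to make slow, steady progress rather than an immediate change.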
