Another thing that is not well documented is the mixed drive sizes, but that's easy to explain: the larger drives hold double the amount of data, so they get more read requests (though I'm not sure how writes are distributed).
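To put a rough number on that (a toy model, not from any NetApp documentation - the drive capacities and counts below are assumptions for illustration, since the thread only says the larger drives hold double the data):

```python
# Toy model: assume read requests land in proportion to the data
# stored on each drive. Capacities and counts are illustrative
# assumptions, not the poster's actual configuration.
small_tb = 0.9   # assumed smaller-drive capacity, TB
large_tb = 1.8   # the 1.8TB SAS drives discussed in the thread

drives = [small_tb] * 20 + [large_tb] * 3   # hypothetical drive mix
total_tb = sum(drives)

# Expected share of reads per drive if reads are uniform over the data:
share_small = small_tb / total_tb
share_large = large_tb / total_tb
print(f"each large drive serves {share_large / share_small:.1f}x "
      f"the reads of a small one")
```

Under this model each 1.8TB drive simply serves twice the read load of a half-size drive, which is why the larger RAID group can run hot.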
As you can see, your environment doesn't comply with these recommendations.
As for how to resolve all of this: I don't think you can without destroying your main aggregate, and I don't know whether you have spare capacity to move the data to.
The slightly funny thing is that I don't think you'll make things worse by just adding the 5 drives to the existing aggregate. You would likely make things better, since you'd remove what I believe is the current bottleneck: the hot RAID group. You'd better validate this with statit -b and statit -e at peak time to confirm it really is the bottleneck.
Great answer from Gidon - this is indeed outside of what we would typically suggest. But you're here now...
Just a few minor points. As of ONTAP 9, the smallest RAID group can be as small as half the size of the other RAID groups (https://library.netapp.com/ecm/ecm_download_file/ECMLP2496263, page 72). And since 15K drives are no longer available, we have had to supply 10K 600GB drives in LFF carriers as replacements for some systems, and have not seen performance issues from it - so don't design to mix RPMs of similar disks, but it's not the end of the world if you have to.
If this were my system, I would also lean toward adding the additional 1.8TB SAS drives to the existing RAID group - run the command with the -simulate option first to ensure it adds them to the correct RAID group and aggregate. That's mostly because this is a smaller FAS2552 system and you are likely using it for general-purpose NAS, aiming for capacity over performance. In any other scenario I would create a new aggregate, move data around so the improperly built aggregate could be destroyed, and move its 1.8TB disks into the new one.
The concern with mismatched-capacity disks is IO density: you have essentially (though not exactly) the same number of IOPS per drive to serve double the amount of data, so streaming data in or out may be inefficient. But since I'm guessing the first three 1.8TB drives were added for capacity, there is likely a strong affinity of existing data to the smaller drives and of newer data to the larger-drive RAID group. Separating them into different aggregates would still be best.
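A rough way to see the IO-density point (the per-drive IOPS figure here is a rule-of-thumb assumption for illustration, not a measured or vendor-quoted number): a 10K SAS spindle delivers roughly the same random IOPS regardless of its capacity, so doubling the capacity halves the IOPS available per TB of data stored.

```python
# IO density: random IOPS a spindle can deliver per TB it stores.
# ~140 random IOPS for a 10K SAS drive is an assumed rule of thumb,
# not a NetApp specification.
DRIVE_IOPS = 140

def io_density(capacity_tb: float, iops: int = DRIVE_IOPS) -> float:
    """IOPS available per TB of data stored on one drive."""
    return iops / capacity_tb

print(f"900GB drive: {io_density(0.9):.0f} IOPS/TB")
print(f"1.8TB drive: {io_density(1.8):.0f} IOPS/TB")
```

Same spindle speed, half the IOPS per TB on the larger drive - that's the inefficiency being described.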
Thank you for your replies, guys. They have been very helpful.
I have another question:
- Can I remove the existing RAID group with these 3 x 1.8TB drives from the existing aggregate, then create a new aggregate with a new RAID group that uses those three disks plus the 5 new 1.8TB disks?
- If yes, will it have any impact on the existing aggregate other than the data deletion?
These 5 new 1.8TB disks will be used for a file server using the CIFS protocol.
New files will tend to be written to the emptier disks, but that is not guaranteed. If you feel a given volume's performance is not improving, you can run a volume reallocate to ensure it is spread evenly over all drives, but this is a slow, low-priority process that may take several weeks (or more!).