Each controller (FAS6280) of our MetroCluster has one 2TB Flash Cache module, one SSD aggregate (18 disks, no Flash Pool) and one SAS aggregate (180 disks).
The results of different performance tests (NetApp SIO) show that the SAS disks are as fast as, or faster than, the SSDs. Is this possible? I thought that SSD performance should be much better than SAS.
Is it possible that the Flash Cache is also serving read requests for the SSD aggregate? Can I disable Flash Cache on the SSD aggregate?
Thanks for your help.
While testing, monitor the usage of FlashCache with this command:
stats show -p flexscale-access
and you will see whether you are serving data from Flash Cache. Please also keep in mind the size of the aggregates: you are comparing an 18-disk aggregate against a 180-disk aggregate and getting the same performance. That is the advantage of the SSDs.
I didn't think about monitoring Flash Cache while testing; I will do that as soon as possible.
Concerning the size of the aggregates, you're right: 18 disks are fewer than 180. But if I compare the IOPS of both disk types, SSD (50,000 per disk) versus SAS (175 per disk), then the SSD aggregate should be a lot faster.
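For what it's worth, the back-of-envelope arithmetic behind that expectation can be sketched like this (a quick calculation using only the per-disk figures quoted in this thread):

```python
# Naive aggregate IOPS estimate from the per-disk figures quoted above.
ssd_disks, ssd_iops_per_disk = 18, 50_000
sas_disks, sas_iops_per_disk = 180, 175

ssd_total = ssd_disks * ssd_iops_per_disk   # theoretical SSD aggregate IOPS
sas_total = sas_disks * sas_iops_per_disk   # theoretical SAS aggregate IOPS

print(f"SSD aggregate: {ssd_total:,} IOPS")  # 900,000
print(f"SAS aggregate: {sas_total:,} IOPS")  # 31,500
print(f"Ratio: {ssd_total / sas_total:.1f}x")  # 28.6x on paper
```

On paper the SSD aggregate should be roughly 28x faster, which is why matching results between the two aggregates look so surprising; the catch is that this math ignores what the controller head itself can process.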
Unfortunately I don't have access to the Storage Performance Modeler. I ran one more test today and could verify that data is not being read from Flash Cache on the SSD aggregate. Last week I had an average of 50,000 IOPS; today I reached 82,000 with the same number of disks.
Are there any updates on this topic?
We have experienced the same when using an SSD aggregate (8 SSD disks, RAID-DP) and a SAS (110 disks) + SSD (1TB Flash Pool) hybrid aggregate; the performance is about the same, as the original poster said. Any explanation why, please?
Workloads differ of course, but that 50,000 IOPS/SSD figure is unrealistic. We went through the Service Design Workshop with NetApp, and their data shows 4,272 IOPS/SSD for the calculations. Also keep in mind what the controller itself can handle. I'd be willing to bet you're seeing the peak SSD performance possible with the FAS6280, whereas the SAS + Flash Pool setup ends up having less overhead to handle the workload. This is why some of the startup flash vendors are in trouble: if their controllers don't scale to handle the 10 and 15TB SSDs about to start shipping, their IOPS/TB is going to be terrible (in comparison to using smaller SSDs or scaling out with the larger ones).
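Plugging that workshop planning figure into the same per-disk arithmetic makes the point concrete (a quick sketch; the 4,272 IOPS/SSD number is the workshop figure quoted above, not an official spec):

```python
# Redo the per-disk arithmetic with the Service Design Workshop planning
# figure (4272 IOPS per SSD) instead of the optimistic 50,000.
workshop_iops_per_ssd = 4272
ssd_disks = 18

planning_estimate = ssd_disks * workshop_iops_per_ssd
print(f"Planning estimate: {planning_estimate:,} IOPS")  # 76,896
```

That lands in the same ballpark as the roughly 82,000 IOPS measured earlier in this thread, which suggests the SSD aggregate is behaving about as the sizing data predicts and the 50,000 IOPS/disk input was the unrealistic part of the comparison.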
I have a brand new DS2246 shelf of 400GB SSD drives. With no workloads on there, I should be able to get some great numbers.
As I am running multiple tests over various protocols (CIFS, iSCSI, NFS), I am getting nearly identical results for read/write speeds and throughput. I would expect better numbers from SSD.
This should not be the case - I suggest you log a support ticket, then contact your local sales team to engage performance specialists (they will need the ticket number to reference). If the system was partner installed, also engage them.
Quick things to look at: ensure your FibreBridges are up to date (we released new firmware, v2.85, for the ATTO 7500s fairly recently), that you are running ONTAP 9.1P9 on that platform, and that your FC fabrics and switches (including buffer credits) are configured correctly.
Best of luck for an easy resolution.