It's not clear what the source or assumptions are behind the IOPS-per-disk values by type in your posting, because they are close to, but different from, what NetApp uses as reference data points.
Before going further, I must say that system sizing should NEVER be done in a 'back of the envelope' fashion by dividing a system-level IOPS requirement by IOPS/disk to arrive at the number of disk drives needed. NetApp has far more comprehensive and accurate tools that SEs use to properly size systems and configurations to meet expected workloads, with headroom for growth, non-application loads, and the unexpected/unknown.
Because of NetApp system architecture - FAS and V-Series alike - IOPS per disk is most useful only for small-block (4KB and 8KB) random reads from disk. Note that this number says nothing about the benefit of read cache for this workload: a cacheable data set reduces the number of spindles required, and that factor is taken into account in the sizing tools mentioned above.
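To make the cache point concrete, here's a rough sketch - illustrative only, NOT a substitute for NetApp's sizing tools, and all the numbers (20,000 IOPS workload, 220 IOPS/disk, the hit rates) are assumptions I've picked for the example:

```python
import math

def spindles_needed(workload_iops, cache_hit_rate, iops_per_disk):
    """Disks needed to serve the cache-miss portion of a
    small-block random-read workload (back-of-envelope only)."""
    disk_iops = workload_iops * (1.0 - cache_hit_rate)
    return math.ceil(disk_iops / iops_per_disk)

# Assumed example: 20,000 random-read IOPS, 220 IOPS per disk.
print(spindles_needed(20_000, 0.0, 220))  # no cache benefit -> 91 disks
print(spindles_needed(20_000, 0.5, 220))  # 50% read-cache hits -> 46 disks
```

The point is only the shape of the math: every read served from cache is a read the spindles never see, which is why the cacheability of the data set matters as much as raw IOPS/disk.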
Random-read IOPS/disk is a function of disk mechanical performance - rotational speed and seek time, with a small contribution from areal density - and interface bandwidth has effectively NO IMPACT. Therefore, a SAS v2 disk with an interface bandwidth of 6 Gb/sec is no more capable from an IOPS throughput standpoint than a 3 Gb/sec SAS v1 disk. Random reads from disk can only consume a small fraction of the interface bandwidth: e.g. 200 IOPS at 8KB per I/O is only 1.56 MB/sec, or 12.5 Mb/sec - much less than 1% of even the SAS v1 bandwidth.
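The arithmetic behind that claim, spelled out (same assumed figures as above: 200 IOPS, 8KB I/Os, 3 Gb/sec SAS v1 link):

```python
# Back-of-envelope check: small-block random reads consume a
# negligible fraction of the disk interface bandwidth.

iops = 200         # random-read IOPS for one disk (assumed)
io_size_kb = 8     # 8 KB per I/O
sas1_mbps = 3000   # SAS v1 interface bandwidth, Mb/sec

mb_per_sec = iops * io_size_kb / 1024     # MB/sec moved by the workload
mbit_per_sec = mb_per_sec * 8             # same figure in Mb/sec
utilization = mbit_per_sec / sas1_mbps    # fraction of the link used

print(f"{mb_per_sec:.2f} MB/sec = {mbit_per_sec:.1f} Mb/sec")
print(f"{utilization:.2%} of SAS v1 bandwidth")  # well under 1%
```

So even doubling the link speed to SAS v2 leaves random-read IOPS bound by the mechanics of the drive, not the wire.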
>>It's not clear what the source or assumptions are behind the IOPS per disk by type in your posting, because the values are close but different than what NetApp uses as data points for reference purposes.
Can you post the values NetApp uses as data points for reference purposes?
--understanding these values helps validate sizing-tool output, helps compare proposals from other vendors, etc., and provides one more data point in the solution-generation process