NetApp Storage Subsystem Design: Spinning Disks

Part 1 of a 4-part series on choosing media for NetApp FAS storage


By Tushar Routh, Sr. Manager, Storage Products, NetApp


NetApp continues to introduce new and enhanced storage technologies to increase the speed, security, availability and overall ROI of your storage systems. We spend a lot of effort designing and qualifying solutions to meet the broadest possible range of application needs.


But which technologies are the best for your specific applications, and what should you be thinking about as you plan future storage deployments?

In this four-part blog series, I’ll go through the major drive, shelf and storage interconnect choices available today – along with a glimpse at what’s coming down the pipe – to help answer these questions.


This series will focus on four areas:


  • Hard disk drives
  • Solid-state drives
  • Deployment guidelines, shelves and interconnects
  • Self-encrypting disks


HDD Deployment is Evolving

Deploying hard disk drives (HDDs) is no longer just about getting enough spindles to support your workloads; it’s increasingly about pairing drives with the right capacity and performance profile with the right flash options. Combining HDDs and flash to create hybrid storage really changes the calculus of storage deployment.


This is one of the key reasons NetApp is expanding its range of drive options. We let you mix various types of media in a single FAS storage system or a single cluster to address a wide variety of storage requirements.


Let’s take a look at what’s happening with two classes of spinning disks: high-capacity HDDs and performance HDDs, which still account for the bulk of the storage capacity shipped today.


High-Capacity HDDs

Back in 2002, NetApp pioneered the use of capacity-oriented drives for secondary storage instead of tape. A few years later – in large part due to the increased reliability made possible by RAID-DP – NetApp made it feasible to deploy this type of drive for primary storage.


In order to deliver maximum density and the lowest cost per gigabyte of capacity, high-capacity HDDs continue to use the large form factor (LFF) 3.5” format, typically with a rotational speed of 7,200 RPM.


Late last year, NetApp released a 4TB high-capacity drive for use in our 4U, 48-drive DS4486 high-capacity disk shelf, making NetApp the first major storage vendor to ship HDDs at this capacity. The combination of 4TB drives and the DS4486 disk shelf provides a level of capacity and density that makes it ideal for nearline, archive and backup applications. In April 2013, NetApp also released a separate 4TB drive for use in our 4U, 24-drive DS4246 disk shelf to address high-capacity storage needs for production workloads. These drives can be deployed by themselves or in combination with flash for greater performance.


Nearline SAS disk options – which combine the SAS interface with the media and rotational speed of enterprise SATA drives – are becoming the preferred choice for nearline and production workloads. Over the next year or two, we should see available options reach 5TB and 6TB capacity points. The SATA interface will remain the preferred choice for backup and archival workloads over that period.


Performance HDDs

The performance HDD market has moved away from 3.5” LFF HDDs to 2.5” small form factor (SFF) options. The major suppliers have signaled that 3.5”, 15K RPM LFF drives will be going out of production in the near future. NetApp introduced its 2U, 24-disk DS2246 disk shelf some time ago in preparation for this transition.


In the past, NetApp was primarily concerned with delivering low capacity points for this class of drive to allow you to deploy the number of spindles needed to meet performance requirements while minimizing overprovisioning of capacity. However, now that it’s become common to use a combination of performance disks and flash – either Flash Cache or Flash Pool – to address performance, we’ve begun making larger capacity performance drives available.


We recently released a new 1.2TB, 10K RPM HDD that expands the range of SFF hard disk drives we offer. Even larger capacities are likely to come in the future, with drive manufacturers due to deliver 1.8TB SFF capacities in the next year or so. Over time, some of the lower capacity points will be phased out.


There are 15K SFF drive options on the market as well; however, NetApp does not offer them because they cost two to three times more than 10K SFF drives. Because SSDs offer better I/O density, we believe that they are a better option than 15K SFF HDDs and are likely to displace them.


With more capacity points, you can combine Flash Cache, Flash Pool or Flash Accel with an optimum number of spindles to address the capacity and performance needs of your workloads. For example, suppose a particular application needs 20TB of capacity and 10,000 IOPS and has a cache hit rate of 80% with Flash Cache. With that hit rate, only about 2,000 IOPS ever reach the disks, so the roughly 17 spindles of 1.2TB drives needed to provide 20TB (before RAID overhead) deliver more than enough performance to satisfy cache misses.
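The arithmetic behind that example can be sketched as a quick back-of-the-envelope calculation. This is only an illustration, not a NetApp sizing tool: the figure of roughly 150 random IOPS per 10K RPM SFF drive is my working assumption, and real sizing should use measured numbers for your workload.

```python
import math

def spindles_needed(capacity_tb, workload_iops, cache_hit_rate,
                    drive_tb=1.2, iops_per_drive=150):
    """Rough hybrid sizing: drives needed for capacity vs. for cache-miss IOPS.

    iops_per_drive is an assumed ~150 random IOPS for a 10K RPM SFF HDD.
    Ignores RAID overhead and spare drives for simplicity.
    """
    # Drives required just to hold the data.
    for_capacity = math.ceil(capacity_tb / drive_tb)
    # Only the cache misses reach the spindles.
    miss_iops = workload_iops * (1 - cache_hit_rate)
    for_performance = math.ceil(miss_iops / iops_per_drive)
    return for_capacity, for_performance

cap, perf = spindles_needed(20, 10_000, 0.80)
print(cap, perf)  # 17 drives for capacity; only 14 needed for the 2,000 miss IOPS
```

Since the capacity-driven drive count (17) exceeds the performance-driven count (14), the configuration is capacity-bound and the cache misses are comfortably covered.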


For existing workloads running on NetApp, you can use predictive cache statistics (PCS) to predict what your cache hit rate will be for a given amount of flash. This helps dial in the right amount of flash and HDDs before you invest in new media.


HDDs: Wrap Up

For the short term, your HDD choices will continue to consist of the types of high-capacity and performance drive options I’ve described.

In the longer term, we see a few new classes of spinning media coming along:


  • Drives intended for archival with a lower price point and a low-duty cycle are in development.
  • Cloud or big data drives are primarily for situations where 3 or more copies of data are maintained. NetApp sees a possible role for this class of drive for backup storage.
  • Hybrid drives that combine HDD and flash technology in a single device are just starting to appear in the market. The use case for these is currently limited to certain server environments.


In the next post in this series, I’ll talk about SSD options.


Hi Tushar,

That was good insight into how NetApp has, over time, pioneered the entry of certain disk form factors into the market, and into the amount of work involved in testing drives and tracking numbers like ARR, AFR and MTBF. As application and customer needs change, deploying the right solution is becoming increasingly interesting (not sure if that is the right positive word!), and the noise we hear about solid-state hybrid drives, and perhaps the impact of memristors as a fourth basic circuit element, could definitely change a few things going forward.

Road map discussions are often pretty difficult, but I definitely appreciate you sharing information about the plans ahead and the way things are being looked at.

Looking forward to the upcoming parts of your series.



Hi Bino,

Thank you for your comments. I wanted to touch on the hybrid drives you mentioned. We offer a hybrid solution called Flash Pool, where the system decides the best utilization of the resources, and you decide whether to add more SSDs, more HDDs, or both, depending on your use case. Hybrid drives do not offer this flexibility because you must add both together. Also, you cannot deploy a pure SSD system using hybrid drives. So having the modularity of separate SSDs and HDDs does provide maximum flexibility for the user.

Thanks again.