I was pleased when—based on the strong response my previous article on the FAS3200 series generated—Tech OnTap asked me to come back to talk about the design of the FAS6200 series.
Although the earlier FAS6000 series was a radical leap for its time, with more cores and more than 4X the memory of its predecessors, a lot has changed since its introduction. NetApp users naturally want more topline performance, and at the same time we've added new functionality to Data ONTAP® over the intervening years, such as deduplication and compression, that places new demands on storage system resources. That's a double whammy.
With the FAS6200 series our goal was to create a platform with plenty of headroom for both topline performance and important system tasks plus the capability to support a wide range of workloads—everything from archive to IOPS-intensive database loads—and to support those workloads simultaneously. That's a little like building a Maserati and a pickup truck at the same time, but we're excited by the results.
Figure 1) The FAS6200 series.
If you're not already familiar with the general features of the FAS6200 series (and the corresponding V6200 open storage controller models that let you manage disk arrays from EMC, IBM, Hewlett-Packard, Hitachi Data Systems, and other major storage vendors), a recent article by Chris Lueth and Mukesh Nigam does a good job of covering all the speeds and feeds. In this article I want to “take a look under the hood” and focus on a few specific topics:
The Processor/Memory Complex
The engine that drives all the advanced capabilities of Data ONTAP is the processor/memory complex. We looked at a wide variety of processors currently available and ultimately settled on the 4-core Nehalem and 6-core Westmere processors from Intel®. We got pretty excited about these processors when we realized that we could nearly triple memory bandwidth versus our earlier platforms and boost the number of cores on a single controller from 8 to 12 (for the FAS6280). We were able to start shipping systems with the Westmere processors very close to the processor's release: the best alignment NetApp has ever achieved with the Intel product schedule.
In addition to all that processing horsepower and memory bandwidth, we tripled the amount of memory for the platform, giving us 96GB per controller for the FAS6280. That gives us room to more easily drive topline performance and support new features, including NetApp® Flash Cache, which now ships in the majority of new systems.
Flash Cache reduces the number of spindles you need to achieve a given level of performance by as much as 75% and can also significantly reduce the latency of read operations. However, every terabyte of Flash Cache consumes 4GB of system memory for page tables. So you can see that adding multiple terabytes of Flash Cache to a big system consumes significant memory. (It also uses up expansion slots, but I'll get to that in the next section.)
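To make that trade-off concrete, here is a back-of-envelope calculator based only on the 4GB-per-terabyte ratio cited above. The specific Flash Cache capacity and function name are illustrative assumptions, not NetApp sizing rules.

```python
# Illustrative calculator: the article states that every terabyte of
# Flash Cache consumes ~4 GB of system memory for page tables.

def flash_cache_memory_overhead_gb(flash_cache_tb: float) -> float:
    """Return system memory (GB) consumed by Flash Cache page tables."""
    GB_PER_TB = 4  # ratio from the article
    return flash_cache_tb * GB_PER_TB

# Hypothetical example: a FAS6280 controller (96 GB of memory)
# configured with 6 TB of Flash Cache.
overhead_gb = flash_cache_memory_overhead_gb(6)   # 24 GB for page tables
remaining_gb = 96 - overhead_gb                    # 72 GB left for everything else
print(overhead_gb, remaining_gb)
```

Even a few terabytes of cache claims a meaningful slice of controller memory, which is one reason the FAS6280's 96GB matters.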
To round out the new systems, we did a completely new design for the nonvolatile RAM (NVRAM) that Data ONTAP uses to journal write requests. The NVRAM 8 design achieves over 1GB/sec of sustained write performance. Because the NVRAM processes data in smaller, network-sized chunks, achieving that level of performance means the NVRAM has to do 1 million transfers/second. Each transfer has to be set up in a microsecond, which requires not just fast hardware but also extremely efficient interrupt routines to really make it sing and dance.
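The arithmetic behind those figures is worth spelling out. The sketch below assumes a roughly 1KB "network-sized" transfer, which is an illustrative figure; the article itself only gives the aggregate rates.

```python
# Sketch of the NVRAM 8 arithmetic: ~1 GB/sec sustained, moved in
# network-sized chunks (assumed ~1 KB here), implies about a million
# transfers per second, i.e. roughly one microsecond of setup budget
# per transfer.

BYTES_PER_SEC = 10**9          # ~1 GB/sec sustained write rate
CHUNK_BYTES = 1000             # assumed "network-sized" transfer

transfers_per_sec = BYTES_PER_SEC // CHUNK_BYTES   # 1,000,000
setup_budget_us = 1_000_000 / transfers_per_sec    # 1.0 microsecond each
print(transfers_per_sec, setup_budget_us)
```

A one-microsecond budget per transfer is why the interrupt path matters as much as the raw hardware speed.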
A NetApp storage system does 10 to 20 times more I/O per core than a standard server does. Large storage installations put up to 256 cores' worth of application processing power in front of a single NetApp storage system. That's a lot of I/O.
When we began talking with Intel about the Nehalem and Westmere processors, the standard Intel reference designs for implementing those parts supported only a single I/O hub (IOH) chip. Because NetApp wanted all the I/O horsepower it could get, we approached Intel about supporting two IOH chips to double the I/O. We worked with Intel to make that happen and verified that the new design worked as expected.
Two IOH chips give us 72 PCIe gen 2 lanes, while a standard server design usually offers only 20 to 30 lanes. We break those lanes out further using switches to create 152 PCIe lanes of I/O connectivity within the FAS6280, with total internal bandwidth in excess of 72GB per second.
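One way to reconcile the lane count with the bandwidth figure, assuming the 72GB/sec total counts both directions of each link: PCIe gen 2 signals at 5GT/s with 8b/10b encoding, which works out to roughly 500MB/sec of payload per lane per direction.

```python
# PCIe gen 2 per-lane payload bandwidth: 5 GT/s with 8b/10b encoding
# yields ~500 MB/sec (0.5 GB/sec) per lane, per direction.
LANES = 72
GB_PER_LANE_PER_DIRECTION = 0.5
DIRECTIONS = 2  # assumes the quoted total is bidirectional

total_gb_per_sec = LANES * GB_PER_LANE_PER_DIRECTION * DIRECTIONS
print(total_gb_per_sec)  # 72.0
```

The switches that fan the 72 IOH lanes out to 152 lanes add connectivity (more ports and slots), not additional root bandwidth.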
Our new chassis design lets you pair a controller module with 4 PCIe slots and an optional I/O expansion module (IOXM) with an additional 8 PCIe slots. This yields a total of 12 slots for a single controller or 24 slots for a typical HA pair. For comparison, the FAS6080 provided 3 PCI-X slots and 5 PCIe slots. In addition to I/O expansion slots, the FAS6200 series also provides a substantial number of onboard 8Gb FC, 10GbE, and 6Gb SAS ports. (See Table 1.) If you don't need the extra expansion slots, you also have the option of choosing a very dense configuration that provides two controllers (an HA pair) in just 6U of rack space.
Table 1) Comparison of the three new FAS6200 series models with the FAS6080 (previous high end).
As I've already discussed, the additional slots can be used for Flash Cache. Plus, with the transition from FC to SAS disk taking place in the storage industry, we knew we needed to facilitate that transition by providing onboard SAS and FC ports and by making sure that our storage systems could simultaneously support significant numbers of both types of ports, if required.
The onboard ports and additional expansion slots also ensure that the FAS6200 series is fully ready to support Data ONTAP 8 running in Cluster-Mode (C-Mode). You'll be able to support a wealth of 10GbE ports so that networking does not become a limitation in C-Mode configurations.
A New Level of Resiliency
For the FAS6200, we also wanted to raise the bar on reliability, availability, serviceability, and manageability (RASM) features. First, we've added a new feature to create a persistent write log. Battery-backed NVRAM is good for about 72 hours. With the new persistent write log feature, NVRAM contents are destaged to flash memory in the event of a dirty shutdown, protecting the write log indefinitely. On the next boot, the resulting NVLOG is simply replayed to restore the system to a consistent state.
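The destage-and-replay idea can be sketched in a few lines. This is a minimal conceptual model, not NetApp's implementation; the class and method names are invented for illustration.

```python
# Conceptual sketch of a persistent write log: journal writes in NVRAM,
# destage the journal to flash on a dirty shutdown, and replay it on
# the next boot to restore a consistent state.

class Storage:
    """Stand-in for the on-disk state the journal protects."""
    def __init__(self):
        self.state = {}

    def apply(self, op):
        key, value = op
        self.state[key] = value

class WriteJournal:
    def __init__(self):
        self.entries = []   # in-memory NVRAM journal (the NVLOG)
        self.flash = None   # stand-in for the flash destage area

    def log(self, op):
        self.entries.append(op)

    def dirty_shutdown(self):
        # Destage NVRAM contents to flash so the log survives
        # indefinitely, not just the battery's ~72-hour window.
        self.flash = list(self.entries)

    def boot(self, storage):
        if self.flash:               # a saved NVLOG exists: replay it
            for op in self.flash:
                storage.apply(op)
            self.flash = None
        self.entries = []

# Usage: writes are journaled, power is lost, and replay on boot
# restores every acknowledged write.
journal = WriteJournal()
journal.log(("block_7", "data_a"))
journal.log(("block_9", "data_b"))
journal.dirty_shutdown()             # power failure with dirty data

disk = Storage()
journal.boot(disk)                   # replay the NVLOG
print(disk.state)
```

The key property is that once an operation is in the journal, a crash at any later point is recoverable by replay.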
We've also added a new service processor to the FAS6200 series that goes well beyond the capabilities of the remote LAN module (RLM) used in previous models. The service processor remains operational even when the rest of a storage system is down. It provides all of the features of the RLM, such as remote power cycle, call home notification of down system, and always-on access for troubleshooting. The service processor also adds new features that go beyond the capabilities of the RLM, including:
From an engineering standpoint, the FAS6200 resiliency feature I'm most excited about is the ability to go into the processor and read out its internal state even when the system is not running. The combination of core dumps and internal processor state gives us the detailed forensics to understand exactly what was happening when a problem occurred so we can correct it. As we've done with previous features, we'll drive this capability down to the midrange and low end over time.
It's possible that I'm biased, but I believe that the FAS6200 series is a new milestone for NetApp. The platform boosts performance up to 3.6X over the FAS6000. Plus it provides dramatically more memory, more I/O bandwidth, and more expansion capability to simultaneously drive topline performance and important system tasks such as data protection, deduplication, and compression—all without sacrificing compatibility with the rest of the NetApp product line. We've added new features for even greater hardware reliability, and the platform is future-ready for Data ONTAP 8 running in Cluster-Mode when you're ready for that transition.
Got opinions about the FAS6200?
Ask questions, exchange ideas, and share your thoughts online in NetApp Communities.