ONTAP Hardware

CPU in FAS32xx



Does anyone know what CPUs are used in the new 32xx systems, and at what frequencies? I saw on TheRegister.co.uk that the 62xx uses Intel 55xx and 56xx CPUs...



This information has never been listed officially, but you can find some such details on the SPEC SFS pages for tested systems.

For example: the 3210 has a single dual-core 2.3GHz Intel Xeon(tm) Processor E5220, the 3270 dual dual-core 3.0GHz Intel Xeon(tm) Processor E5240s.

But in real life you have no reason to know it ;o)




To set the record straight, I occasionally receive this question from partners.  Last I checked, the 3200 Series CPUs include:

  • FAS3210 = 1-socket, 2.33GHz Intel Wolfdale, 2-core/socket
  • FAS3240 = 1-socket, 2.33GHz Intel Harpertown, 4-core/socket
  • FAS3270 = 2-socket, 3.0GHz Intel Wolfdale, 2-core/socket

Other than the curiosity factor, there's much more to NetApp than speeds 'n' feeds.

Answer your question?


There is no quad-core part in the Intel Wolfdale family (5200 series), so I suppose the FAS3240 uses a Clovertown (5300 series) or Harpertown (5400 series)?

Best Regards,


Typo on my part (you're correct!). It's a Harpertown (as Tim also noted below).

Need more coffee next time!




What I can tell you:

  • FAS3210 - 2 CPUs, 64-bit dual-core 2.3 GHz (4 cores), 1 GB NVRAM, 8 GB system memory
  • FAS3240 - 2 CPUs, 64-bit quad-core 2.3 GHz (8 cores), 2 GB NVRAM, 16 GB system memory
  • FAS3270 - 4 CPUs, 64-bit dual-core 3.0 GHz (8 cores), 4 GB NVRAM, 32 GB system memory




Great discussion, folks! Here are a couple of facts to further this discussion:

Our latest FAS/V3200 systems use the Intel Xeon CPU family, leveraging 64-bit multi-core processors. The reason we do not highlight the CPU model used is that overall storage system performance, scalability, and expandability depend not only on the processor type, but on the overall hardware system architecture and the tight integration and tuning of ONTAP. As noted in earlier discussions, NetApp invests a tremendous amount of R&D in the overall storage system's performance, function, and reliability in order to satisfy our enterprise and MSE customers' storage requirements. Our goal is to continuously provide our customers with the best overall storage solution, tightly integrated with their applications. These solutions need to be delivered at an optimized price point while carrying forward all existing features and functionality.

Sandra Wu


Product Marketing





Thank you, the SPEC SFS results are very interesting. It's a pity they chose to use three-year-old CPUs in their shiny new controllers, making them slower compared to the 62xx.

Of course it is important for system design; in my opinion NetApp filers are quite CPU/memory bound. This answers many questions:

1. What should we buy, 3270 or 6210? The 6210 should be significantly faster: it has 6 memory channels compared to 2 on the 3270, an InfiniBand cluster interconnect, newer CPUs, etc.

2. What is the difference between the 3210, 3240 and 3270 in processing power?

3. It explains why the 32xx is actually not that much faster than the 31xx in benchmarks. In fact the 3210 might be slower than the 3140 (64,292 SPECsfs ops/s with 144 disks versus 53,546 with 96 disks for the 3140).
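A quick back-of-the-envelope check of point 3, using only the SPECsfs numbers quoted above (per-spindle throughput is a rough comparison of my own, not an official SPEC metric):

```python
# SPECsfs results quoted above: throughput (ops/s) and disk counts per system.
results = {
    "FAS3210": {"ops": 64292, "disks": 144},
    "FAS3140": {"ops": 53546, "disks": 96},
}

# Normalize by spindle count to compare efficiency per disk.
for model, r in results.items():
    per_disk = r["ops"] / r["disks"]
    print(f"{model}: {per_disk:.0f} ops/s per disk")
```

The 3140 comes out around 558 ops/s per disk versus roughly 446 for the 3210, which is consistent with the claim that the newer box is not faster per spindle.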


Slow CPUs have been the NetApp trademark for years. They probably come cheap, and NetApp can charge the customer 1000% of the original price.

-- This message will soon be deleted


For sure! That's why the previous 6000 series included an Opteron 8xx series CPU, known for being "dirt cheap" and "slow" at the time. If you're going to make claims, perhaps try ones that aren't both trollish and completely inaccurate.


Thank you, the SPEC SFS results are very interesting. It's a pity they chose to use three-year-old CPUs in their shiny new controllers, making them slower compared to the 62xx.

I think you've touched on what will be the major issue in Netapp's medium/long term future, the idea that you can sell a box which is basically a modified IBM PC compatible in a rackmount box for ten times the cost of an equivalent rackmount server. NA's secret sauce is in the software and the support.

It would be so much better if NA sold their hardware closer to cost and then made their money on licensing, support and subscriptions. I'd rather have the choice of paying for more functionality by purchasing licenses than having to get the up-front decision right when purchasing the filer itself. NA need to watch themselves; ZFS is a threat (or would be if Oracle/Sun knew what to do with it!) and BTRFS is just around the corner. We are not far away from the point where you have more people like Nexenta building boxes that can substantially replicate NA's offering.


I completely agree, they are too greedy.

Competition comes not only from Nexenta but also from local storage; SSDs are becoming common and people expect fast access to shared storage. How can you charge over 100k for a storage system that is slower than a local disk drive, even for just one user? Having fast CPU/memory is critical; in my opinion they should have limited the number of cores, not crippled single-threaded performance by using three-year-old CPUs.


I don't understand where this 1990s mentality about storage is coming from.

The game has moved so much further than speeds and feeds; there is so much more to it than how big your processor is or how many SSDs you have.

I get in trouble with my sales guys when I tell people: if you want dumb, cheap storage, don't buy NetApp. Buy an EqualLogic or an Eternus. IMO NetApp's value comes from its software and integration, not how big a CPU it's got. This is a tactic the local EMC guys had been using around here; the only point of difference they had was SSD (back then), so they tried to turn every sales engagement into a speeds-and-feeds debate. The NetApp value proposition is so much more than that.

The new hardware platforms are an awesome step in the right direction, as is 8.0.1. The FAS6280 is going to be one of (if not THE) fastest arrays around. They're awesome chunks of hardware, with software to boot.


NetApp rocks, that's not the point. My point is that they put a slow CPU in their midrange. Now, if 6210 pricing is reasonable it's really not an issue. But if not...

For instance, in a VDI setup most of the I/O will be served from cache, especially during boot storms, so the workload will be CPU/memory bound and you have to go to the 62xx series to get good performance for a few thousand users.

Shane, I agree about NetApp's value being the software. I think the highly orthogonal way the product fits together, and the way clustering and mirroring "just work" with high levels of flexibility, is fantastic, and I don't think the other vendors can touch it, not today. Then you have the simple nature of the product line, the fact that the same OS and concepts apply across the range, etc.

I also agree that SSDs are a bit of a red herring for all but the most high end cases. I am sympathetic to the NetApp line up until now which is that a high density, array-local cache can meet most performance requirements for people without requiring SSDs.

But you can't argue that the CPU is not an important factor. That's why NA sell three different classes of filer (and several different grades within each), and this is clearly important to the way they market filers to different segments. And obviously, NA are going after the low-cost vendors by providing SATA drives and basic kit like the FAS20xx. All I am saying is that a business with low-end storage requirements might prefer the opportunity to cherry-pick the occasional high-end feature. If the capital cost of the system were loaded further towards the licenses and away from the purchase price of the hardware, that would be more feasible; i.e., instead of marketing the FAS2020/2040/2050, why not just market the 2040 and then charge people per terabyte stored, or per SAN host, etc.?

At the end of the day, though, it is for NA to decide how they run their business, and the current setup obviously works well for them, so as a customer you can't really complain much; you pay your money and take your choice. As I said, I think the product is fantastic and I'd recommend NA over the convoluted and conflicted product lines offered by HP/EMC/etc. any day. I'm looking forward to NA eclipsing EMC as the #1 storage vendor, hopefully some time within the next 24 months.


brendanheading wrote:

But you can't argue that the CPU is not an important factor.

The question is what your software/OS does with the resources.

Even though I find the FAS2020 crap, I am always amazed at what they get out of that lousy Celeron CPU!

At Insight there was an interesting session where they showed how many servers you need to fill the CPUs of a 6280 - I don't have the numbers in mind; I'm waiting for the presentation to download - but to sum up: ONTAP is obviously getting much more out of the resources than even the Linux boxes used for comparison.

OS-software engineers at NetApp: well done!



Agreed, it is very impressive. This is a benefit which flows from having a custom, purpose-built OS rather than Linux or Solaris, which are fundamentally general-purpose. I imagine that ONTAP does pretty much everything in kernel space and is able to take a lot of liberties with MMU page sizes, CPU caching policies, etc.

But yes, ONTAP certainly does rock.

It depends on what end of the market you are talking about.

For instance, I've never managed to get a filer to be CPU bound; I've had a FAS6080 running 7.3.x doing 1.6GB/s (straight cache reads) with two CPUs idle.

As for "per op/CPU/whatever" licensing, I'm not that fond of it.

I like the whole no-surprises licensing model. Buying a FAS and then having to license it per CPU/op etc. is a scary idea; I think with that you run the risk of becoming one of those "convoluted and conflicted product lines".

Shane, I can definitely see the flip side. Once you buy your filer and the licenses, you have the full use of it including upgrades .. and we are seeing some nice bonuses these days too, such as sanitization and OSSV clients becoming free to SV licensors. I agree that this does keep things simple.

I assume it must be possible to get the 6080s to be CPU bound, otherwise NA would have tremendous difficulty getting people to upgrade to the bigger and flashier filers we have today. I guess if you have a bunch of SnapMirror transfers going (with compression), especially in synchronous mode, combined with dedupe runs (and now compression runs), it would be possible to max it out with the right amount of storage.

What I am getting at is this. Any time somebody comes on the forum and asks a question about the FAS2020, it inevitably ends up concluding that the system is too slow for what the guy is trying to do. So although he has the right licenses and so on, he can't do what he was probably told was possible by his NA reseller. That's not good business.
