CPU in FAS32xx

mikidutzaa
11,918 Views

Hello,

Does anyone know which CPUs are used in the new 32xx systems, and at what frequencies? I saw on TheRegister.co.uk that for the 62xx the CPUs are Intel 55xx and 56xx...


vmsjaak13
9,559 Views

If that is correct about the 6200 series, then I would guess Intel 5300 or 5400 series CPUs.

You can find the frequencies and number of cores in the FAS/V32xx docs on the Field Portal.

Regards,

Niek

shane_bradley
9,559 Views

FAS6280s have dual X5670s.

Going by the speeds and core counts of the 6240/6210, it's hard to guess what they are.

mikidutzaa
9,559 Views

I am not a partner, so I don't have access to the Field Portal. Can you provide the information, or is it a secret?

I find it hard to understand why it might be considered a secret; it will become public sooner or later anyway, once the first 32xx ships :)...

radek_kubka
9,559 Views

Does it really matter that much?

I actually don't know the answer, but I'm not too bothered to find out. At the end of the day, NetApp states '32x0 is faster than 31x0' (which arguably is true), and that's good enough for me.

Regards,

Radek

shane_bradley
9,559 Views

I don't believe they publish anything other than the number of cores and the speed.

I don't know if it's secret; I couldn't imagine it would be. From what I've heard, they range from 2.3 to 3GHz.

roman_verysell
11,885 Views

This information has never been listed officially, but you can find some of these details on the SPEC SFS pages for tested systems.

For example: the 3210 has a single dual-core 2.3GHz Intel Xeon(tm) E5220, and the 3270 has dual dual-core 3.0GHz Intel Xeon(tm) E5240s.

But in real life you don't really have a reason to know it ;o)

mikidutzaa
9,935 Views

Thank you, the SPEC SFS results are very interesting. It's a pity they chose to use three-year-old CPUs in their shiny new controllers just to make them slower than the 62xx.

Of course it is important for system design; in my opinion NetApp filers are quite CPU/memory bound. This answers many questions:

1. What should we buy, a 3270 or a 6210? The 6210 should be significantly faster: it has 6 memory channels compared to 2 on the 3270, an InfiniBand cluster interconnect, a newer CPU, etc.

2. What is the difference in processing power between the 3210, 3240 and 3270?

3. It explains why the 32xx is actually not that much faster than the 31xx in benchmarks. The 3210 might even be slower than the 3140 (64,292 with 144 disks versus 53,546 with 96 disks for the 3140; see the rough per-disk calculation below).
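As a rough sanity check of that last point, here is a minimal back-of-the-envelope sketch. It assumes the two figures quoted above are SPEC SFS throughput in ops/sec and that the disk counts are the data disks used in each submission; the model names and numbers are simply taken from this post, not from any official spec sheet.

    # Back-of-the-envelope comparison of the SPEC SFS figures quoted above.
    # Assumption: the numbers are ops/sec and total data disks, as stated in the post.
    results = {
        "FAS3210": {"ops_per_sec": 64292, "disks": 144},
        "FAS3140": {"ops_per_sec": 53546, "disks": 96},
    }

    for model, r in results.items():
        per_disk = r["ops_per_sec"] / r["disks"]
        print(f"{model}: {r['ops_per_sec']} ops/sec over {r['disks']} disks "
              f"= {per_disk:.0f} ops/sec per disk")

On a per-disk basis the 3140 submission comes out ahead (roughly 558 vs. 446 ops/sec per disk), which is the point being made about the 3210 needing more spindles to reach its number.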

brendanheading
9,935 Views

mikidutzaa wrote:

Thank you, the SPEC SFS results are very interesting. It's a pity they chose to use three-year-old CPUs in their shiny new controllers just to make them slower than the 62xx.

I think you've touched on what will be the major issue in NetApp's medium/long-term future: the idea that you can sell a box which is basically a modified IBM PC compatible in a rackmount chassis for ten times the cost of an equivalent rackmount server. NA's secret sauce is in the software and the support.

It would be so much better if NA sold their hardware closer to cost and then made their money on licensing, support and subscriptions. I'd rather have the choice of paying for more functionality by purchasing licenses than have to get the up-front decision right when purchasing the filer itself. NA need to watch themselves; ZFS is a threat (or would be if Oracle/Sun knew what to do with it!) and btrfs is just around the corner. We are not far from the point where you have more people like Nexenta building boxes that can substantially replicate NA's offering.

radek_kubka
9,559 Views

brendanheading wrote:

It would be so much better if NA sold their hardware closer to cost and then made their money on licensing, support and subscriptions.

It's a 100-year-old argument.

The truth is, utilising industry-standard components simplifies refreshing the product line and, to be completely honest, vastly reduces cost.

EMC does exactly the same in their mid-range, and so do many other storage vendors (though not all of them, and not across all of their product lines).

Regards,

Radek

brendanheading
8,740 Views

Radek, we agree on those points. I am not arguing with the idea of using standard parts. This is a case where brute force wins. I doubt that a product could be engineered for a reasonable cost with custom ASICs that would be able to match the throughput provided by the core density and performance of modern Xeons.

However, the situation where NA can charge about ten times the price of an equivalently specced rackmount x86 server is not likely to be one which can be sustained in the long term. I think it's more of a problem at the low end. There is no sensible reason for the price difference between the 2020 and the 2040, for example. I'd rather get the better hardware and then pay an extra licensing fee to be able to attach more drives or more SAN hosts when I need them, rather than be forced to predict my growth over the next few years.

mikidutzaa
9,117 Views

I completely agree; they are too greedy.

Competition comes not only from Nexenta but also from local storage: SSDs are becoming common and people expect fast access to shared storage. How can you charge over 100k for a storage system that is slower than a local disk drive, even for only one user? Having fast CPU/memory is critical; in my opinion they should have limited the number of cores rather than limiting single-threaded performance by using three-year-old CPUs.

shane_bradley
9,117 Views

I don't understand where this 1990s mentality about storage is coming from.

The game has moved so much further than speeds and feeds; there is so much more to it than how big your processor is or how many SSDs you have.

I get in trouble with my sales guys when I tell people: if you want dumb, cheap storage, don't buy NetApp. Buy an EqualLogic or an Eternus. IMO NetApp's value comes from its software and integration, not from how big a CPU it's got. This is a tactic the local EMC guys had been using around here; the only point of difference they had was SSD (back then), so they tried to turn every sales engagement into a speeds-and-feeds debate. The NetApp value proposition is so much more than that.

The new hardware platforms are an awesome step in the right direction, as is 8.0.1. The FAS6280 is going to be one of the fastest (if not THE fastest) arrays around. They're awesome chunks of hardware, with software to boot.

brendanheading
9,117 Views

Shane, I agree about NetApp's value being the software. I think the highly orthogonal way the product is put together, and the way clustering and mirroring "just work", with high levels of flexibility, is fantastic, and I don't think the other vendors can touch it, not today. Then you have the simple nature of the product line, the fact that the same OS and concepts apply across the range, etc.

I also agree that SSDs are a bit of a red herring for all but the most high-end cases. I am sympathetic to the NetApp line up until now, which is that a high-density, array-local cache can meet most performance requirements without requiring SSDs.

But you can't argue that the CPU is not an important factor. That's why NA sell three different classes of filer (and several different grades within each), and this is clearly important to the way they market filers to different segments. And obviously, NA are going after the low-cost vendors by providing SATA drives and basic kit like the FAS20xx. All I am saying is that a business with low-end storage requirements might prefer the opportunity to cherry-pick the occasional high-end feature. If the capital cost of the system were loaded further towards the licenses and away from the purchase price of the hardware, that would be more feasible. I.e., instead of marketing the FAS2020/2040/2050, why not just market the 2040 and then charge people per terabyte stored, or per SAN host, etc.?

At the end of the day, though, it is for NA to decide how they run their business, and the current setup obviously works well for them, so as a customer you can't really complain much; you pay your money and take your choice. As I said, I think the product is fantastic and I'd recommend NA over the convoluted and conflicted product lines offered by HP/EMC/etc. any day. I'm looking forward to NA eclipsing EMC as the #1 storage vendor, hopefully some time within the next 24 months.

shane_bradley
9,117 Views

It depends on what end of the market you are talking about.

For instance, I've never managed to get a filer to be CPU bound; I've had a FAS6080 running 7.3.x doing 1.6GB/s (straight cache reads) with two CPUs idle.

As for "per op/CPU/whatever" licensing, I'm not that fond of it.

I like the whole no-surprises licensing model. Buying a FAS and then having to license it per CPU/op etc. is a scary idea; I think with that you run the risk of becoming one of those "convoluted and conflicted product lines".

ekashpureff
8,740 Views

All -

All of this talk about implementing commodity hardware with ZFS or otherwise on this forum is crap.

We've taken a look at provisioning our lab environments on ZFS and standard hardware at Labshots/Kashpureff Inc and already found that a WAFL/NVRAM architecture beats it out any day.

Wish I could post our performance test, but I can't.

Performance is a broader subject than CPU ratings. What do you do with that CPU power? How much bandwidth can you drive on the front end and the back end? ONTAP has always been exceedingly good at doing what it does (serving up storage) using commodity hardware, and it continues to be highly optimized. (Have you read 'How to Castrate a Bull'?) Dave first tested this OS on commodity PCs.

It's a software company, not a hardware company. The controllers rock, but WAFL, snapshots and nowadays OnCommand - that's the value proposition here.

I'll invite any of the detractors here to take the time that we have to do implementation performance tests and then go tell their bosses that they should implement a cheap solution from one of the vendors trying to sell $9K JBOD storage on ZFS or another solution. Bad news: it will be an RGE - a Resume Generating Event.

Just my two bits. (Sorry for the rampage, but I'm tired of the NetApp bashing on this thread!)


I hope this response has been helpful to you.

: )

At your service,


Eugene E. Kashpureff
ekashp@kashpureff.org
Fastlane NetApp Instructor and Independent Consultant
http://www.fastlaneus.com/ http://www.linkedin.com/in/eugenekashpureff

(P.S. I appreciate points for helpful or correct answers.)

pascalduk
8,172 Views

Eugene Kashpureff wrote:

What do you do with that CPU power?

All the NetApp hardware upgrades I did in the last 7 years were because of being CPU bound on the filer. And no, they were not low-end filers.

But that was also because NetApp did a bad job with multithreading back then. Already with ONTAP 7.3 I see improvements, and I am really looking forward to installing ONTAP 8.0.1.

brendanheading
8,172 Views

Eugene, it's not NetApp-bashing. No company is perfect for everyone, and I understand the pros and cons. We bought a filer recently and I would have no hesitation making the same decision again. From that point of view, money talks.

There's really no point in discussing your lab benchmarks if you can't reveal them, or at least reveal the specs you tested against. I'd expect ZFS and so on to be slower than WAFL/ONTAP. That problem is easily solved, as one can easily build a ZFS box with 24 cores and 64GB RAM. That costs about $20,000, which I'm guessing is not far from the list price of a FAS2040 with no disks, a single CPU core, 4GB RAM and some basic block access licensed. Now you might well argue that's not fair, as you're not comparing like with like. But the customer will be looking at the total cost.

I'd still take a NetApp over a ZFS box with that spec any day, because at work we aren't really a UNIX shop and we don't have a large sysadmin team who could babysit a Solaris box. NetApp gives us end-to-end support covering all aspects of the hardware and software, which is fully integrated (I love AutoSupport!) and more than pays for the price difference, and you still can't do clustered ZFS yet (although they do have RAID-Z2/Z3). But if we were bigger, and had a few Solaris storage-savvy guys to hand, then it would be a lot harder to justify.

My other point here is that WAFL and snapshots are under attack. ZFS's low-level snapshot implementation doesn't have WAFL's limitations (255 snapshots per FlexVol, for example), and as I said earlier, btrfs is in development, with Oracle doing another equivalent. I still don't really understand what OnCommand is, other than a rebranding of NA's (excellent) management toolset. I'd agree that the management tools are a key differentiator here, but it won't take long for competitors to come up with equivalents.

mheimberg
8,448 Views

brendanheading wrote:

That problem is easily solved, as one can easily build a ZFS box with 24 cores and 64GB RAM. That costs about $20,000, which I'm guessing is not far from the list price of a FAS2040 with no disks, a single CPU core, 4GB RAM and some basic block access licensed. Now you might well argue that's not fair, as you're not comparing like with like. But the customer will be looking at the total cost.

So you have built that super-box for $20,000 - now you must pay your sales guys and your support organisation, handle your stock for RMAs, develop and maintain the whole software stack, etc. And you really think this will be cheaper in the end?

But hey: we all know cattle cycles, and luckily we are not at the end of one, as happened to desktops, servers, flat screens, network equipment... they all became commodities. So everyone in the storage market is setting prices just as high as possible, giving discounts like mad when the customer is important or strong enough in negotiation... that's business, and you all know the game!

Mark

brendanheading
8,448 Views

Mark, I got that $20,000 price by going to the dell.com website and customizing their fastest 4-socket rackmount server.

Whether or not it will be cheaper in the end is hard to say. For me it wouldn't be, because I work for a small organization with 2-3 sysadmins, so it makes no sense to have them customize and build a complex Solaris/ZFS setup. The key benefit of the NetApp product is the end-to-endness of it: one provider, one support contact number, and no quibbling over what third-party stuff has been upgraded inside the box. A bigger company, especially one which is highly IT-focused and can effectively afford to operate a 10-man sysadmin team, would be a different case.

brendanheading
8,549 Views

Shane, I can definitely see the flip side. Once you buy your filer and the licenses, you have full use of it, including upgrades... and we are seeing some nice bonuses these days too, such as sanitization and OSSV clients becoming free for SV licensees. I agree that this does keep things simple.

I assume it must be possible to get the 6080s to be CPU bound, otherwise NA would have tremendous difficulty getting people to upgrade to the bigger and flashier filers we have today. I guess if you have a bunch of SnapMirror transfers going (with compression), especially in synchronous mode, combined with dedupe runs (and now compression runs), it would be possible to max it out with the right amount of storage.

What I am getting at is this. Any time somebody comes on the forum and asks a question about the FAS2020, the thread inevitably ends up concluding that the system is too slow for what the guy is trying to do. So although he has the right licenses and so on, he can't do what he was probably told was possible by his NA reseller. That's not good business.
