Eugene Kashpureff wrote:What do you do with that CPU power?
All the NetApp hardware upgrades I did over the last 7 years were because of being CPU-bound on the filer. And no, they were not low-end filers.
But that was also because NetApp did a bad job with multithreading back then. Already with ONTAP 7.3 I see improvements, and I am really looking forward to installing ONTAP 8.0.1.
Shane, I can definitely see the flip side. Once you buy your filer and the licenses, you have the full use of it, including upgrades, and we are seeing some nice bonuses these days too, such as sanitization and OSSV clients becoming free to SnapVault licensees. I agree that this does keep things simple.
I assume it must be possible to get the 6080s to be CPU-bound; otherwise NA would have tremendous difficulty getting people to upgrade to the bigger and flashier filers we have today. I guess if you have a bunch of SnapMirror transfers going (with compression), especially in synchronous mode, combined with dedupe runs (and now compression runs), it would be possible to max it out with the right amount of storage.
What I am getting at is this. Any time somebody comes on the forum and asks a question about the FAS2020, the thread inevitably ends up concluding that the system is too slow for what the guy is trying to do. So although he has the right licenses and so on, he can't do what he was probably told was possible by his NA reseller. That's not good business.
Eugene, it's not Netapp-bashing. No company is perfect for everyone and I understand the pros and cons. We bought a filer recently and I would have no hesitation making the same decision again. From that point of view, money talks.
There's really no point in discussing your lab benchmarks if you can't reveal them, or at least reveal the specs you tested against. I'd expect ZFS and so on to be slower than WAFL/ONTAP. That problem is easily solved, as one can easily build a ZFS box with 24 cores and 64GB RAM. That costs about $20,000, which I'm guessing is not far away from the list price of a FAS2040 with no disks, a single CPU core, 4GB RAM, and some basic block access licensed. Now you might well argue that's not fair, as you're not comparing like with like. But the customer will be looking at the total cost.
I'd still take a NetApp over a ZFS box with that spec any day, because at work we aren't really a UNIX shop and we don't have a large sysadmin team who could babysit a Solaris box. NetApp gives us end-to-end support covering all aspects of the hardware and software, which is fully integrated (I love AutoSupport!), and that more than pays for the price difference. And you still can't do clustered ZFS yet (although they do have RAID-Z2/Z3). But if we were bigger, and had a few storage-savvy Solaris guys to hand, then it would be a lot harder to justify.
My other point here is that WAFL and snapshots are under attack. ZFS's low-level snapshot implementation doesn't have WAFL's limitations (255 snapshots per FlexVol, for example), and as I said earlier, btrfs is in development, with Oracle doing another equivalent. I still don't really understand what OnCommand is, other than a rebranding of NA's (excellent) management toolset. I'd agree that the management tools are a key differentiator here, but it won't take long for competitors to come up with equivalents.
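For anyone unfamiliar with how lightweight ZFS snapshots are to operate, here is an illustrative CLI sketch. The pool and dataset names (`tank/home`) are examples for illustration only, and these commands assume a machine with an existing ZFS pool:

```shell
# Create a named snapshot of a dataset (near-instant, copy-on-write)
zfs snapshot tank/home@before-upgrade

# Snapshots are first-class datasets and can be listed directly
zfs list -t snapshot

# Roll the live filesystem back to the snapshot
zfs rollback tank/home@before-upgrade

# Snapshots can be destroyed individually when no longer needed
zfs destroy tank/home@before-upgrade
```

Because each snapshot is just another dataset, there is no fixed per-volume snapshot cap of the kind WAFL imposes.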
NetApp rocks, that's not the point. My point is that they put a slow CPU in their midrange. Now if 6210 pricing is reasonable it's really not an issue. But if not...
For instance, in a VDI setup, most of the I/O will be served from cache, especially during boot storms, so the controller becomes CPU/memory-bound, and you have to go to the 62xx series to get good performance for a few thousand users.
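A quick back-of-envelope sketch of why a boot storm shifts the bottleneck from disks to the controller's CPU and memory. All figures here (user count, per-desktop IOPS, cache hit ratio) are illustrative assumptions, not NetApp measurements:

```python
# Back-of-envelope: splitting boot-storm read load into cache hits
# (controller CPU/memory work) and disk reads (spindle work).
# All input figures are illustrative assumptions.

def boot_storm_load(users, iops_per_booting_desktop, cache_hit_ratio):
    """Return (total, cache-served, disk-served) IOPS for a boot storm."""
    total_iops = users * iops_per_booting_desktop
    cache_iops = total_iops * cache_hit_ratio
    disk_iops = total_iops - cache_iops
    return total_iops, cache_iops, disk_iops

# Assume 2000 desktops booting at ~50 IOPS each, 90% served from cache:
total, cached, disk = boot_storm_load(2000, 50, 0.9)
print(total, cached, disk)  # 100000 90000.0 10000.0
```

With 90% of 100,000 IOPS landing in cache, only 10,000 IOPS ever reach the disks; the remaining 90,000 are pure controller work, which is why the CPU, not the spindle count, becomes the limit.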
To set the record straight, I occasionally receive this question from partners. Last I checked, the 3200 Series CPUs include:
- FAS3210 = 1-socket, 2.33GHz Intel Wolfdale, 2-core/socket
- FAS3240 = 1-socket, 2.33GHz Intel Harpertown, 4-core/socket
- FAS3270 = 2-socket, 3.0GHz Intel Wolfdale, 2-core/socket
Other than the curiosity factor, there's much more to NetApp than speeds 'n feeds.
Answer your question?
But you can't argue that the CPU is not an important factor.
The question is what your software/OS does with the resources.
Even though I find the FAS2020 crap, I am always amazed at what they get out of that lousy Celeron CPU!
At Insight there was an interesting session where they showed how many servers you need to saturate the CPUs of a 6280. I don't have the numbers in mind (I'm waiting for the presentation to download), but to sum up: ONTAP is obviously getting much more out of the resources than even the Linux they used for comparison.
OS-software engineers at NetApp: well done!
brendanheading wrote:That problem is easily solved, as one can easily build a ZFS box with 24 cores and 64GB RAM. That costs about $20,000, which I'm guessing is not far away from the list price of a FAS2040 with no disks, a single CPU core, 4GB RAM, and some basic block access licensed. Now you might well argue that's not fair, as you're not comparing like with like. But the customer will be looking at the total cost.
So you have built that super-box for $20,000. Now you must pay your sales guys, fund your support organisation, handle your stock for RMAs, develop and maintain the whole software stack, etc. ...and you really think this will be cheaper in the end?
But hey: we all know commodity cycles, and luckily we are not at the end of this one, as happened with desktops, servers, flat screens, network equipment... they all became commodities. So everyone in the storage market sets prices as high as possible, giving discounts like mad when the customer is important or strong enough in negotiation. That's business, and you all know the game!
Mark, I got that $20,000 price by going to the dell.com website and customizing their fastest 4-socket rackmount server.
Whether or not it will be cheaper in the end is hard to say. For me it wouldn't be, because I work for a small organization with 2-3 sysadmins, so it makes no sense to have them customize and build a complex Solaris/ZFS setup. The key benefit of the NetApp product is the end-to-endness of it: one provider, one support contact number, and no quibbling over what third-party stuff has been upgraded inside the box. A bigger company, especially a highly IT-focused one which can effectively afford to operate a 10-man sysadmin team, would be a different case.
Agreed, it is very impressive. This is a benefit which flows from having a custom, purpose-built OS rather than Linux or Solaris, which are fundamentally general-purpose. I imagine that ONTAP does pretty much everything in kernel space and is able to take a lot of liberties with MMU page sizes, CPU caching policies, etc.
But yes, ONTAP certainly does rock.
For sure! That's why the previous 6000 series included an Opteron 8xx series CPU, known for being "dirt cheap" and "slow" at the time. If you're going to make claims, perhaps try ones that aren't both trollish and completely inaccurate.
What I can tell you:
FAS3210 - 2 CPU 64-bit dual-core 2.3 GHz - 4 cores - 1 GB nvram - 8 GB sys. mem.
FAS3240 - 2 CPU 64-bit quad-core 2.3 GHz - 8 cores - 2 GB nvram - 16 GB sys. mem.
FAS3270 - 4 CPU 64-bit dual-core 3.0 GHz - 8 cores - 4 GB nvram - 32 GB sys. mem.
Great discussion, folks! Here are a couple of facts to further this along:
Our latest FAS/V3200 systems use the Intel Xeon CPU family, leveraging 64-bit multi-core processors. The reason we do not highlight the CPU model used is that overall storage system performance, scalability, and expandability depend not only on the processor type, but on the overall hardware system architecture and the tight integration and tuning of ONTAP. Yes, as noted earlier in this discussion, NetApp spends a tremendous amount of R&D investment on the overall storage system's performance, function, and reliability in order to satisfy our enterprise and MSE customers' storage requirements. Our goal is to continuously provide our customers with the best overall storage solution, tightly integrated with their applications. These solutions need to be delivered at an optimized price point, while carrying forward all existing features and functionality.