Re: Netapp FAS vs EMC VNX

Hi Niek,

* FAST cache can have a big performance impact (moving data around)

Just to be pedantic - I think this bullet point should say "FAST can have a big performance impact".

There are two different things with very confusing names:

- FAST: automated sub-LUN tiering, which moves data around

- FAST Cache: in essence mimics NetApp Flash Cache / PAM II, with no data movement, just dynamic caching



Re: Netapp FAS vs EMC VNX

Hello Radek,

you're correct !

Thanks for the clarification.



Re: Netapp FAS vs EMC VNX

Yup Agreed


Re: Netapp FAS vs EMC VNX

I agree with niek.

Look at the VNX: it is essentially a Celerra box bolted on top of the functionality of CLARiiON and Avamar. But they did not blend it well enough, and there is too much separate software you need to learn:

SnapSure for NAS

SnapView for Block

Replicator for iSCSI

whereas with NetApp one simple snapshot covers it all. More technology may be involved when it comes to replication and database-consistency-aware software. If you look at the videos published on YouTube, it might all look easy, but when the actual setup and performance tuning come in, you notice that it is a nightmare.

NetApp technology is easier to set up, and the learning cycle is shorter compared to VNX.

Re: Netapp FAS vs EMC VNX

Hi there,

I'm regularly involved in pre-sales, and 9 out of 10 times the customer chooses a NetApp solution over an existing EMC solution - and that one customer who chooses the EMC solution usually does so because he was forced to by his boss. But let's stop the political stuff and go to the facts:

The VNX series is nothing but a rebranded CLARiiON/Celerra NAS-head combination which EMC has been selling for ages - old wine in new bottles. You still have to work on different layers of operating systems if the Navisphere GUI fails to give you the specific wizard-driven task you need.

NetApp has transparent cluster failover in a MetroCluster environment; VNX doesn't. We have post-process dedupe over all primary data as well as optional inline compression over all primary data. We have proper thin provisioning as well as up to 255 snapshots, even integrated into the Windows Previous Versions client. We don't hassle around with Linux or Windows CE or whatever; we "talk" CIFS natively with proper ACL integration as well as NFS v1-4, and we even support both at once, meaning we map Windows to UNIX users and vice versa.

About the QNAP/D-Link NAS stuff: you really don't want to go down that road. We are talking about NetApp ENTERPRISE storage for a reason. We talk about 520-byte-formatted hard disks with 8+1 sector checksumming; we talk about constant scrubbing, self-aware fast RAID rebuilds, and a 24x7 4h service level - nothing of which those small NAS boxes can provide. And have 10+ users working on a 3-disk NAS, try to reach 50 MB/s+ file transfers using SMB2, and have up to 255 snaps per volume. If they are not aware of these security and reliability features, let them have their NAS; one day it will crash on them and all their data will be lost.

If you are not happy with IBM, is there any reason why you wouldn't buy directly from NetApp and its partners to get a proper FAS3240AE instead of an N series?



Re: Netapp FAS vs EMC VNX

> NetApp has transparent cluster failover in a MetroCluster environment.

Yes, but I had a bad experience with a failover that didn't work during a power failure - in that case due to a mixture of human factors and incomplete signaling on the main distribution power board (fixed after the incident).

We bought MetroCluster to handle exactly that kind of situation, just to find out the hard way that when we completely lost power to one of the datacenters - the one scenario we missed checking when we bought the equipment - MetroCluster didn't kick in!

Instead, that half simply stopped working, only to do a failover (!) the moment we got power back. So I had to do another failover to get things back to normal. And I seriously dislike the failover procedure as it is today, as I can't check anything before ONTAP stops the service on the working (redundant) node.

I had expected the other node to kick in and take over when we lost power; instead, the complete virtual system stopped for 2 hours. Not good PR for either IBM/NetApp or the virtual system.

Later I found out it is by design not to fail over when a whole datacenter is lost; you have to do a manual forced cluster takeover, including an uncomfortable failback afterwards.

Good to know that VNX has nothing close to MetroCluster. I will ask them how they handle situations like that.

> if you are not happy with ibm, is there any reason why you wouldnt buy directly from netapp and its partners to have a proper FAS3240AE and not an N Series?

Well, we do have a sizable investment in the IBM N series, and while I really feel that moving to "pure" NetApp would give us better support, earlier access to code, etc., it would mean replacing all the hardware for support-contract reasons.

I doubt I could convince any boss to make that investment unless NetApp steps in with a sizable buyback. But it will be up for discussion, as IBM has not been on the list of cleared companies for storage resales to the Swedish government (including universities) for three years now (public-tender reasons, nothing strange about that; Hitachi is missing too).

Re: Netapp FAS vs EMC VNX

> NetApp has transparent cluster failover in a MetroCluster environment.

There is a transparent failover IF PROPERLY CONFIGURED ;-) We strongly suggest that our customers follow the given best practices, and we actually plan and roll out those practices with them, e.g. setting proper timeouts, installing the host utilities, etc.

As for your total-DR scenario: a NetApp MetroCluster cannot handle a site disaster in which the complete datacenter suffers a power outage; you have to do a "cf forcetakeover -d" then. There are a few caveats we lead our customers around, so you have simply been improperly consulted ;-( We have several big stretch/fabric MetroClusters that take over / give back within 10-15 seconds without any system going down.
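For reference, the manual site-disaster sequence looks roughly like this on a 7-Mode console. This is only a sketch: the exact steps, especially re-joining and resyncing the mirrored aggregates before giveback, depend on the ONTAP version and should follow the MetroCluster disaster recovery guide.

```
# On the surviving node, after confirming the other site is really down
# (and is fenced off so it cannot come back on its own):
cf status                # check HA state; partner should show as down
cf forcetakeover -d      # forced takeover for a site disaster,
                         # splitting the mirrored aggregates

# Much later, once the failed site has power again and the aggregate
# mirrors have been re-joined and resynced per the DR procedure:
cf giveback              # return the partner's resources
```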

> if you are not happy with ibm, is there any reason why you wouldnt buy directly from netapp and its partners to have a proper FAS3240AE and not an N Series?

OK, that seems like a political/sales issue; you might be able to solve it with your local NetApp sales representative. At the very least, I'd stick with IBM before buying an EMC machine.

good luck mate! ;-)

Re: Netapp FAS vs EMC VNX

Hi, D from NetApp here.

Autotiering is a really great concept - but also extremely new and unproven for the vast majority of workloads.

Look at any EMC benchmarks - you won't really find autotiering.

Nor will you find even FAST Cache numbers. All their recent benchmarks have been with boxes full of SSDs - no pools, just old-school RAID groups.

Another way of putting it:

They don't show how any of the technologies they're selling you affect performance (whether the effect is positive or negative - I will not try to guess).

If you look at their best practices document for performance and availability, you will see:

  • 10-12 drive RAID6 groups recommended instead of RAID5 for large pools and SATA especially
  • Thin provisioning reduces performance
  • Pools reduce performance vs normal RAID groups
  • Pools don't stripe data like you'd expect
  • Single-controller ownership of drives recommended
  • Can't mix RAID types within a pool
  • Caveats when expanding pools - ideally, doubling the size is the optimal way to go
  • No reallocate/rebalancing available with pools (with MetaLUNs you can restripe)
  • Trespassing pool LUNs (moving them to the other controller - normal during a controller failure, but many other things can trigger it) can result in lower performance, since both controllers end up doing I/O for that LUN. Hence pool LUNs need to stay on the controller they started on; otherwise a migration is needed.
  • Can't use thin LUNs for high-bandwidth workloads
  • ... and many more.

What I'm trying to convey is this simple fact:

The devil is in the details. Messaging is one thing ("it will autotier everything automagically and you don't have to worry about it"), reality is another.

For autotiering to work, a significant portion of your working set (the stuff you actively use) needs to fit on fast storage.

So, let's say you have a 50TB box.

Rule of thumb (that EMC engineers use): at least 5% of a customer's workload is really "hot". That goes on SSD (cache and tier), so you need 2.5TB usable of SSD - about a shelf of 200GB SSDs, maybe more (depending on RAID levels).

Then the idea is you have another percentage of medium-speed disk to accommodate the medium-hot working set: 20%, or 10TB in this case.

The rest would be SATA.

The 10-million-dollar question is:

Is it more cost-effective to have the autotiering and caching software (it's not free) + 2.5TB of SSD, 10TB of SAS and 37.5TB of SATA, or...

50TB SATA + NetApp Flash Cache?

Or maybe 50TB of larger-sized SAS + NetApp Flash Cache?

The 20-million-dollar question is:

Which of these configurations will offer more predictable performance?
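To put rough numbers on the comparison above, here is a tiny sketch of the sizing arithmetic. It only encodes the 5% hot / 20% warm rules of thumb quoted earlier, on raw capacity: RAID overhead, software licensing, and drive-count rounding are all ignored.

```python
def tier_sizes(total_tb, hot_pct=0.05, warm_pct=0.20):
    """Split a total capacity into SSD / SAS / SATA tiers
    using the hot/warm working-set rules of thumb."""
    ssd = total_tb * hot_pct            # "hot" working set -> SSD
    sas = total_tb * warm_pct           # "warm" working set -> SAS
    sata = total_tb - ssd - sas         # everything else -> SATA
    return ssd, sas, sata

# The 50TB example from the post:
ssd, sas, sata = tier_sizes(50)
print(f"SSD: {ssd} TB, SAS: {sas} TB, SATA: {sata} TB")
```

For the 50TB box this reproduces the 2.5TB SSD / 10TB SAS / 37.5TB SATA split above; the open question remains whether that beats 50TB of SATA or SAS plus Flash Cache on cost and predictability.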


Re: Netapp FAS vs EMC VNX

Hi D,

> The devil is in the details. Messaging is one thing ("it will autotier everything automagically and you don't have to worry about it"), reality is another.

Couldn't agree more - with both sentences actually.

I was never impressed with EMC FAST: 1GB granularity really sucks in my opinion, and it seems they have even more skeletons in their cupboard. That said, Compellent autotiering always looked more, ehm, 'compelling' and mature to me. I agree it may be only a gimmick in many real-life scenarios (not all, though), yet from my recent conversations with many customers I have learned that they are buying this messaging: "autotiering solves all your problems as the new, effortless ILM".

At the end of the day many deals are won (or lost) on the back of a simple hype...



Re: Netapp FAS vs EMC VNX

Compellent is another interesting story.

Most people don't realize that Compellent autotiers SNAPPED data, NOT production data!

So the idea is: you take a snap, and the box divides your data up into pages (2MB by default; can be less if you don't need the box to grow as large).

Then if a page is not "hit" hard, it can move to SATA, for instance.

What most people also don't know:

If you modify a page that has been tiered, here's what happens:

  1. The tiered page stays on SATA
  2. A new 2MB page gets created on Tier1 (usually mirrored), containing the original data plus the modification - even if only a single byte was changed
  3. Once the new page gets snapped again, it will be eventually moved to SATA
  4. End result: 4MB worth of tiered data to represent 2MB + a 1-byte change

Again, the devil is in the details. If you modify your data very randomly (it doesn't even have to be a lot of modifications), you may end up dirtying a lot of the snapped pages and will end up with very inefficient space usage.
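A quick back-of-the-envelope model of that effect. Assumptions: the 2MB default Compellent page vs a 4K block granularity, each scattered small write dirtying exactly one snapped unit, and all metadata ignored.

```python
PAGE = 2 * 1024 * 1024    # 2MB Compellent page (default)
BLOCK = 4 * 1024          # 4K block granularity

def snap_overhead_bytes(n_small_writes, unit_size):
    """Extra space consumed when each scattered small write forces a
    full snapped unit (page or block) to be rewritten on Tier1."""
    return n_small_writes * unit_size

MB = 1024 * 1024
# 1,000 scattered 1-byte modifications to snapped data:
print(snap_overhead_bytes(1000, PAGE) / MB, "MB extra with 2MB pages")
print(snap_overhead_bytes(1000, BLOCK) / MB, "MB extra with 4K blocks")
```

With these assumptions the page-based scheme burns 2000MB where block granularity burns under 4MB - a 512x difference, which is the PAGE/BLOCK ratio.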

Which is why I urge all customers looking at Compellent to ask those questions and get a mathematical explanation from the engineers regarding how snap space is used.

On NetApp, we are extremely granular due to WAFL. The smallest snap size is very tiny indeed: pointers, some metadata, plus whatever 4K blocks were modified.

Which is what allows some customers to have, say, over 100,000 snapshots on a single system (a large bank that everyone knows is doing that).