FAS and V-Series Storage Systems Discussions

Re: Netapp FAS vs EMC VNX


> netapp has transparent cluster failover in a metro cluster environment.

there is a transparent failover IF PROPERLY CONFIGURED 😉 we strongly advise our customers to follow the given best practices, and we actually plan and roll out these practices with them, e.g. setting proper timeouts, installing the host utilities, etc.

for your total DR scenario: a NetApp MC cannot handle a site disaster automatically if the complete datacenter suffers a power outage - you have to run "cf forcetakeover -d" then. there are a few caveats we guide our customers around, so you have simply been improperly consulted ;-( we have several big stretch/fabric MetroClusters that take over/give back within 10-15 seconds without any system going down.

> if you are not happy with IBM, is there any reason why you wouldn't buy directly from NetApp and its partners to get a proper FAS3240AE instead of an N series?

ok, seems like a political/sales issue. you might be able to solve it with your local NetApp sales representative - or at least I'd stick with IBM before buying an EMC machine

good luck mate! 😉

Re: Netapp FAS vs EMC VNX


Hi, D from NetApp here (www.recoverymonkey.org).

Autotiering is a really great concept - but also extremely new and unproven for the vast majority of workloads.

Look at any EMC benchmarks - you won't really find autotiering.

Nor will you even find FAST Cache numbers. All their recent benchmarks have been with boxes full of SSDs - no pools, old-school RAID groups, etc.

Another way of putting it:

They don't show how any of the technologies they're selling you affect performance (whether the effect is positive or negative - I will not try to guess).

If you look at the best practices document for performance and availability (http://www.emc.com/collateral/hardware/white-papers/h5773-clariion-best-practices-performance-availability-wp.pdf) you will see:

  • 10-12 drive RAID6 groups recommended instead of RAID5 for large pools and SATA especially
  • Thin provisioning reduces performance
  • Pools reduce performance vs normal RAID groups
  • Pools don't stripe data like you'd expect (check here: http://virtualeverything.wordpress.com/2011/03/05/emc-storage-pool-deep-dive-design-considerations-caveats/)
  • Single-controller ownership of drives recommended
  • Can't mix RAID types within a pool
  • Caveats when expanding pools - ideally, doubling the size is the optimal way to go
  • No reallocate/rebalancing available with pools (with MetaLUNs you can restripe)
  • Trespassing pool LUNs (switching them to the other controller - normal during controller failure but many other things can trigger it) can result in lower performance since both controllers will try to do I/O for that LUN - hence, pool LUNs need to stay put on the controller they started on, otherwise a migration is needed.
  • Can't use thin LUNs for high-bandwidth workloads
  • ... and many more, for more info read this: http://recoverymonkey.org/2011/01/13/questions-to-ask-emc-regarding-their-new-vnx-systems/

What I'm trying to convey is this simple fact:

The devil is in the details. Messaging is one thing ("it will autotier everything automagically and you don't have to worry about it"), reality is another.

For autotiering to work, a significant portion of your working set (the stuff you actively use) needs to fit on fast storage.

So, let's say you have a 50TB box.

Rule of thumb (that EMC engineers use): At least 5% of a customer's workload is really "hot". That goes on SSD (cache and tier). So you need 2.5TB usable of SSD, or about a shelf of 200GB SSDs, maybe more (depending on RAID levels).

Then the idea is you have another percentage of medium-speed disk to accommodate the medium-hot working set: 20%, or 10TB in this case.

The rest would be SATA.
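As a back-of-envelope check, the sizing rule above can be sketched in a few lines. (The 5%/20% splits are just the illustrative figures from this post, not official EMC or NetApp sizing guidance.)

```python
# Rough sketch of the tier-sizing rule of thumb described above.
# hot_frac / warm_frac are the illustrative 5% / 20% figures from
# this post, not official sizing guidance.

def size_tiers(total_tb, hot_frac=0.05, warm_frac=0.20):
    """Split a usable capacity (TB) into SSD / SAS / SATA tiers."""
    ssd = total_tb * hot_frac       # "hot" working set -> SSD (cache + tier)
    sas = total_tb * warm_frac      # "medium-hot" working set -> SAS/FC
    sata = total_tb - ssd - sas     # everything else -> SATA
    return ssd, sas, sata

ssd, sas, sata = size_tiers(50)
print(ssd, sas, sata)   # roughly 2.5 / 10 / 37.5 TB
```

That 2.5TB of SSD is about a shelf of 200GB drives before RAID overhead, which is where the cost question below comes from.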

The 10-million-dollar question is:

Is it more cost-effective to have the autotiering and caching software (it's not free) + 2.5TB of SSD, 10TB SAS and 37.5TB SATA or...

50TB SATA + NetApp Flash Cache?

Or maybe 50TB of larger-sized SAS + NetApp Flash Cache?

The 20-million-dollar question is:

Which of the 2 configurations will offer more predictable performance?


Re: Netapp FAS vs EMC VNX


Hi D,

> The devil is in the details. Messaging is one thing ("it will autotier everything automagically and you don't have to worry about it"), reality is another.

Couldn't agree more - with both sentences actually.

I was never impressed with EMC FAST - 1GB granularity really sucks in my opinion, and it seems they have even more skeletons in their cupboard. That said, Compellent autotiering always looked more, ehm, 'compelling' and mature to me. I agree it may be only a gimmick in many real-life scenarios (though not all), yet from my recent conversations with many customers I've learned they are buying this messaging: "autotiering solves all your problems as the new, effortless ILM".

At the end of the day, many deals are won (or lost) on the back of simple hype...



Re: Netapp FAS vs EMC VNX


Compellent is another interesting story.

Most people don't realize that Compellent autotiers SNAPPED data, NOT production data!

So, the idea is you take a snap, and the box divides your data into pages (2MB by default; it can be smaller if you don't need the box to grow as large).

Then if a page is not "hit" hard, it can move to SATA, for instance.

What most people also don't know:

If you modify a page that has been tiered, here's what happens:

  1. The tiered page stays on SATA
  2. A new 2MB page gets created on Tier1 (usually mirrored), containing the original data plus the modification - even if only a single byte was changed
  3. Once the new page gets snapped again, it will be eventually moved to SATA
  4. End result: 4MB worth of tiered data to represent 2MB + a 1-byte change

Again, the devil is in the details. If you modify your data very randomly (doesn't have to be a lot of modifications), you may end up modifying a lot of the snapped pages and will end up with very inefficient space usage.
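The arithmetic behind that 4MB-for-a-1-byte-change example can be sketched as follows. (This is a hypothetical back-of-envelope model of the behavior described above, not Compellent's actual implementation.)

```python
# Back-of-envelope model of the page-level snapshot behavior described
# above: any write to an already-tiered page leaves the old page in
# place and re-creates the whole page on Tier 1, even for a 1-byte change.

PAGE = 2 * 1024 * 1024  # 2 MB default page size

def space_after_modifications(pages_tiered, pages_touched):
    """Bytes consumed when `pages_touched` of the tiered pages each
    receive at least one modification (of any size)."""
    old = pages_tiered * PAGE    # tiered copies stay on SATA
    new = pages_touched * PAGE   # full new pages created on Tier 1
    return old + new

# 1 tiered 2MB page + a 1-byte change -> 4 MB consumed
print(space_after_modifications(1, 1) / (1024 * 1024))  # 4.0
```

Scale the same model up to a LUN with scattered random writes and the amplification becomes the snap-space problem described above.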

Which is why I urge all customers looking at Compellent to ask those questions and get a mathematical explanation from the engineers regarding how snap space is used.

On NetApp, we are extremely granular due to WAFL. The smallest snap size is very tiny indeed (pointers, some metadata plus whatever 4K blocks were modified).

Which is what allows some customers to have, say, over 100,000 snaps on a single system (large bank that everyone knows is doing that).


Re: Netapp FAS vs EMC VNX


We just published a white paper on the NetApp Virtual Storage Tier. The intent here is to show how intelligent caching provides a level of "virtual tiering" without the need to physically move any data among HDD types.

Hope this sheds some light on our approach.

Re: Netapp FAS vs EMC VNX


Hi D,

> Most people don't realize that compellent autotiers SNAPPED data, NOT production data!

Yep, I wasn't aware of this either. If that's the case, why did Dell actually buy them? Didn't they notice?

So how about 3PAR autotiering? Marketing-wise they've been giving me a hard time recently, so I would love to discover a few skeletons in their cupboard too!

Kind regards,


Re: Netapp FAS vs EMC VNX


Is the problem in the way they do it or the granularity of the block?

There is talk that Dell/Compellent will move to 64-bit software soon, enabling them to use smaller blocks, and then the granularity will probably no longer be a problem.

You could turn the argument around and say that ONTAP never transparently tiers snapshots down to cheaper disk, no matter how seldom you access them.

So you will be wasting SSD/FC/SAS disk for data that you might, maybe, need once in a while.

Re: Netapp FAS vs EMC VNX


Well… I guess NetApp's answer to this would be SnapVault.

For me, one of the main downsides of NetApp snapshots is the inability to switch between them - a volume restore wipes out everything after the restore point, and file restore is unacceptably slow (which I still do not understand) and not really viable for many files.

CLARiiON can switch between available snapshots without losing them. Not sure about Celerra; I have no experience with its snapshot implementation.

Re: Netapp FAS vs EMC VNX


From what I'm told, with SnapVault users lose the ability to do "previous versions" restores of files from SnapVaulted snapshots, right?

So the "transparently" caveat kicks in, and a system administrator has to be involved, with all the extra work and time it takes to restore a file.

Re: Netapp FAS vs EMC VNX


That’s true (except that previous versions do not work with block access anyway).

Does it (previous versions from snapshots) work with other vendors for SMB? Celerra in the first place (given we're discussing VNX)?

Re: Netapp FAS vs EMC VNX



The granularity is part of the problem (performance is the other). Page size is 2MB now; if you drop it to 512K, the box can't grow as large.

With the 64-bit software they claim they might be able to go really small, like 64K (unconfirmed), but here's the issue...

The way Compellent does RAID with pages is two-fold:

  • If RAID-1, then a page needs to go to 2 drives at least (straightforward)
  • If RAID-5/6, a page is then split evenly among the number of drives you've told it to use for the RAID group (say, 6). One or two of the pieces will be parity, the rest data.

It follows that for RAID-1 the 64K page could work (64K written per drive - reasonable), but for RAID-5 it will result in very small pieces going to the various drives, not the best recipe for performance.
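The per-drive numbers work out as below. (A sketch based on my reading of the split described above; the 64K page size is the unconfirmed figure from this post, used purely for illustration.)

```python
# Per-drive chunk size when a page is split across a RAID group, per the
# two-fold scheme described above. The 64K page is the unconfirmed
# hypothetical figure mentioned in the post.

def data_chunk_kb(page_kb, drives, non_data_drives):
    """KB of page data landing on each data drive; parity (or mirror)
    pieces are the same size."""
    return page_kb / (drives - non_data_drives)

# RAID-1 mirror over 2 drives: the whole 64K page per drive - a
# reasonable I/O size
print(data_chunk_kb(64, 2, 1))   # 64.0

# RAID-5 across 6 drives (5 data + 1 parity): tiny ~12.8K pieces
# per drive - not the best recipe for performance
print(data_chunk_kb(64, 6, 1))   # 12.8
```

The smaller the page, the smaller those per-drive pieces get, which is the performance tension behind the granularity question.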

At any rate, this is all conjecture since the details are not finalized, but even at a hypothetical 64K, if you have random modifications all over a LUN (not even that many), you will end up using a lot of snap space.

The more stuff you have, the more this all adds up.

My argument would be that by design, ONTAP does NOT want to move primary snap data around since that's a performance problem other vendors have that we try very, very hard to avoid. Creating deterministic performance is very difficult with autotiering - case in point, every single time I've displaced Compellent it wasn't because they don't have features. It was performance-related. Every time.

We went in, put in 1-2 kinds of disk + Flash Cache, problem solved (in most cases performance was 2-3x at least). It wasn't even more expensive. And THAT is the key.

Regarding Snapvault: it has its place, but I don't consider it a tiering mechanism at all.

I wish I could share more in this public forum but I can't. Suffice it to say, we do things differently and as long as we can solve your problems, don't expect us to necessarily use the same techniques other vendors use.

For example, most people want

  1. Cheap
  2. Easy
  3. Reliable
  4. Fast

If we can do all 4 for you, be happy but don't dictate HOW we do it


Re: Netapp FAS vs EMC VNX


That's why NetApp invented FlexClone: you do NOT need to completely wipe the source - you can clone the backup and split it if that fits your needs.

Netapp FAS vs EMC VNX


EMC doesn't have MetroCluster in the VNX, but offers VPLEX Metro as an equivalent configuration. VNX + VPLEX can be the same cost as a NetApp MetroCluster. With their 5.0 code they have transparent failover (no "cf takeover"-like command) if you install a witness at a third site, running in a VM or on a standalone server.

It would be good for NetApp to offer similar witness support to handle the total-datacenter-failure/split-brain scenario. I'm currently comparing NetApp MetroCluster and EMC VPLEX Metro on my own blog: http://dctools.blogspot.com.

No one ever has it all. EMC's VPLEX relies on VNX or RecoverPoint for snapshots; NetApp has snapshots nicely integrated into one package. NetApp doesn't offer redundant nodes at each datacenter; EMC VPLEX does. NetApp MetroCluster will have storage traffic trombone between sites; EMC VPLEX offers local access at each site.

The point is, no one vendor has everything. Most are wearing blinders to what others can do and to their own limitations. I would love NetApp to offer sub-LUN tiering within an aggregate. I would love NPIV-style virtual target FC ports into a vFiler. I would love NetApp to offer a web-based GUI for all the administrative commands people use (vFiler...).

VNX may have two different OSes for block and NAS, but customers don't usually see them or have to learn them, as Unisphere covers that up.

I am a huge NetApp fan and sell a lot of NetApp boxes. They work well and offer some of the richest functionality, but the web-based GUI often lags behind NetApp's best features.

Netapp FAS vs EMC VNX


Most disk storage vendors have support for previous-versions restores.

Saying SnapVault is the answer is like saying RAINfinity is auto-tiering. It's a separate, poorly integrated product/option/feature.

Re: Netapp FAS vs EMC VNX


NetApp does provide witness support (MetroCluster tiebreaker); in the past it was a separate solution (I believe integrated with OM); today it is offered as part of ApplianceWatch PRO. See, for example, http://communities.netapp.com/servlet/JiveServlet/downloadBody/6314-102-1-9571/Partner%20Academy%20Workshop%20MetroCluster%20June%202010.pptx or http://communities.netapp.com/servlet/JiveServlet/download/49558-22659/ApplianceWatchPROBestPracticesGuide.pdf

Unfortunately, it is very poorly documented and marketed; all that's available is a NetApp-internal link, a couple of paragraphs in the ApplianceWatch PRO documentation, and whatever you can find on the community or KB sites.

You mention in your blog that NetApp MetroCluster needs 4 FC connections - do you count the backend only? Because MC requires 2 ISLs; 4 can be used but are optional.

I wonder how VPLEX implements simultaneous write support at both sites without introducing read latency for local access (due to the need to verify that data has not been changed remotely).

Netapp FAS vs EMC VNX



You should check out Avere Systems (www.averesystems.com) or send me email at jtabor@averesystems.com. We are working with lots of NetApp customers. Rather than overhauling your entire environment to EMC, we can bring the benefits you need to your existing NetApp environment. Here are the benefits we offer:

1) We add tiering to existing NetApp systems.

2) Our tiering appliances cluster so you can easily scale the tiering layer as demand grows.

3) We let you use 100% SATA on the NetApp.

4) We support any storage that provides an NFS interface, which opens up some cost-effective SATA options for you. 

5) We create a global namespace across all the NAS/SATA systems that we are connected to.

6) We tier over the WAN also to enable cloud infrastructure to be built. 

