FAS and V-Series Storage Systems Discussions

FAS3020c cluster in 100% sync, how?



We have set up a FAS3020 here, fully clustered using single-path cabling between the controllers and shelves. We have to do it this way because we use SAS and SATA shelves that can't be mixed in the same loop, so we actually need 2 single-path loops between the filers.

That said, I'm wondering whether Filer01 and Filer02 can be kept identical in aggregates and volumes, so that when a volume is added on Filer01 (which will be the leading node) it is also added on Filer02, the data between the shelves is synced by default, and you can actually do load balancing on it.

We have the licenses for clustering and so on.

Is this possible, and in which direction should I be thinking?





Hi All,

I'm again testing some new setup on this as I'm doing the following:

I create a volume on an aggregate on Filer01, and I actually want that volume to be created automatically on Filer02 as well, so it can be "mirrored".

The reason I want this is that I add the volume to Filer01 via the API, and I don't want to add the same volume to the other filer manually every time, especially since it can happen that I want to resize the volume on Filer01.

So it seems I need to mirror at the aggregate level to my other filer, which is also the failover head in the setup in the picture above. It seems you need a cluster_remote license for this and have to build a MetroCluster, which in this case would actually be on the same site with a two-meter cable between the nodes?

Because aggregate sync sits a level below SnapMirror, it should be more flexible, and everything "above" it would be synced automatically; think of it as RAID 1 over a network.

What are my best options here? I have the feeling it's a MetroCluster?


Hi Matt

You are correct in finding that you cannot SnapMirror at the aggregate level. The only option at the aggregate level is local SyncMirror, and it requires you to provide separate, redundant disk shelves with connections to the controller(s).

If you have many volumes to be created and SnapMirrored, and you are referring to the work this means for the storage admin, you could also consider using the Workflow Automation (WFA) tool, available for free on the NetApp Support site, and use the workflows already provided for this.

So you have 3 options (most expensive first):

1. Go for MetroCluster

2. Buy more disk shelves and connections and use local SyncMirror

3. Download WFA and use the available workflows
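For option 2, the 7-Mode CLI looks roughly like this. Disk counts, the aggregate name, and the license key below are placeholders, not your actual config; check `sysconfig -r` for your pool layout first:

```shell
# Local SyncMirror sketch (7-Mode); names and counts are examples only.
# License the feature first:
license add <syncmirror_local_license_key>

# Create a new mirrored aggregate: with -m, ONTAP takes half the disks
# from pool0 and half from pool1 (8 disks total in this example):
aggr create aggr1 -m 8

# Or mirror an aggregate that already exists:
aggr mirror aggr1

# Verify both plexes are online and in sync:
aggr status -v aggr1
sysconfig -r
```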

Hope this helps,



Hi Peter,

Thank you for your reply.

I'm very interested in local SyncMirror, but how would it need to be cabled when you also want to use active/active?

Active/active works perfectly: I have 2 switches trunked together over a LAG, and each filer is on its own switch.

The issue is that I have both FC and SATA shelves that I don't want to mix; some people say you can, but I would actually like to stay away from that.

Let's say:

- I have 4 shelves: 2x FC and 2x SATA

- 2 Filers

To get local SyncMirror to work, I need 4 loops on one filer: each aggregate needs its own pool, and each shelf needs its own loop, since a pool requires its own loop.

The issue there is that I don't know how I'm going to attach my second filer for active/active, as it doesn't have any disks to boot from. I also have the issue that when I assign ownership of all disks in my active/active setup to Filer01 (except the 3 for Filer02), I cannot add them to loop1, because those 3 disks are still in pool0...
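For what it's worth, if the system uses software disk ownership, pool membership can be changed per disk from the CLI. A hedged sketch follows; the disk name and hostname are examples, not taken from this setup:

```shell
# Show current ownership and pool assignment of every disk:
disk show -v

# Assign a spare disk (example name 0c.16) to Filer01 in pool 1;
# disks for the two SyncMirror plexes must come from opposite pools:
disk assign 0c.16 -p 1 -o filer01

# Spares are listed per pool here:
sysconfig -r
```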

I hope you can understand my issue here

Thanks so far!



I fixed this by connecting a single-path MetroCluster; it works great!



As Peter said, the only real option here is to use aggregate mirroring between the two controllers.  So here is what it would look like:

Filer01:aggr1 (read/write) is mirrored to Filer02:aggr1 (read only)

I'm not sure if this is quite what you were referring to, but with 7-Mode (as opposed to Cluster-Mode / C-Mode) this is really the only way to achieve what I think you are describing.

But that brings me to a big question: why?

If this is 7-Mode and is configured as an HA pair, then you have fault tolerance in the event of a controller failure, as the other controller will take over; but that requires that each loop be present on each controller.

Each 3020 controller has four onboard FC ports. I'm assuming the following configuration (DS14 disk shelves with AT-FCX modules, and no FC PCI cards installed):

Filer01, port 0a - initiator - connected to loop1 (SATA)

Filer01, port 0b - initiator - connected to loop2 (SAS)

Filer01, port 0c - not used, or set as a target for client connections

Filer01, port 0d - not used, or set as a target for client connections

Filer02, port 0a - initiator - connected to loop1 (SATA)

Filer02, port 0b - initiator - connected to loop2 (SAS)

Filer02, port 0c - not used, or set as a target for client connections

Filer02, port 0d - not used, or set as a target for client connections

If this is the case, then you are in HA but don't have dual paths to each loop, which is not recommended but will work.

Is this accurate?



If this is 7-Mode and is configured as an HA pair, then you have fault tolerance in the event of a controller failure, as the other controller will take over; but that requires that each loop be present on each controller.

This is indeed what I have now, like here but with 4 shelves spread over 2 loops:


Filer01, port 0a - initiator - connected to loop1.1 (SAS)

Filer01, port 0b - not used, or set as a target for client connections

Filer01, port 0c - initiator - connected to loop2.1 (SAS)

Filer01, port 0d - not used, or set as a target for client connections

Filer02, port 0a - initiator - connected to loop2.1 (SAS)

Filer02, port 0b - not used, or set as a target for client connections

Filer02, port 0c - initiator - connected to loop1.1 (SAS)

Filer02, port 0d - not used, or set as a target for client connections

This is what I need to do, with the same again for the SATA loops:

Filer01, port 0b => shelf(s) => Filer02, port 0d

Filer02, port 0b => shelf(s) => Filer01, port 0d

Am I thinking along the right lines?


Sorry, I had a brain fart with my example: you would never want to use 0a and 0b as the only disk connections, as those share the same ASIC and would be a single point of failure.

So, you are in a single-path HA config. To configure multipath, you need to add the following connections:

Filer01, Port 0b to Right Loop, Shelf 2, Top AT-FCX module

Filer01, Port 0d to Left Loop, Shelf 3, Bottom AT-FCX module

Filer02, Port 0b to Left Loop, Shelf 3, Top AT-FCX module

Filer02, Port 0d to Right Loop, Shelf 2, Bottom AT-FCX module

So back to my question: why do you want a volume on Filer01 to also be present on Filer02? You _can_ do that with aggregate mirroring, but the mirror copy is read-only, and access to it is not load balanced between controllers in 7-Mode (it can be in Cluster-Mode). If it's fault tolerance you're after, you already have that: Filer02 will assume the identity of Filer01, and vice versa, if one of the controllers fails.
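Once those cables are in, you can confirm from each head that every disk really has two paths. These are standard 7-Mode commands, though the exact output layout varies by ONTAP release:

```shell
# Lists each disk with its primary and secondary port;
# in a correct multipath HA config no disk should show only one path:
storage show disk -p

# Confirm the HA pair itself is healthy and takeover is possible:
cf status
```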


Hi Paul,

Thank you, I wanted to make sure about the controllers, but I would like to have a "realtime backup/replication" next to my production, just in case. Filer02 can take over Filer01 (I already tested this), but then I still won't have an identical copy of the data. That is what I would like to have between the two loops.

My other question is how IPs are handled in such a takeover; are they migrated using a heartbeat mechanism?

Thanks again.




In this case, SnapMirroring each volume to the other controller would be my recommendation. This doubles your storage requirements but provides a copy of the data on different disks. Of course, my _real_ recommendation would be to SnapMirror that off-site but that may not be an option.
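A rough outline of setting that up in 7-Mode follows; the volume names, size, and hostnames are examples, and note that the destination volume must be restricted before the baseline transfer:

```shell
# On the source (Filer01): enable SnapMirror and allow the partner access.
options snapmirror.enable on
options snapmirror.access host=filer02

# On the destination (Filer02): create a same-size-or-larger volume
# and restrict it so the baseline can write into it.
vol create vol1_mirror aggr1 100g
vol restrict vol1_mirror

# Still on Filer02: run the baseline transfer, then check progress.
snapmirror initialize -S filer01:vol1 filer02:vol1_mirror
snapmirror status
```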

Failover works because the system state of Filer01 is synchronously mirrored to Filer02 through the cluster interconnects, and vice versa. This means that when Filer01 fails, Filer02 creates a virtual instance of Filer01 with all of Filer01's addresses (WWNs, IPs, etc.) intact. A bit simplistic, but that's the net effect.
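Concretely, in 7-Mode each interface line in /etc/rc names the partner interface whose address it should adopt on takeover. The address and interface names below are examples, not from this setup:

```shell
# /etc/rc on Filer01: the "partner" keyword tells the surviving head
# which of its own interfaces should host this IP during takeover.
ifconfig e0a 192.168.1.10 netmask 255.255.255.0 partner e0a
```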


Hi Paul,

Thanks, I will dive into SnapMirror; this is indeed the way I want it. But is it done on, for example, a 5-minute schedule, or really in realtime?

The offiste backup is a different chapter and is already planned for.





It really depends on your requirements and load. If you need synchronous mirroring, you can set up SnapMirror to do that, but there are workload considerations on the controllers, as each write must be confirmed by both storage controllers (to NVRAM) before a write acknowledgement goes to the host. If synchronous replication is a hard requirement, then perhaps aggregate mirroring between controllers is the better solution.
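The schedule (or sync mode) is chosen per relationship in /etc/snapmirror.conf on the destination. As a sketch, with example filer and volume names:

```shell
# /etc/snapmirror.conf on filer02 -- pick ONE line per relationship.

# Async, every 5 minutes (fields: minute hour day-of-month day-of-week):
filer01:vol1 filer02:vol1_mirror - 0-55/5 * * *

# Semi-synchronous (host is acknowledged before the mirror write lands):
filer01:vol1 filer02:vol1_mirror - semi-sync

# Fully synchronous (both heads confirm before the host is acknowledged):
filer01:vol1 filer02:vol1_mirror - sync
```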

I can understand the perceived need for this level of data protection but in most of the cases that I have ever been involved with, there wasn't a real scenario that required this level of data protection at the local site.

If your data is really that important, then it absolutely MUST be replicated off-site but I'm sure you are fighting that issue with the PHBs.


Hi Paul,

I wanted to thank you for your help here.

It's indeed necessary to have such a setup and this level of data protection. The idea is that the SnapMirror copy will then be mirrored externally over a VPN. I think we don't need metro clustering in that case.

I'm only able to do a volume mirror, not an aggregate one, am I right?

The only issue I have with this setup, because I need to use 2 loops (one for SAS and one for SATA) per controller, is that I need to perform the giveback manually, since CIFS needs a restart in a single-path cluster.


jumping back in...

You can have "automatic" failover by setting up a stretched MetroCluster; check the ONTAP HA documentation for more details.

You still have the two controllers with their shelves, but you'll need to double the shelves in order to be able to mirror all the aggregates (yes, this mirrors at the aggregate level, not like SnapMirror). Then you'll have 99% of possible failure scenarios covered. Manual intervention is only necessary when a complete stack is lost (fire, water, etc.) and you want ALL the data and services on the remaining cluster controller; everything else is automagic.

SnapMirror lets you "replicate" the data within a volume (or qtree) to another volume (or qtree) over IP. The switchover to the replicated data in case of disaster is a manual process...

Pick what makes the most sense.


The easiest way to get this accomplished would be to use local SyncMirror. This mirrors at the aggregate level and would therefore give you the "if I create a volume here, create it there too" behaviour, somewhat... but no load balancing...

But I guess that to get the complete functionality you're after, you would need to consider running ONTAP in Cluster-Mode (not 7-Mode). And that is a completely different setup from what you have today.

But maybe I misunderstand your question...


Hi Peter,

Thanks for your reply.

The load balancing can be done, I think, IF the data is the same on both nodes. The load balancing will be at the server level, while the storage part will be more of a failover that is really accurate. So indeed, load balancing is the wrong word here.

What do you mean by Cluster-Mode? Multipath? The local SyncMirror sounds good. Under which license does it fall?


