
Load balancing between Filers for LUNs?

My Exchange admin needs two LUNs for the DAG. I am trying to see what the best practice is when creating LUNs that will mirror each other.

Should I create each LUN in a different aggregate on the same filer?

Should I create each LUN in different aggregates on different filers?

Or am I just overthinking this and it doesn't even matter in the end?

I'm asking this because we have just started loading up our FAS3270 (ONTAP 8.0.1P3 7-Mode) and I want to make sure I understand how to spread out the usage so that I get the best performance. Thanks.

Re: Load balancing between Filers for LUNs?

Hi,

What I understand from your question is that you need to mirror a LUN so that you have an exact copy somewhere else.

SnapMirror mirrors at the volume or qtree level.

Usually what we do is have the primary LUN in aggrA on controller1.

SnapMirror it to the secondary controller (to another aggregate on controller2).

That way, if your primary filer goes down, you will still have your data on the secondary filer. Once the primary filer comes back, you can resync and update the SnapMirror relationship as before.
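In case it helps, here is a rough sketch of what that volume SnapMirror setup looks like in 7-Mode. The filer, aggregate, and volume names (filer1, filer2, aggrB, exch_db1, exch_db1_mir) and the sizes and schedule are just placeholders for illustration, not your actual config.

On filer2, create a restricted destination volume of at least the source size, then pull the baseline from filer1:

    vol create exch_db1_mir aggrB 500g
    vol restrict exch_db1_mir
    snapmirror initialize -S filer1:exch_db1 filer2:exch_db1_mir

A schedule entry in /etc/snapmirror.conf on filer2 (here, updates every 15 minutes):

    filer1:exch_db1 filer2:exch_db1_mir - 0,15,30,45 * * *

You can also kick off a manual transfer from the destination with "snapmirror update exch_db1_mir", and use "snapmirror resync" after a break to re-establish the relationship.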

Is this what you are looking for?

Thanks,

  Arun

Re: Load balancing between Filers for LUNs?

I'm sorry, I didn't really explain this very well. In an Exchange DAG, active/passive copies are stored on each datastore, so they are mirrors of each other, but it works more like a cluster. SnapMirror wouldn't really play a role in this type of situation because Exchange handles the copies.

I'm more looking for a best practice on spreading out LUNs to get the best performance, or to avoid causing too many issues on aggregates. Thank you for the info, though.

Re: Load balancing between Filers for LUNs?

Hi,

Due to the nature of an Exchange DAG, this would be my preferred option:

- create each LUN in different aggregates on different filers

Should a filer fail for whatever reason, the DAG will remain online.
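For reference, a rough sketch of what that layout could look like in 7-Mode, with one DAG copy served from each controller. The aggregate, volume, LUN, and igroup names, the sizes, and the initiator IQNs below are made-up placeholders; adjust them for your environment.

On filer1 (LUN for the active copy, mapped to the first mailbox server):

    vol create exch_db1 aggr1 600g
    lun create -s 500g -t windows_2008 /vol/exch_db1/db1.lun
    igroup create -i -t windows exch_mbx1 iqn.1991-05.com.microsoft:mbx1
    lun map /vol/exch_db1/db1.lun exch_mbx1

On filer2 (LUN for the passive copy, in a different aggregate, mapped to the second mailbox server):

    vol create exch_db1_copy aggr2 600g
    lun create -s 500g -t windows_2008 /vol/exch_db1_copy/db1_copy.lun
    igroup create -i -t windows exch_mbx2 iqn.1991-05.com.microsoft:mbx2
    lun map /vol/exch_db1_copy/db1_copy.lun exch_mbx2

Each mailbox server then sees its own LUN, and Exchange handles the replication between the database copies, as you said.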

Regards,

Radek

Re: Load balancing between Filers for LUNs?

Thank you, Radek.