VMware Solutions Discussions

Igroup configuration for typical vSphere cluster?

BMETTEKENTLAW

Hi All,

What is the recommended igroup configuration for multipathed vSphere 5.x in a typical HA cluster?

Here is some information on my expected environment:

- all storage for vSphere will be FC-attached LUNs on a standard HA pair of FAS3220s (7-mode)

- my environment will have three vSphere 5.1U1 or 5.5 hosts

- each host will have two single-port FC adapters named FC0 and FC1

In this environment, which of the three options I have listed below would be the recommended way to configure our igroups?

Our expected IO load is fairly low.  We are therefore leaning toward implementing a fixed PSP for the sake of simplicity.  Would the recommended approach of how we configure igroups be different if we wanted to implement a round-robin PSP instead?

Option A

One igroup that includes both the FC0 and the FC1 WWPNs from each host (six total members)

Map each of the vSphere datastore LUNs to this single igroup

Option B

An "FC0 igroup" that includes just the FC0 WWPNs from each host (three total members)

An "FC1 igroup" that includes just the FC1 WWPNs from each host (three total members)

Map each of the vSphere datastore LUNs to both the FC0 and FC1 igroups

Option C

A "Host-A igroup" that includes both of the WWPNs from my 'HostA' system (two total members)

A "Host-B igroup" that includes both of the WWPNs from my 'HostB' system (two total members)

A "Host-C igroup" that includes both of the WWPNs from my 'HostC' system (two total members)

Map each of the vSphere datastore LUNs to all three of the above igroups
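For concreteness, here is a rough sketch of what Options A and C would look like as 7-mode CLI commands. The igroup names, WWPNs, and LUN path below are placeholders, not values from my actual environment:

```shell
# Option A: one igroup holding all six initiator WWPNs (placeholder WWPNs)
igroup create -f -t vmware vsphere_all \
    10:00:00:00:c9:aa:bb:01 10:00:00:00:c9:aa:bb:02 \
    10:00:00:00:c9:aa:bb:03 10:00:00:00:c9:aa:bb:04 \
    10:00:00:00:c9:aa:bb:05 10:00:00:00:c9:aa:bb:06
lun map /vol/vmfs_vol/datastore1 vsphere_all 0

# Option C: one igroup per host, each with that host's two WWPNs
igroup create -f -t vmware host_a 10:00:00:00:c9:aa:bb:01 10:00:00:00:c9:aa:bb:02
igroup create -f -t vmware host_b 10:00:00:00:c9:aa:bb:03 10:00:00:00:c9:aa:bb:04
igroup create -f -t vmware host_c 10:00:00:00:c9:aa:bb:05 10:00:00:00:c9:aa:bb:06
# Use the same LUN ID in every mapping so all hosts see the LUN consistently
lun map /vol/vmfs_vol/datastore1 host_a 0
lun map /vol/vmfs_vol/datastore1 host_b 0
lun map /vol/vmfs_vol/datastore1 host_c 0
```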

1 ACCEPTED SOLUTION

mikhailf

Go with Option A; there's no reason to bother with the other options. You'd be better off using ALUA with Round Robin (don't forget to enable ALUA on the igroup).
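On 7-mode, ALUA is a per-igroup attribute. A sketch with a placeholder igroup name (assumes the igroup already exists with ostype vmware):

```shell
# Enable ALUA on the igroup (placeholder name)
igroup set vsphere_all alua yes

# Verify the attribute took effect
igroup show -v vsphere_all
```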


3 REPLIES


JASON_PARKINS

You've got some pros and cons with each setup.

Adding a LUN to a single igroup requires minimal effort: you're basically mapping the LUN to that one igroup, and the same goes for removing it.

The difficult part is LUN visibility: it's not easy to control which host can see a given LUN when all of the hosts are in a single igroup. On the other hand, you're almost assured that all of your hosts will be able to see all of your LUNs. It's a toss-up either way, and I'd recommend going with whatever you feel comfortable with.

If your environment is continually growing, management of a single igroup with a large number of LUNs can become a bit tedious. In that case, I'd recommend host-specific mappings (your Option C). Keep in mind, though, that with multiple igroups you have to be more diligent about LUN assignment (i.e., be careful not to map a LUN to the wrong group); this problem doesn't exist with a single igroup. It's not an overly complicated process, but you do need to make sure that new LUNs are mapped to the correct igroup.

My recommendation: for smaller environments, use a single igroup. For a larger setup with numerous hosts and numerous LUNs, you'd probably want to go with multiple initiator groups.

For the PSP, if you want simplicity, you'd want ALUA with RR; a Fixed path requires a bit of tuning to run optimally. (Keep in mind that if you ever set up a VM with MSCS, you'll need to use MRU or Fixed in that case, as RR isn't supported.)
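For reference, on ESXi 5.x you can make Round Robin the default PSP for devices claimed by the ALUA SATP, or set it per device (the naa ID below is a placeholder):

```shell
# Make Round Robin the default PSP for all devices claimed by the ALUA SATP
esxcli storage nmp satp set --satp=VMW_SATP_ALUA --default-psp=VMW_PSP_RR

# Or set it per device (placeholder naa ID)
esxcli storage nmp device set --device naa.60a98000486e2f64 --psp=VMW_PSP_RR

# Check which SATP/PSP each device is currently using
esxcli storage nmp device list
```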

Hope this helps!

BMETTEKENTLAW

Thanks for the advice.  We have indeed decided to use a single initiator group for the vSphere cluster.

Regarding our desire to use the Fixed PSP: we saw this thread https://communities.netapp.com/message/109734 and some others outside of these communities that have made us quite gun-shy about attempting to use RR. Our understanding of our IO requirements has led us to the conclusion that they are fairly low and can be satisfied by a single 8 Gb FC link from each host.
