ONTAP Hardware

Managing fan-in/fan-out ratio

tyrone_owen_1

Can anyone give me some tips on how to manage Fibre Channel fan-in and fan-out ratios, please?

Say I have 100 hosts, each with two 8Gbps initiator ports, connected to a clustered storage array with 8 x 8Gbps FC target ports per node (a total of 16 ports in the cluster), using two switch fabrics for resilience (A and B fabrics). How would I balance the load across the target ports? Would I zone so that there is 1 initiator port and 16 target ports, then use MPIO to balance across the ports? Or would I use just a subset of the target ports? If the latter, how do you ensure you have balanced the hosts across the target ports - do you keep a spreadsheet, or something a bit more sophisticated?
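
For reference, the raw numbers here work out as below - simple arithmetic, nothing vendor-specific:

```python
# Back-of-the-envelope ratios, using only the numbers from the question above.
hosts = 100
initiators_per_host = 2          # one per fabric
target_ports = 16                # 8 Gbps each, across the whole cluster

initiator_ports = hosts * initiators_per_host      # 200
fan_out = initiator_ports / target_ports           # initiator ports per target port

print(f"{fan_out:.1f} initiator ports per target port")   # 12.5
# Since initiators and targets are both 8 Gbps, worst-case bandwidth
# oversubscription at line rate is also 12.5:1.
```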

Any value you can add outside these questions would be appreciated.

Thanks

12 REPLIES

tyrone_owen_1

I'm guessing this must be a dumb question, given the lack of responses. My background is IP storage, and I get conflicting information on how best to set up FC initiator-to-target port ratios, which is why I'm asking. I also can't find a definitive answer in the NetApp docs - if someone can find a link I'd appreciate it.

BESTFRIENDQUOTES

The fan-out ratio is the number of hosts connected to a port on a storage-area network array.

Many methods have been used to determine the optimum number of hosts per storage port, but in my experience there are no hard and fast rules. My recommendation would be to assess the throughput of each host you want to connect to a particular port, determine the maximum throughput of that port, and add hosts to the port so that the total throughput of all hosts is slightly higher than the throughput of the port.
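
To make that concrete, here is a minimal sketch of that sizing rule; the host throughput figures are made-up placeholders, and the 20% headroom is just an example allowance:

```python
# Greedy port-filling sketch: add hosts (biggest consumers first) until the
# estimated total demand would exceed the port's throughput plus headroom.
port_max_gbps = 8.0
headroom = 1.2        # let total host demand slightly exceed the port's throughput

# (name, estimated peak throughput in Gbps) - hypothetical values
hosts = [("host01", 2.5), ("host02", 1.8), ("host03", 3.0),
         ("host04", 1.2), ("host05", 2.0), ("host06", 0.9)]

assigned, total = [], 0.0
for name, gbps in sorted(hosts, key=lambda h: -h[1]):
    if total + gbps <= port_max_gbps * headroom:
        assigned.append(name)
        total += gbps

print(f"hosts on this port: {assigned}")
print(f"estimated demand: {total:.1f} Gbps against an {port_max_gbps:.0f} Gbps port")
```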

It is very important to ensure that comprehensive port utilisation statistics are available to detect any periods when the port is heavily utilised and could be causing a performance bottleneck.

tyrone_owen_1

Thanks, Mickey.

How many target ports would you assign to each initiator port? For example:

1. One-to-one relationship, i.e. one initiator port to one target port?

2. One initiator port to two target ports on the same storage node?

3. One initiator port to two target ports, i.e. one target port from each clustered target node?

4. One initiator port to four target ports, i.e. two target ports from each clustered target node?

5. One initiator port to the maximum number of available target ports?

Once again, apologies if these are basic questions, but I can't seem to find the answers on the NOW site.

Any links you can provide would be helpful

Thanks

tyrone_owen_1

Anyone please?

tyrone_owen_1

I'm guessing this is either a dumb question or a question that doesn't have a straightforward answer - I'll be happy with a link!

tyrone_owen_1

I guess all I'm asking is how many target ports you would present to an initiator port - as many as are available, one from each clustered head, etc.?

tyrone_owen_1

Anyone please?

zhjuve_bower

There is no ideal predictive technique. It's an iterative process: create a basic rule, then amend it as necessary.

As already said, find out the load each server will generate (or is supposed to generate). Also remember redundancy: you should plan different zoning for quad-port and dual-port target adapters. With just 16 ports it is quite easy, and it depends mostly on your redundancy requirements, because you don't have that much bandwidth to share and only 100 hosts.

As a first iteration (assuming two-port target HBAs):

* Balance across the available target ports, bearing in mind that in a takeover all load will be served by one head (either leave reserved capacity or accept the risk). Here that would be something like the first 12-13 hosts, each with 2 initiator ports, zoned to both heads' ports 0a/1a and 0b/1b respectively for each initiator port - see the sketch below.
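
One way to read that rule as a script - a sketch only, with head and port names taken from the example above:

```python
# On each fabric a host's initiator is zoned to a group of target ports
# containing one port from each head, and hosts rotate across the available
# groups so the load spreads evenly (and survives a takeover).
groups_a = [("head1:0a", "head2:0a"), ("head1:1a", "head2:1a")]
groups_b = [("head1:0b", "head2:0b"), ("head1:1b", "head2:1b")]

hosts = [f"host{n:03d}" for n in range(1, 101)]

zoning = {h: {"fabric_a": groups_a[i % len(groups_a)],
              "fabric_b": groups_b[i % len(groups_b)]}
          for i, h in enumerate(hosts)}

print(zoning["host001"])
# {'fabric_a': ('head1:0a', 'head2:0a'), 'fabric_b': ('head1:0b', 'head2:0b')}
```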

hawkstar1

Best practice is one-to-one zoning, i.e. one target and one initiator per zone. I have over 10,000 Fibre Channel ports globally, and what we do is create two zones per initiator to storage targets on each fabric. Each fabric has multiple connections to storage devices, and we balance which target ports to use based on both how many initiators are zoned to a particular storage port and the load on that port. This varies port to port with the server connections and server activity, so I might have several ports with 30+ initiators zoned to them and some ports with only a few servers zoned to them, because those servers are busier boxes as far as load goes. With this setup each server has 4 logical paths to the same storage, and I have not seen many servers that can drive the amount of I/O that 4 x 8Gb paths could handle, or even 2 x 8Gb physical connections; but if needed, just add more physical connections on each fabric with additional target zoning.
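
A rough sketch of that port-selection logic; the port names, initiator counts, and utilisation figures are hypothetical, and in practice both inputs would come from your zoning records and port performance stats:

```python
# Pick the target port for the next initiator: fewest zoned initiators,
# weighted by observed utilisation (crude weighting - tune to taste).
ports = {
    # port: (initiators already zoned, average utilisation 0..1)
    "ctrl1:0a": (30, 0.70),
    "ctrl1:0b": (12, 0.20),
    "ctrl2:0a": (25, 0.55),
    "ctrl2:0b": (8, 0.35),
}

def score(port):
    zoned, util = ports[port]
    return zoned * (1 + util)

best = min(ports, key=score)
print(f"zone the next initiator to {best}")   # ctrl2:0b
```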

tyrone_owen_1

Thanks for the replies, guys - I'd written this thread off.

Finding out the load each server will generate is the golden ticket, but in my limited experience this is rarely known for new implementations.

NetApp seems to recommend the single-initiator, multiple-target approach to zoning, but I understand the reasons why you would use single-to-single, Hawkstar1.

What I think I can see from both your answers is that, at a minimum, a single initiator is mapped to two target ports from each controller in a cluster, and you can always add more if you need to - is that a fair summary?

Thanks very much

tyrone_owen_1

I spoke with a NetApp PS consultant and he suggested the following:

Alias the filers' ports like this:

H3-BSPSANMC1-1-2a

H3-BSPSANMC1-1-2b

H3-CTCSANMC1-1-2a

H3-CTCSANMC1-1-2b

Then the server ports like this:

Server1-1a

Server1-1b

Then create your zones like so:

Zone-Server1-1a = H3-BSPSANMC1-1-2a, H3-CTCSANMC1-1-2a, Server1-1a

Zone-Server1-1b = H3-BSPSANMC1-1-2b, H3-CTCSANMC1-1-2b, Server1-1b
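
If you want to script that scheme, here is a minimal sketch that generates the zone membership for any server name - it only prints the definitions; pushing them to the switch is up to you:

```python
# Reproduces the consultant's scheme: one zone per host initiator, each zone
# containing the matching 'a' or 'b' port alias from both filers.
filer_ports = {
    "a": ["H3-BSPSANMC1-1-2a", "H3-CTCSANMC1-1-2a"],
    "b": ["H3-BSPSANMC1-1-2b", "H3-CTCSANMC1-1-2b"],
}

def zones_for(server):
    return {f"Zone-{server}-1{side}": filer_ports[side] + [f"{server}-1{side}"]
            for side in ("a", "b")}

for name, members in zones_for("Server1").items():
    print(f"{name} = {', '.join(members)}")
# Zone-Server1-1a = H3-BSPSANMC1-1-2a, H3-CTCSANMC1-1-2a, Server1-1a
# Zone-Server1-1b = H3-BSPSANMC1-1-2b, H3-CTCSANMC1-1-2b, Server1-1b
```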

Hope this helps.

bruce_breidall

Hey there. I stumbled upon this thread, and it matches the exact scenario we have and are trying to confirm. It seems to me that the topic of fanning at the HBA level is foreign to the NetApp community, because I thought you described the scenario very well, but nobody seems to understand what you are trying to confirm. This practice is common with HDS and EMC; in fact, I would recommend a minimum of 2 storage target ports (FAs) to one host initiator (HBA), per fabric, in all cases. For performance reasons, in some cases, I have seen 4 FAs to 1 HBA per fabric. Does NetApp support this method of increasing queue depth at the host? I'm not really sure. Multipathing is the key reason for doing it. None of the NetApp documents spell it out, but they do imply 2-to-1: if you look at the FC Configuration Guide, the diagrams show a 2-to-1 layout, yet they don't comment on this aspect of FC storage best practice.
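
The queue-depth angle in rough numbers - all values are hypothetical, so check your HBA and array documentation for real limits:

```python
# More paths means more outstanding I/Os the host can keep in flight,
# which is the multipath rationale for 2 (or 4) FAs per HBA per fabric.
per_path_qdepth = 32      # example per-LUN queue depth setting on the host HBA
paths_per_fabric = 2      # 2 FAs per HBA, as suggested above
fabrics = 2

aggregate = per_path_qdepth * paths_per_fabric * fabrics
print(f"outstanding I/Os the host can queue to one LUN: {aggregate}")   # 128
```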

If I find an answer, which I am still trying to do, I will reply to this again.
Thanks.
