
Managing fan-in/fan-out ratio

Can anyone give me some tips on how to manage Fibre Channel fan-in and fan-out ratios, please?

Say I have 100 hosts, each with two 8Gbps initiator ports, connected to a clustered storage array with 8 x 8Gbps FC target ports on each node (a total of 16 ports in the cluster), using two switch fabrics for resilience (A and B fabrics). How would I balance the load across each of the target ports? Would I zone so that there is 1 initiator port and 16 target ports in each zone, then use MPIO to balance across the ports? Or would I just use a sub-set of the target ports? If the latter, how do you ensure the hosts are balanced across the target ports - do you keep a spreadsheet, or something a bit more sophisticated?
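For reference, the raw numbers work out roughly like this (a back-of-the-envelope sketch in Python, assuming the initiator and target ports are split evenly across the A and B fabrics):

    # Rough arithmetic for the setup above; assumes ports are split evenly
    # across the two fabrics and that every port runs at 8Gbps.
    hosts = 100
    initiator_ports_per_host = 2          # one per fabric
    target_ports_total = 16               # 8 per fabric
    fabrics = 2

    initiators_per_fabric = hosts * initiator_ports_per_host // fabrics   # 100
    targets_per_fabric = target_ports_total // fabrics                    # 8
    fan_in = initiators_per_fabric / targets_per_fabric                   # 12.5

    print(f"About {fan_in} initiator ports per target port per fabric")
    # With every port at 8Gbps this is also a 12.5:1 bandwidth oversubscription
    # at line rate, which is why the answer depends on actual host throughput.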

Any value you can add outside these questions would be appreciated.

Thanks

Re: Managing fan-in/fan-out ratio

I'm guessing this must be a dumb question because of the lack of responses. My background is IP storage and I get conflicting information on how best to set up FC initiator-to-target port ratios, which is why I ask the question. I also can't find a definitive answer in the NetApp docs - if someone can find a link, I'd appreciate it.

Re: Managing fan-in/fan-out ratio

The fan-out ratio is the number of hosts connected to a port on a storage-area network array.

Many methods have been used to determine the optimum number of hosts per storage port, but in my experience there are no hard and fast rules. My recommendation would be to assess the throughput of each host you want to connect to a particular port, determine the maximum throughput of that port, and add hosts to the port so that the total throughput of all hosts is slightly higher than the throughput of the port.
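As a rough illustration of that approach, here is a sketch (the host names, throughput figures, and port names below are made up for illustration, not a rule):

    # Sketch: spread hosts across target ports by estimated peak throughput,
    # using a greedy "least-loaded port first" assignment. Numbers are assumed.
    port_bandwidth_gbps = 8.0
    target_ports = ["0a", "0b", "1a", "1b"]            # hypothetical port names
    host_peak_gbps = {"host01": 1.2, "host02": 0.4, "host03": 2.0,
                      "host04": 0.8, "host05": 1.5}    # assumed measurements

    load = {p: 0.0 for p in target_ports}
    assignment = {}
    for host, gbps in sorted(host_peak_gbps.items(), key=lambda kv: -kv[1]):
        port = min(load, key=load.get)                 # port with the least load so far
        assignment[host] = port
        load[port] += gbps

    for port in target_ports:
        print(port, f"{load[port]:.1f} Gbps aggregate peak "
                    f"({load[port] / port_bandwidth_gbps:.0%} of line rate)")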

It is very important to ensure that comprehensive port utilisation statistics are available to detect any periods when the port is heavily utilised and could be causing a performance bottleneck.
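For example, once per-port utilisation samples are available from whatever monitoring you use, flagging the busy periods could be as simple as this sketch (the sample data and the 80% threshold are assumptions):

    # Sketch: flag sample intervals where a target port exceeds a utilisation threshold.
    # The samples would come from your switch/array monitoring; these are made up.
    samples = [("09:00", 0.35), ("09:05", 0.62), ("09:10", 0.91),
               ("09:15", 0.88), ("09:20", 0.40)]        # (time, fraction of 8Gbps used)
    threshold = 0.80                                    # assumed "heavily utilised" mark

    busy = [(t, u) for t, u in samples if u >= threshold]
    for t, u in busy:
        print(f"{t}: port at {u:.0%} - potential bottleneck, consider rebalancing hosts")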

Re: Managing fan-in/fan-out ratio

Thanks, Mickey.

How many target ports would you assign to each initiator port? For example:

1. One-to-one relationship, i.e. one initiator port to one target port?

2. One initiator port to two target ports on the same storage node?

3. One initiator port to two target ports, i.e. one target port from each clustered target node?

4. One initiator port to four target ports, i.e. two target ports from each clustered target node?

5. One initiator port to the maximum number of available target ports?

Once again, apologies if these are basic questions, but I can't seem to find the answers on the NOW site.

Any links you can provide would be helpful.

Thanks

Re: Managing fan-in/fan-out ratio

Anyone please?

Re: Managing fan-in/fan-out ratio

I'm guessing this is either a dumb question or a question which doesn't have a straightforward answer - I'll be happy with a link!

Re: Managing fan-in/fan-out ratio

I guess all I'm asking is how many target ports you would present to an initiator port - as many as are available, one from each clustered head, etc.?

Re: Managing fan-in/fan-out ratio

Anyone please?

Re: Managing fan-in/fan-out ratio

There is no ideal predictive technique. It is an iterative process: create a basic rule and then amend it if necessary.

As already said, find out the load each server will generate (or is supposed to generate). Also remember redundancy: you should plan different zoning for quad-port and dual-port target adapters. For just 16 ports it is quite easy, and it depends mostly on your redundancy requirements, because you don't have that much bandwidth to share and only 100 hosts.

As a first iteration (assuming two-port target HBAs):

* balance across the available target ports, bearing in mind that in case of a takeover all the load will be served by one head (either leave reserved capacity or accept the risk). Here it would be something like: the first 12-13 hosts have their fabric-A initiator port zoned to both heads' 0a/1a ports and their fabric-B initiator port zoned to both heads' 0b/1b ports, and so on across the remaining port pairs for the rest of the hosts (see the sketch below).
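A quick sketch of that first-iteration grouping (the port names and the number of port pairs are assumptions based on the description above):

    # Sketch: contiguous blocks of hosts assigned to target-port pairs on each fabric.
    # Port names and the number of pairs are assumed for illustration.
    hosts = [f"host{n:03d}" for n in range(1, 101)]
    # Each entry: (fabric A targets, fabric B targets) for one group of hosts.
    port_groups = [
        (("head1:0a", "head2:1a"), ("head1:0b", "head2:1b")),
        (("head1:0c", "head2:1c"), ("head1:0d", "head2:1d")),
        # ... add one entry per remaining pair of target ports
    ]

    group_size = -(-len(hosts) // len(port_groups))     # ceiling division; ~12-13 with 8 groups
    zoning = {}
    for i, host in enumerate(hosts):
        fabric_a_targets, fabric_b_targets = port_groups[i // group_size]
        zoning[host] = {"fabric_A": fabric_a_targets, "fabric_B": fabric_b_targets}

    print(zoning["host001"])    # first block -> both heads' 0a/1a on fabric A, 0b/1b on fabric B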

Re: Managing fan-in/fan-out ratio

Best practice is one-to-one zoning, i.e. one target and one initiator per zone. I have over 10,000 Fibre Channel ports globally, and what we do is create two zones per initiator to storage targets on each fabric. Each fabric has multiple connections to the storage devices, and we balance which target ports to use based on both how many initiators are zoned to a particular storage port and the load on that port. This varies port to port based on the server connections and server activity, so I might have several ports with 30+ initiators zoned to them and some ports with only a few servers zoned to them, because those are busier boxes as far as load goes.

With this setup each server has 4 logical paths to the same storage. I have not seen too many servers that can handle the amount of I/O that four 8Gb paths (or even two 8Gb physical connections) could handle, but if needed you just add more physical connections on each fabric with additional target zoning.
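A minimal sketch of what that single-initiator/single-target zoning works out to for one host (the port and initiator names are placeholders, not real WWPNs):

    # Sketch: single-initiator, single-target zones - two zones per initiator per fabric,
    # giving the host 4 logical paths. All names below are placeholders.
    host = "host001"
    initiators = {"A": f"{host}_hba0", "B": f"{host}_hba1"}         # one initiator per fabric
    targets = {"A": ["array_port_0a", "array_port_1a"],             # two targets per fabric,
               "B": ["array_port_0b", "array_port_1b"]}             # chosen by zone count and load

    zones = []
    for fabric in ("A", "B"):
        for target in targets[fabric]:
            zones.append((fabric, f"z_{host}_{target}", [initiators[fabric], target]))

    for fabric, name, members in zones:
        print(f"fabric {fabric}: zone {name} = {members}")
    # 2 zones x 2 fabrics = 4 logical paths for this host, as described above.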