QLogic Execution Throttle Setting

I wanted to ask how some of you arrive at the optimal setting for the QLogic execution throttle (and, to a lesser extent, the Windows Storport driver queue depth). Our environment is an active/active FAS3170 cluster. We have two 8 Gb FCP target ports on each filer, so technically four targets (2 active / 2 partner), and each host has a dual-port HBA, so two 8 Gb initiators per host.

For hosts / initiator machines, we have 19 VMware hosts, 10 Windows hosts, 2 RHEL hosts, and 1 Solaris machine, for a total of 32 hosts, or 64 initiators.

I've spoken to some NetApp engineers as well as some of our vendors and seem to get different answers depending on who you ask. Here is how the NetApp engineer broke it down:

Max queue depth of the controller (i.e., a single target port) is 1720. It doesn't matter that we have two targets, because you want to be able to survive the loss of one target.

1720 / 64 = 26 Execution Throttle
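The engineer's sizing math can be sketched as a one-liner. All the numbers here come from the thread above; the helper function name is just illustrative.

```python
# Sketch of the NetApp engineer's sizing math from this thread.
# Numbers (1720, 64) are from the post; the function name is mine.

def per_initiator_throttle(target_queue_depth: int, initiators: int) -> int:
    """Divide one target port's queue depth evenly across all initiators.

    Sizing against a single target port (not the sum of both) means the
    fabric still fits within the queue budget after losing one target.
    """
    return target_queue_depth // initiators

# 32 hosts, each with a dual-port HBA -> 64 initiators
throttle = per_initiator_throttle(1720, 64)
print(throttle)  # 26 (1720 / 64 = 26.875, rounded down)
```

Rounding down keeps the aggregate outstanding-command count at or below the port's limit even when every initiator is busy at once.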

Any feedback would be helpful, as we're looking to go through each machine and set the optimal value for the execution throttle. Thanks!

Re: QLogic Execution Throttle Setting

Queue depth on QLogic (if you really mean queue depth, not execution throttle) is actually per LUN, so what is relevant is the number of LUNs each host sees, not directly the number of hosts. That is, if you expect all hosts to be active at full speed at the same time.
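To make the per-LUN point concrete, here is a hedged sketch of how the division changes when the limit applies per LUN rather than per host. The LUN count below is a made-up illustrative value, not something stated in this thread.

```python
# Sketch of the per-LUN variant of the sizing math: if the HBA enforces
# its queue depth per LUN, the target port's budget divides across all
# host/LUN combinations, not just across hosts.
# The LUN count (4) is an assumed example value, not from the thread.

def per_lun_queue_depth(target_queue_depth: int,
                        hosts: int,
                        luns_per_host: int) -> int:
    """Worst case: every host drives every LUN at full speed at once."""
    return target_queue_depth // (hosts * luns_per_host)

# Example: 32 hosts each seeing 4 LUNs on one 1720-deep target port
print(per_lun_queue_depth(1720, 32, 4))  # 13
```

The point of the exercise is only to see why the "right" number shrinks quickly as LUN counts grow, which is one reason the default is usually left alone unless queue-full conditions actually show up.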

And that is not from an "optimal" point of view, but simply to prevent error responses/delays due to a queue-full condition. Personally, I'd leave it at the default unless you have real evidence that queue depth is causing issues.