Need to know if there are any current whitepapers revolving around NetApp Dynamic Queue Depth Management and how it relates to queue depth sizing in VMware.
FC ESX environment running ESX 3.5 U4: a 10-node ESX server cluster with approximately 200 VMs, QLogic HBAs, and a FAS3070 cluster.
The last blog I recall was from Nick Triantos back in 2007, and I also have the old 5/2007 queue depth paper. The reason I'm asking is that the new vSphere 4.0 NetApp best practice guide recommends a queue depth of 64 on the ESX server. We typically set the HBAs on our Windows hosts to 128 and let the filer manage queue depth via its Dynamic Queue Depth Management. What are the specific reasons for setting queue depth to 64 on the ESX host when the filer performs all of the queue depth management via Dynamic Queue Depth Management?
I need current best-practice direction on all of this.
Well, as you can see, both docs are dated relative to the vSphere release. That said, there's a reason VMware and we make the 64-command recommendation: increasing the queue depth beyond 64 for some HBAs (e.g., QLogic) in an ESX environment does not make a bit of difference, because the maximum parallel execution of SCSI operations is 64 (the Execution Throttle).
Thanks Nick. So for clarification: the 64-operation maximum on parallel SCSI execution applies only to an ESX environment? In a non-ESX environment, say a Windows host with QLogic HBAs, could you set the queue depth to 128 and let the filer manage queue depth via its Dynamic Queue Depth Management?
VMware ESX uses two Fibre Channel drivers: Emulex and QLogic. Both drivers support changing the HBA queue depth to improve performance. For Emulex, the default driver queue setting is 30. For QLogic driver version 6.04, the default is 16, and for versions 6.07 and 7.x (in VMware Infrastructure 3), the default is 32. Although Emulex has a maximum of 128 queue tags (compared to 255 for QLogic), increasing the queue depth in either driver beyond 64 has little effect, because the maximum parallel execution (queue depth) of SCSI operations is 64.
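For anyone looking for the mechanics, here is a sketch of how the QLogic queue depth is typically changed from the ESX 3.x service console. The exact module name depends on the installed driver version, so verify it first; treat the names below as an example, not gospel.

```shell
# List loaded vmkernel modules to find the QLogic driver name
# (e.g. qla2300_707_vmw on many ESX 3.5 installs).
vmkload_mod -l | grep qla

# Set the QLogic HBA maximum queue depth to 64, matching the
# best-practice recommendation discussed above.
esxcfg-module -s ql2xmaxqdepth=64 qla2300_707_vmw

# Rebuild the boot configuration so the option persists, then reboot.
esxcfg-boot -b
```

Remember that Disk.SchedNumReqOutstanding should be set to the same value if multiple VMs share the LUN, or the per-VM limit will override the HBA setting.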
ESX 3.5 Update 4 introduced an adaptive queue depth algorithm on the ESX host. By default, this algorithm is disabled.
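For reference, enabling it is done through two advanced settings from the service console. The values below are the commonly cited starting points, not a tuned recommendation; check your array vendor's guidance before changing them.

```shell
# Enable adaptive queue depth: sample the last 32 I/O completions,
# and start throttling when 4 of them return QUEUE FULL / BUSY.
esxcfg-advcfg -s 32 /Disk/QFullSampleSize
esxcfg-advcfg -s 4  /Disk/QFullThreshold

# Setting QFullSampleSize back to 0 disables the feature (the default).
```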
So if the setting is disabled, what should the queue depth be for the HBA?
From what I understand of the feature, it helps avoid congestion on a single LUN from multiple ESX hosts. Also, all hosts must have the feature enabled in order to use it. Say X hosts have it enabled and Y hosts don't: the Y hosts may consume the resources/slots on the array that are freed up by the adaptive hosts.
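To make the behavior concrete, here is a minimal Python sketch of the throttling idea: when the array returns QUEUE FULL or BUSY, the host halves its LUN queue depth, and it ramps back up one command at a time once congestion clears. The class name, constants, and exact halving/ramp-up rule are my own simplification for illustration, not VMware's actual implementation (which samples completions over a window before reacting).

```python
class AdaptiveLunQueue:
    """Simplified model of per-LUN adaptive queue-depth throttling."""

    def __init__(self, max_depth=64, min_depth=1):
        self.max_depth = max_depth
        self.min_depth = min_depth
        self.depth = max_depth  # currently permitted outstanding commands

    def on_completion(self, status):
        """React to the SCSI status of a completed command."""
        if status in ("QUEUE_FULL", "BUSY"):
            # Array signalled congestion: cut the queue depth in half.
            self.depth = max(self.min_depth, self.depth // 2)
        else:
            # No congestion: ramp back up slowly, one command at a time.
            self.depth = min(self.max_depth, self.depth + 1)


# Two congested completions drop the depth from 64 to 16; it then
# recovers gradually as normal completions come back.
q = AdaptiveLunQueue(max_depth=64)
q.on_completion("QUEUE_FULL")
q.on_completion("QUEUE_FULL")
print(q.depth)  # 16
for _ in range(48):
    q.on_completion("GOOD")
print(q.depth)  # back to 64
```

This also shows why mixed clusters are a problem: a non-adaptive host never halves its depth, so it keeps grabbing the array slots the adaptive hosts just gave up.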
I also see that 3PAR is the vendor that pushed this enhancement for ESX 3.5 U4. Is that because 3PAR is the only one that has issues supporting a queue depth larger than 32 for multiple ESX hosts and virtual machines?
"This algorithm was well tested against 3PAR arrays and hence limiting the configuration to 3PAR arrays only for now."
That being said: if I'm guaranteeing a queue depth of 128 for high-I/O workloads because I have sized the NetApp with the correct number of HBAs for each ESX server, would I even need to employ the adaptive queue depth algorithm on the ESX host for NetApp?