We have been having performance problems since upgrading to new filer heads running at 4Gb; our old heads ran at 2Gb.
HP c-Class c7000 blade enclosure
10 x HP ProLiant BL465c G5/G6/G7 blades running VMware ESX 4.0 U2
Each blade has 2 x QLogic QMH2462 HBAs. HBAs running latest firmware as per HP's firmware update DVD
2 x NetApp FAS3140 heads running ONTAP 8.0.1P3 7-Mode, Active/Active
NetApp Virtual Storage Console installed and recommended settings implemented on all ESX hosts
2 x Brocade 200E and 2 x Brocade 300 FC switches, all running FOS firmware 6.2.2e
Setting the FC switch port for an ESX blade to 4Gb causes very high disk latency (as seen within VMware) and rapidly accumulating errors on the FC switch port.
Changing the port speed back to 2Gb makes all the problems disappear.
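For anyone wanting to reproduce the test: on Brocade FOS the speed change and the error counters can be driven from the switch CLI. A minimal sketch, assuming FOS 6.x syntax; port number 5 is just an example, substitute the port your blade's HBA logs into:

```shell
# Lock the example port (5) to 2Gb; speed values: 0 = auto, 1/2/4 = fixed Gb
portcfgspeed 5 2

# Clear the port statistics so any new errors stand out
statsclear

# Watch the per-port error counters (crc err, enc out, disc c3, etc.)
porterrshow
```

With the port at 4Gb we saw the counters climb within minutes; at 2Gb they stayed at zero.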
Rebuilding a blade with Windows Server 2008 R2 shows exactly the same issue (works at 2Gb, errors at 4Gb), so it is not specific to ESX.
Physical Dell Windows servers running Exchange at 4Gb on the same FC switches do not have problems. These servers are not in our c-Class enclosure and they have different HBAs.
We only started seeing this problem around the time the NetApp filer heads were replaced and set to 4Gb; the old filer heads ran at 2Gb. Until the heads were changed, all the ESX blades could run with their FC switch ports set to 4Gb with no problems.
So we think the problem is either the QLogic HBAs, an issue with the c-Class enclosure, or some combination of the two. We are going to borrow an Emulex HBA to test with in one of the c-Class blades; that should tell us whether the QLogic HBAs are at fault.
Has anyone else seen a similar issue before? Any ideas on what to test next?
We finally found the root cause of this: the FC pass-through modules in our HP c-Class enclosure. HP sent us a replacement module and it works like a dream. No more errors logged on the FC switch, and performance is great at 4Gb.
You might also want to upgrade Brocade FOS to 6.3.2b; 6.2.2 is a bit buggy. Keeping things updated on the ONTAP 8.0.x side will probably be a regular activity as well, as it seems to be as buggy as its twin on the 7.3.x tag.
Well, there will obviously be a brief interruption on that port while the link re-establishes at the new speed. Overall, though, if you have multiple paths from the hosts to the filer and are using multipathing software, the change should be transparent.
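Before bouncing a port speed, it is worth confirming each LUN really does have more than one live path. A quick sketch from the ESX 4.0 service console (these are standard ESX commands, but check them against your build; device names will differ):

```shell
# List every path with its state (active/dead) per device
esxcfg-mpath -l

# Compact one-line-per-path summary, easier to eyeball
esxcfg-mpath -b

# VSC normally configures this, but verify the path selection
# policy (e.g. round robin) actually in use for the NetApp LUNs
esxcli nmp device list
```

If every LUN shows at least two active paths across separate fabrics, a single port renegotiating should not be visible to the guests.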