From my point of view it is a valid configuration and it should work. Is the connection between the servers and the storage established? Are you able to send a vmkping? If yes, then the most common problems are: wrong volume security style (must be unix), a wrong or missing export policy (cDOT), or a wrong or missing export in /etc/exports (7-Mode).
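A quick way to check each of those, as a sketch: the IP 192.168.2.12, vmkernel port vmk0, and vserver name vs1 below are examples, so substitute your own:

```
# From the ESXi shell: confirm connectivity over the NFS vmkernel port
vmkping -I vmk0 192.168.2.12

# 7-Mode: check security style and the live export table
qtree status        # security style should show "unix"
exportfs            # the volume path must appear, exported to the ESX IPs

# cDOT: the same checks from the cluster shell
volume show -vserver vs1 -fields security-style
vserver export-policy rule show -vserver vs1
```

The export policy rule (cDOT) or the /etc/exports entry (7-Mode) needs to grant both rw and root access to the ESX host IPs, or the datastore mount will fail even though vmkping works.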
Direct attached is not ideal, but it should work. Do the links come up? If not, flip your tx/rx on one end. The next question is how everything is cabled.
For both hosts to see both volumes, you'll need each HP server cabled to both controllers. I'm assuming you have the SAS shelf owned by one controller and the SATA shelf by the other, correct? Let's assume that SAS is owned by STOR1A, SATA is owned by STOR1B, and everything is cabled as follows:
stor1a:e1a -> server1:eth0
stor1a:e1b -> server2:eth0
stor1b:e1a -> server1:eth1
stor1b:e1b -> server2:eth1
ESX on server 1 would then have access to two datastores on stor1a via 192.168.2.12 and one datastore on stor1b via 192.168.2.22.
ESX on server 2 would have access to the same datastores, but the two SAS ones would be via 192.168.2.13 and the SATA one would be via 192.168.2.23.
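Mounting that layout from the ESX shell would look something like the sketch below. The volume paths (/vol/sas_ds1 and so on) are made-up examples, so use your actual export paths:

```
# On server 1: SAS datastores via stor1a, SATA via stor1b
esxcfg-nas -a -o 192.168.2.12 -s /vol/sas_ds1  sas_ds1
esxcfg-nas -a -o 192.168.2.12 -s /vol/sas_ds2  sas_ds2
esxcfg-nas -a -o 192.168.2.22 -s /vol/sata_ds1 sata_ds1

# On server 2: same datastores, reached through the other interfaces
esxcfg-nas -a -o 192.168.2.13 -s /vol/sas_ds1  sas_ds1
esxcfg-nas -a -o 192.168.2.13 -s /vol/sas_ds2  sas_ds2
esxcfg-nas -a -o 192.168.2.23 -s /vol/sata_ds1 sata_ds1
```

Note the datastore names match on both hosts even though the NFS server IPs differ; that's expected here since each host reaches the controllers over its own direct links.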
Not having a switch isn't ideal, but I'm assuming the constraint is budgetary and you're working with what you have. Since you don't have switches, I'd be tempted to drop the single-mode ifgrp configuration in favour of setting "cf.takeover.on_network_interface_failure" to "on" and adding the "nfo" option to the ifconfig command for the e1[a,b] interfaces. That way you're always running on 10gig; if you lose a 10gig interface, the HA pair will simply fail over to the side where both interfaces are still good. The catch with this configuration is that rebooting an ESX host could induce an unexpected takeover. You can prevent that by disabling cf before rebooting ESX hosts.
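On 7-Mode that would look roughly like this (netmask is an assumption; repeat for e1b and on the partner node, and mirror the ifconfig line into /etc/rc so it survives a reboot):

```
# Enable takeover on network interface failure
options cf.takeover.on_network_interface_failure on

# Mark the 10gig data interface as negotiated-failover capable
ifconfig e1a 192.168.2.12 netmask 255.255.255.0 nfo

# Before a planned ESX host reboot, avoid a spurious takeover:
cf disable
# ...reboot the ESX host...
cf enable
```

Remember to run "cf enable" again afterwards, or you'll be left without HA protection.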
Anyway, let me know if section 1 above is of any help. If you still can't see your volumes, paste the output of the "exportfs" command from both nodes in your reply.