It looks to me like you're configured properly, which is great. Could you get me the following information:
Navigate to the queue directory of one of your DM devices, for example:
cd /sys/block/dm-12/queue/
Then grep all of the settings in that directory:
grep . *
Please send me the grep output.
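If you have many DM devices, the steps above can be done in one pass. This is a minimal sketch (dm-12 above is just an example name; the loop below simply visits every dm device present on the host):

```shell
# Collect the queue settings for every device-mapper device in one pass,
# instead of cd'ing into each /sys/block/dm-N/queue by hand.
for q in /sys/block/dm-*/queue; do
  [ -d "$q" ] || continue      # skip if no dm devices exist on this host
  echo "== $q =="
  grep . "$q"/* 2>/dev/null    # print each setting as file:value
done
echo "queue settings collected"
```

The output can be pasted directly into the reply; each line is prefixed with the file (setting) name, so nothing needs to be labeled by hand.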
Could you also send me a support bundle? If possible, collect one, discard it, run your workload again, and then collect a second bundle so we get a clean log. To collect the support bundle, open SANtricity System Manager in a browser and navigate to Support --> Support Center --> Diagnostics --> Collect Support Bundle.
We have a total of eight 32Gb ports and four 16Gb ports, and we are using all of them. Three servers are connected to the storage: two of them attached at 4 x 32Gb and the third at 4 x 16Gb.
Each of the three servers also has a 4 x 25GbE network connection.
We have a similar setup with different storage and are seeing 2x the performance, which makes me think the problem is purely with the storage and not with the application (GPFS), the Linux configuration, or the host/network connectivity.
Yes, the system is in production, but we can run short synthetic tests.
What kind of sequential streaming performance can we expect from an E5700 with 212 x 10TB drives? Note: the block size is fairly large (8MB).
These are regular 7200 RPM 10TB SATA drives. With 100% sequential reads at 8MB block size and ~15-20 concurrent IOs per LUN, we are seeing about 300MB/sec for a single (8+2) RAID6 LUN. We have 21 LUNs, so ~6.3GB/sec in aggregate.
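As a quick sanity check on those figures (all numbers are from this thread; the conclusion assumes a typical datasheet sequential rate of roughly 150-250MB/s for a 7200 RPM drive, which was not measured here):

```shell
# Back-of-the-envelope check of the reported throughput.
luns=21            # RAID6 (8+2) LUNs
per_lun=300        # MB/s observed per LUN, 100% sequential 8MB reads
data_drives=8      # data drives in an 8+2 RAID6 LUN

awk -v l="$luns" -v p="$per_lun" -v d="$data_drives" 'BEGIN {
  printf "aggregate:      %.1f GB/s\n", l * p / 1000   # 6.3 GB/s
  printf "per data drive: %.1f MB/s\n", p / d          # 37.5 MB/s
}'
```

At ~37.5MB/s per data drive, each disk is running well below what a 7200 RPM drive can stream on its own, which suggests the per-LUN ceiling is not the raw disk speed.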
How can I identify the bottleneck? It is not the HBAs, the servers, or the network.
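Since short synthetic tests are possible, one way to isolate the storage side is to run fio against a single LUN with a job that matches the workload described above (100% sequential reads, 8MB blocks, ~16 outstanding IOs). This is a sketch, not a prescribed procedure: it assumes fio is installed on the host, and /dev/dm-12 is a placeholder device name to be replaced with one of the actual multipath devices.

```ini
; seqread-one-lun.fio -- short sequential-read test against one LUN
; readonly guards against accidental writes on a production system
[global]
rw=read
bs=8M
iodepth=16
ioengine=libaio
direct=1
readonly=1
runtime=60
time_based=1

[one-lun]
filename=/dev/dm-12
```

Comparing the single-LUN result against runs with 2, 4, ... LUNs in parallel shows whether throughput scales with LUN count (pointing at a per-LUN/volume-group limit) or flattens early (pointing at a controller or port limit).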