What seems to be missing from the other post (I skimmed through it) is a key detail: the E-Series storage configuration.
Presumably you have mostly random access (with VMware clients), so IOPS are what matter, which also means that if your disks are not SSDs, you need a lot of them to get the IOPS you need.
For example, with 10+ NL-SAS disks in RAID6 you may get only 2-3K 4 kB IOPS.
If the SANtricity IOPS performance monitor shows 2-3K IOPS, then that's it: you're maxing out IOPS. At the same time, if you look at 10GigE bandwidth utilization, you'll probably discover it's low (3,000 IOPS x 4 kB = 12 MB/s).
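To make that arithmetic concrete, here's a back-of-envelope sketch in Python. The per-disk IOPS figure, the 70/30 read/write split, and the RAID6 write penalty are illustrative assumptions, not measurements from your array:

```python
# Two quick sanity checks (all figures illustrative):
# 1) why a maxed-out HDD group barely touches a 10GbE link, and
# 2) why random IOPS on NL-SAS need so many spindles.

def throughput_mb_s(iops: int, io_size_kb: int = 4) -> float:
    """Front-end bandwidth generated by small random IO."""
    return iops * io_size_kb / 1024

def spindles_needed(target_iops: int, read_ratio: float = 0.7,
                    per_disk_iops: int = 90, raid6_penalty: int = 6) -> float:
    """NL-SAS spindles needed for a target front-end IOPS, ignoring
    controller cache hits. Each random write on RAID6 costs roughly 6
    back-end IOs (read/modify/write of data plus two parities);
    reads cost 1."""
    backend_iops = target_iops * (read_ratio + (1 - read_ratio) * raid6_penalty)
    return backend_iops / per_disk_iops

print(f"{throughput_mb_s(3000):.1f} MB/s")       # ~11.7 MB/s, ~1% of a 10GbE link
print(f"{spindles_needed(10000):.0f} spindles")  # ~278 NL-SAS disks for 10K IOPS
```

The second number is the point: on spinning disks, random IOPS scale with spindle count, so getting serious IOPS out of HDDs quickly becomes more expensive than SSDs.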
It's not possible to add 2 SSDs to a disk group made of non-SSDs, so to increase IOPS you'd have the following choices:
- Add 2 SSDs as a new R1 disk group and turn them into read cache. This will help with reads as long as the write ratio is low (10-15%); if the write percentage is higher than that, it may not help much
- Add 2 SSDs as a new R1 disk group, create 1 or 2 volumes on it, create a new VMware datastore, and move busy VMs to these disks
- Add 5 or more SSDs and create a R5 disk group to get more capacity and more performance (same as the bullet above, but you'd get more usable space due to R5's lower overhead compared to R1; see the capacity sketch after this list)
- Add more HDDs (I wouldn't recommend this if you need a lot of IOPS; it's cheaper to buy SSDs for IOPS)
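To put numbers on the R1-vs-R5 overhead mentioned in the third option, a quick sketch; the 1.6 TB drive size is made up, and real SANtricity usable capacity will come out slightly lower due to reserved/metadata space:

```python
# Rough usable-capacity comparison for the SSD options above (illustrative).

def usable_tb(disks: int, disk_tb: float, raid: str) -> float:
    if raid == "R1":   # mirrored pair(s): half the raw capacity
        return disks * disk_tb / 2
    if raid == "R5":   # one disk's worth of capacity lost to parity
        return (disks - 1) * disk_tb
    raise ValueError(f"unsupported RAID level: {raid}")

print(f"2 x 1.6 TB in R1: {usable_tb(2, 1.6, 'R1'):.1f} TB usable")  # 1.6 TB (50% overhead)
print(f"5 x 1.6 TB in R5: {usable_tb(5, 1.6, 'R5'):.1f} TB usable")  # 6.4 TB (20% overhead)
```

So with 5 SSDs in R5 you pay 20% of raw capacity for protection instead of 50% with R1, and you spread IO over more devices.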
You can also monitor the performance from the VMware side. We also have a free way to get detailed metrics into Grafana (https://github.com/netapp/eseries-perf-analyzer/; requires a Linux VM with Docker inside plus some Docker skills).
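If you'd rather pull a few numbers yourself without standing up the whole Grafana stack, the underlying data comes from the SANtricity Web Services REST API. A minimal sketch follows; the proxy hostname and credentials are placeholders, and the endpoint path and field names are from memory, so verify them against the API reference that ships with your Web Services Proxy before relying on this:

```python
# Hypothetical minimal poll of per-volume stats from the SANtricity
# Web Services API. Endpoint and field names are assumptions; check
# your proxy's built-in API docs.
import requests

BASE = "https://webservices.example.com:8443/devmgr/v2"   # placeholder host
AUTH = ("monitor_user", "password")                       # read-only account

systems = requests.get(f"{BASE}/storage-systems", auth=AUTH, verify=False).json()
system_id = systems[0]["id"]

stats = requests.get(
    f"{BASE}/storage-systems/{system_id}/analysed-volume-statistics",
    auth=AUTH, verify=False).json()

for vol in stats:
    print(vol["volumeName"], round(vol["combinedIOps"]), "IOPS")
```

That gives you the same raw IOPS numbers the perf monitor shows, which you can cross-check against what VMware reports per datastore.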
What else you can do: maybe check https://kb.netapp.com/Advice_and_Troubleshooting/Data_Storage_Systems/E-Series_Storage_Array/Performance_Degradation_with_Data_Assurance_enabled_Volum... - I'm not sure whether this applies to your array. It may help in marginal ways, but if you have HDDs and are maxing out IOPS, it probably won't help enough.