Your understanding of ifgrp and LIFs/VLANs is correct.
However, I have a couple of comments on your current setup. First, your use of LIFs/VLANs over a 4-member ifgrp on one node and 4 physical ports on the other node is puzzling. Why not use ifgrps on both nodes? An ifgrp on the second node gives you the same link aggregation and port-failure protection you already have on the first, and keeps the two nodes symmetric for LIF failover.
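For reference, building the matching ifgrp, VLAN, and LIF home on the second node looks roughly like the sketch below. This is clustered Data ONTAP syntax, and node2, a0a, ports e0c-e0f, VLAN 100, vs1, and LIFNFS-2 are all placeholders for whatever your environment actually uses:

    network port ifgrp create -node node2 -ifgrp a0a -distr-func port -mode multimode_lacp
    network port ifgrp add-port -node node2 -ifgrp a0a -port e0c
    network port ifgrp add-port -node node2 -ifgrp a0a -port e0d
    network port ifgrp add-port -node node2 -ifgrp a0a -port e0e
    network port ifgrp add-port -node node2 -ifgrp a0a -port e0f
    network port vlan create -node node2 -vlan-name a0a-100
    network interface modify -vserver vs1 -lif LIFNFS-2 -home-node node2 -home-port a0a-100
    network interface revert -vserver vs1 -lif LIFNFS-2

The last two lines assume you rehome the existing LIF onto the new VLAN port rather than recreate it. Depending on your ONTAP version you may also need to add the new VLAN port to the appropriate failover group or broadcast domain before the modify will take.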
Second, DNS load balancing LIFNFS-1 and LIFNFS-2 seems reasonable enough, especially if the NFS clients coming into those LIFs are relatively transient.

For LIFVMWARE-1 and LIFVMWARE-2, though, I would be more deliberate about which esx host mounts off of which LIF. Assuming equal I/O resources on both nodes, with datastore1 and LIFVMWARE-1 on node1 and datastore2 and LIFVMWARE-2 on node2, I would have half of your esx hosts mount LIFVMWARE-1:datastore1 and the other half mount LIFVMWARE-2:datastore2. (I'm also assuming all your esx hosts carry roughly equal load.) By introducing DNS load balancing, you could end up with mounts of LIFVMWARE-1:datastore2 and LIFVMWARE-2:datastore1, which leads to indirect access to your volumes: the client's request comes in through node1's LIF and has to cross the cluster interconnect before it reaches the datastore on node2. That traversal costs extra CPU and adds latency. Just not good in general.

If your datastore volumes all live on one node, then just use the local LIF and keep the second LIF as a failover target. A proper sizing effort would have given you a good balance of CPU, disk, and network pipes so you wouldn't have to rely on contortions like this. If you've added a lot more disk shelves over time, make sure your network bandwidth has scaled up with them so the storage controller can keep up.
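If it helps, the direct-path layout is just a matter of pointing each half of the esx farm at the LIF that is local to its datastore, and you can check where things actually live from the cluster shell. The IPs, junction paths, and names below are made up for illustration:

    # on half of the esx hosts (datastore1 and LIFVMWARE-1 on node1)
    esxcli storage nfs add -H 192.168.10.11 -s /datastore1 -v datastore1
    # on the other half (datastore2 and LIFVMWARE-2 on node2)
    esxcli storage nfs add -H 192.168.10.12 -s /datastore2 -v datastore2

    # from the cluster shell, confirm each LIF sits on the same node as the volume it serves
    network interface show -vserver vs1 -lif LIFVMWARE-* -fields curr-node,curr-port
    volume show -vserver vs1 -volume datastore1,datastore2 -fields node

If curr-node for a LIF doesn't match the node reported for the volume its clients mount through it, those clients are taking the indirect path over the cluster interconnect.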