Seems like nowadays there are more benefits from going NAS when virtualizing than going SAN, since 10Gb Ethernet prices are similar to Fibre Channel while offering more benefits and simplicity (my humble opinion). I find myself involved more often than I would like in discussions with people defending Fibre Channel over Ethernet storage, and now FCoE shows up to make things even more complex (and expensive). Why not just make your life easier and go NFS? Or iSCSI if you want SAN?
I will repeat myself, as I just responded the same way in a different thread:
NFS is cool, but in most environments it is not enough - if SnapManager products are used, disks with application data would normally be provisioned as Raw Device Mappings (over iSCSI or FC). One corner case is SnapManager for SQL, which can handle VMDKs on NFS (and SnapManager for Oracle springs to mind, which can leverage NFS too).
I heard about Hyper-V supporting CIFS as VM storage as well, seems like everybody is going that way.
You can PXE boot your ESX servers (and most Linux/Unix servers out there) from NFS too. I have done this several times with CentOS and it works great. I have yet to try it with VMware, but it is supported AFAIK.
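For reference, a minimal PXELINUX menu entry for an NFS-rooted CentOS boot looks roughly like this - just a sketch, the server IP, export path, and kernel/initrd paths are placeholders for your own environment:

```
# /tftpboot/pxelinux.cfg/default - minimal sketch, all paths/IPs are examples
DEFAULT centos-nfs
LABEL centos-nfs
  KERNEL centos/vmlinuz
  APPEND initrd=centos/initrd.img root=/dev/nfs nfsroot=192.168.1.10:/export/centos ip=dhcp ro
```

The NFS server just needs to export that directory read-only to the boot subnet; DHCP hands out the TFTP server address and the rest is handled by pxelinux.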
There's a blog post from a guy named Chris Wolf about PXE booting ESXi servers. It is old, but the procedure hasn't changed much since it was written: http://www.chriswolf.com/?p=18
Are you talking about Auto Deploy? It has a few wrinkles in my opinion - it is 100% stateless, so e.g. you need to use Host Profiles to properly customize your ESX hosts once they boot, and forwarding logs is also beneficial (otherwise they are stored in memory and lost upon a reboot).
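On the log-forwarding point: the usual approach is to point each stateless host at a remote syslog target, e.g. with esxcli (the syslog hostname here is just a placeholder):

```
# Forward ESXi logs to a remote syslog server (hostname is an example)
esxcli system syslog config set --loghost='tcp://syslog.example.com:514'
esxcli system syslog reload

# Allow outbound syslog through the ESXi firewall
esxcli network firewall ruleset set --ruleset-id=syslog --enabled=true
```

You would typically bake this into the Host Profile so it is reapplied automatically every time a stateless host boots.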