Your question is really confusing, but somewhat intriguing as well.
The best answer I can think of is that any protocol used by a NAS must be "understood" by the client OS - this is where the CIFS and NFS protocols come from.
You can run CIFS and NFS on top of TCP/IP over optical cabling, if that makes any difference.
I also think the Enhanced Ethernet / Data Centre Bridging idea is basically about making Ethernet as "good" as Fibre Channel in certain areas (e.g. losslessness).
Agreed - there's some mixing of terms here. But NetApp natively supports both SAN and NAS protocols, and the intriguing part is the cool stuff that comes from running all of them concurrently - for example, being able to loop back to a LUN device over NFS, or to back up a LUN via NDMP. Best practice, though, is to use either NAS (CIFS, NFS, etc.) or SAN (FCP, iSCSI, FCoE) on a per-volume basis, but not both (it can be both, but that isn't recommended).
My concern is that NAS is slow compared to SAN. To improve that, if we use FC connectivity with TCP/IP on top (as you mentioned above), will that make any difference?
Barring some kind of experimental feature or hack I am not aware of, I reckon the answer would have to be "no". In theory there is such a thing as IP over FC, but I've never heard of NetApp supporting it. The FC ports in a NetApp array are storage adapters, not network interfaces.
If you're concerned about bandwidth restrictions on Ethernet you should consider 10 GbE. Interface groups/VIFs with LACP and multiple IP aliases may also help to increase available bandwidth, depending on your requirements.
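As a rough sketch of the interface group suggestion (exact syntax depends on your ONTAP version; the node and port names here are placeholders, not from your environment):

```shell
# Clustered Data ONTAP sketch: create a multimode LACP interface group
# from two 10 GbE ports, distributing traffic by IP address.
# "node1", "a0a", "e0c", and "e0d" are hypothetical names.
network port ifgrp create -node node1 -ifgrp a0a -distr-func ip -mode multimode_lacp
network port ifgrp add-port -node node1 -ifgrp a0a -port e0c
network port ifgrp add-port -node node1 -ifgrp a0a -port e0d
```

Note that LACP must also be configured on the switch side, and any single TCP flow still uses only one member link - the aggregation helps aggregate throughput across many clients, not one stream.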
NAS isn't really slow compared to SAN. If you're using 10G links, there's not much you can't do. Oracle is certified for use on NFS, as is VMware. Do you have a specific app that has issues with NAS connectivity?
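To illustrate the Oracle-on-NFS point, a database volume is typically mounted with options along these lines (the filer name, export path, and mount point are hypothetical - check the support matrix for your exact OS, ONTAP, and Oracle versions before settling on options):

```shell
# Hypothetical NFSv3 mount of an Oracle datafile volume from a NetApp
# filer. hard mounts and actimeo=0 are commonly recommended for
# database files; verify against your platform's documentation.
mount -t nfs -o rw,bg,hard,nointr,rsize=65536,wsize=65536,proto=tcp,vers=3,actimeo=0 \
    filer01:/vol/oradata /u02/oradata
```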