VMware Solutions Discussions

The lay of the land in storage servers

KEVINBUCHS

I am trying to get up to speed on storage technologies and network infrastructure trends. I'm interested in hearing thoughts on the popularity of FC, FCoE, and iSCSI. What about RDMA, especially iSER? Is there movement by NetApp or other vendors toward particular favorites? Is there an acknowledged performance winner? What is under development, and what is ready to buy today?

The context for my questions: I am a lowly EE tasked with researching critical metrics for a high-performance, next-generation data center. I have been looking into these topics over the last month and a half, but I have lots more to learn.

4 REPLIES

KEVINBUCHS

I see that over 50 people have viewed this message, but nobody has replied! OK, I'll break the ice.

A representative of Data Link, a NetApp distributor, suggested that iSCSI is well established in the market, while FCoE is a newcomer. Agree or disagree?

What are you using for remote data access (one data center to another)?

stevebarsten

I would agree that iSCSI is pretty ideal for most SAN solutions these days. With the cost of 10GbE becoming more affordable, the performance per dollar is definitely there. In a virtualized environment, it also requires less management from a networking perspective than FC. Your best bet is to find people who have already done an implementation similar to yours and learn from their mistakes. Here is a great slideshow that will help you out a lot, even if it is a year old now: http://www.slideshare.net/sfoskett/fcoe-vs-iscsi-making-the-choice-from-interop-las-vegas-2011
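
One thing worth adding: an iSCSI target portal is just an ordinary TCP endpoint (port 3260 by default), which is a big part of why it needs so little special networking compared to an FC fabric. Here is a minimal sketch of what I mean, checking whether a portal answers on plain TCP; the portal addresses are made-up placeholders, so substitute your own:

```python
import socket

# Hypothetical iSCSI target portals (IP, port) -- replace with your own.
PORTALS = [("192.168.1.50", 3260), ("192.168.1.51", 3260)]

def portal_reachable(host, port, timeout=3.0):
    """Return True if a plain TCP connection to the portal succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

for host, port in PORTALS:
    state = "reachable" if portal_reachable(host, port) else "unreachable"
    print(f"{host}:{port} is {state}")
```

Nothing here is iSCSI-specific beyond the well-known port number, which is exactly the point: the transport is the same Ethernet/IP stack you already manage, and any standard networking tool can troubleshoot it.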

michaeldparker

We have been predominantly 2 to 4Gb FC for a number of years. As Steve said, 10GbE has become more affordable, and we were finally able to jump onto the 10GbE train this year. I rapidly converted our ESX servers to NFS and iSCSI for the RDMs that we needed to keep. So far performance has been excellent and I've been very pleased with the change; I have plans to begin moving our physical servers to iSCSI as well. The renewal for our FC directors came up this year and was very costly, so I bought some FC switches to replace the directors and still saved money. The switches don't have enough port count, but with the initiative to move over to iSCSI and NFS, that should not be an issue in the near future.

From what I've been reading, performance on 10GbE iSCSI is excellent, and as long as we are able to obtain the same results, we'll be moved off FC in a few years. As far as FCoE is concerned, I have not really understood why I'd want to go FCoE; it seems more complex, and the infrastructure requirements were more demanding the last time I read up on it.
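
To put rough numbers on the FC-to-10GbE jump: 4Gb FC runs at 4.25 Gbaud with 8b/10b encoding, so a single link tops out around 425 MB/s of payload, while 10GbE runs at 10.3125 Gbaud with 64b/66b encoding for about 1,250 MB/s before TCP/IP and iSCSI overhead. Here is a quick back-of-the-envelope sketch; the 10% protocol-overhead figure is just my own assumption, not a measurement:

```python
# Back-of-the-envelope per-link throughput comparison.
# Encoding figures (8b/10b for 4Gb FC, 64b/66b for 10GbE) are standard;
# the TCP/IP + iSCSI overhead factor is an assumed ballpark, not a benchmark.

def usable_mb_per_s(line_rate_gbaud, encoding_efficiency, protocol_efficiency=1.0):
    """Usable payload throughput in MB/s for one link."""
    bits_per_s = line_rate_gbaud * 1e9 * encoding_efficiency * protocol_efficiency
    return bits_per_s / 8 / 1e6

fc_4g = usable_mb_per_s(4.25, 8 / 10)               # 4Gb FC
iscsi_10g = usable_mb_per_s(10.3125, 64 / 66, 0.9)  # 10GbE iSCSI, ~10% assumed overhead

print(f"4Gb FC      : ~{fc_4g:.0f} MB/s usable")
print(f"10GbE iSCSI : ~{iscsi_10g:.0f} MB/s usable")
```

Even with a generous allowance for protocol overhead, a single 10GbE port has well over twice the headroom of a 4Gb FC link, which lines up with the performance we've been seeing.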
