VMware Solutions Discussions

Why would someone recommend dedicated switches for storage network traffic?

sanman2304
9,021 Views

Hi there, I need to understand why someone would recommend having a dedicated switch stack for iSCSI/NFS storage network traffic instead of just creating a layer 2 VLAN on a currently available switch stack to segment the traffic.  The switch stack in question is a Cisco 3750E stack.

The customer this is being recommended to has 5 ESX hosts and a dual-controller FAS2020, so in total there would be 14 network connections for iSCSI/NFS storage traffic divided between the two 24-port Cisco 3750E switches.  As I am new to the storage and virtualization world, what I've read is that you should create a VLAN for storage traffic to segment it from the rest of the network.  But a fellow engineer is dead set on completely separating this onto different physical switches used only for the iSCSI and NFS traffic, which in my mind is a waste of money: out of the combined 48 ports, only 14 would ever be used.
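
From what I've read, the VLAN option would look roughly like this on the 3750E stack (the VLAN ID and port range here are invented just for illustration, not from any real config):

    ! define a dedicated VLAN for iSCSI/NFS traffic
    vlan 100
     name STORAGE
    !
    ! put the 14 storage-facing ports into that VLAN as access ports
    interface range GigabitEthernet1/0/1 - 14
     switchport mode access
     switchport access vlan 100
     spanning-tree portfast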

I would like some direction: if a dedicated physical switch stack is not needed, what can I say to plead my case?

7 REPLIES

txskibum2000
9,022 Views

Maybe he wants to put an IDS on it, or a sniffer to monitor the traffic? But if that were the case he would have gone the Nexus 1000V approach. I don't know.

btbocesnoc
9,022 Views

We've had to use dedicated switches in a few locations because the network admins would reboot the core without making sure the servers using iSCSI connections were down.  Lots of data corruption.

reinoud7
9,022 Views

Hi, we use the same switching infrastructure for all of our 24 NetApp controllers and all our servers: 32 ESX boxes and 50 physical servers.

The main reasons why we combine them are:

  • simplicity
  • price (infrastructure and maintenance)
  • when the client/host network goes down, it's not very helpful to still have your storage network, so that's no reason to set up two environments

The only reason for two separate physical infrastructures:

  • you can do maintenance on the client access layer while the storage layer stays up.

Reinoud

rickymartin
9,022 Views

People (like me) recommend dedicated switches for storage network traffic for the same reason people buy SANs or implement backup networks:

1. Security - there are some well-publicized VLAN hopping and CHAP hacking techniques out there. I know some financial institutions that refuse to implement iSCSI for exactly that reason, though strangely enough they're quite happy with Kerberised NFS. (If you do end up sharing switches, the usual hardening is shown in the sketch after this list.)

2. Performance - many front-end networks are built with oversubscription, which is fine for front-end traffic but not so good for storage networks. Many of the accusations of poor iSCSI performance in the past have been because the design goals that network designers build to are often not entirely suited to the kind of low-latency, highly available network infrastructure required for shared storage.

3. Politics - In some (especially larger) organisations, the network is looked after by the networking group and the storage is looked after by the storage group; they have their own budgets, personalities, change control procedures, etc., and none of them seem to align with each other.

4. Reliability - This is a bit like the performance issue, but it's part of the design goals of SAN designers to build redundancy into the fabric; it's second nature. The reason is that the implications of a server losing connectivity to storage are typically worse than a client losing connectivity to a server. When LUNs/datastores suddenly disappear, filesystems need to run fscks, databases can get "torn pages" and need recovering from tape, meteorites fall from the sky, crazed freedom fighters start shooting from street corners, and infrastructure guys get stern looks from the application and server teams.
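
On the VLAN hopping point above: if you do share switches, the standard hardening is to hard-code port modes so DTP can never negotiate a trunk, and to keep the native VLAN and the storage VLAN off trunks that don't need them. A rough IOS sketch, with made-up VLAN numbers and ports:

    ! storage access ports: fixed access mode, no trunk negotiation
    interface range GigabitEthernet1/0/1 - 14
     switchport mode access
     switchport access vlan 100
     switchport nonegotiate
    !
    ! uplink trunks: move the native VLAN off VLAN 1 and prune the storage VLAN
    interface GigabitEthernet1/0/24
     switchport trunk encapsulation dot1q
     switchport mode trunk
     switchport trunk native vlan 999
     switchport trunk allowed vlan 10,20
     switchport nonegotiate

That doesn't close every hole (and does nothing for the CHAP weaknesses), which is why an air gap is the simpler answer for the really paranoid.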

Of course, with 10GbE and competent designers and limited budgets, sharing everything on one (or better yet two) wires makes a lot of sense.

Regards

John

acistmedical
9,021 Views

According to best practices from all of NetApp, Cisco, and VMware, storage should be on its own network.

I have just implemented this in our office. Separate switches give you the ability to implement some nice features like jumbo frames, flow control, etc., that may not be available if you put storage on an existing data network infrastructure.
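
For reference, on the Catalyst 3750/2960 families those two features look roughly like this (port range is illustrative; note the jumbo MTU is a global setting that only takes effect after a reload, and these switches only honour flow control on receive):

    ! global: enable jumbo frames on the gigabit ports (requires a reload)
    system mtu jumbo 9000
    !
    ! storage-facing ports: accept pause frames from the filer and ESX NICs
    interface range GigabitEthernet1/0/1 - 14
     flowcontrol receive desired

On a shared switch you often can't make that global MTU change without touching every other port on the box, which is part of the argument for dedicated storage switches.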

Also, why do you need the expensive 3750? We used the 2960S: they are stackable, cost much less, and have 10Gb uplink ports, which are connected to our FAS.

sanman2304
9,021 Views

Hi acistmedical, thank you for the information.  I've seen NetApp best practice documentation stating that storage should use a separate network (subnet and VLAN), but I do not remember seeing any written best practice calling for dedicated physical switches.

The Cisco 2960S model only recently came out, I think around April or May.  Some information I've read suggests it will replace the other layer 2 switches (the 2975 and 2350), because I believe those are only available with 48 ports.  That's what led us to the 3750E, which also has a higher level of performance.

Thanks.

thomas_glodde
9,021 Views

Usually it's about backplane bandwidth. Unless you have a couple of those nice core builders with a 2 x 40Gbit+ backplane, you might overflow your poor switches. In a standard server environment you might not have that many servers saturating 1Gbit links, but storage can often easily saturate plenty of 1Gbit links.
