We have a NetApp FAS2020 as our SAN.
We are in the process of deploying our 3-node Hyper-V cluster, and we've been umming and ahhing over what to do regarding the physical switches. At present we have 3x 48-port gigabit switches stacked together (using the backplane, not RJ45 interconnects) for our entire LAN (servers, PCs, printers etc). We also have 2 dedicated 24-port gigabit switches solely for SAN traffic (Hyper-V nodes to SAN) to keep that separate. Each node has 2x dedicated gigabit NICs for this purpose.
However, we are not sure how to deploy the 2 dedicated SAN switches: do we keep them entirely separate from the other 3 switches and not stack them, or do we add them to the stack? The former has the disadvantage that the SAN won't be manageable from client PCs (only from the Hyper-V nodes) and won't be able to reach the internet (for AutoSupport functions). The latter gives us those benefits but may introduce performance problems, as SAN traffic could bleed onto the other switches.
Are there any recommendations for this sort of deployment headache?
Which protocol are you going to use, iSCSI or NFS?
Can you stack these 2 24-port switches? What model are they?
Take a look at the Cisco 2960S 24-port stackable. I just got 2 for exactly the same setup, except for vSphere. They support cross-switch EtherChannel and have an out-of-band management port, which will let you plug them into your LAN stack for management only.
These will be solely for iSCSI traffic.
They are Netgear GS724TS switches. Yes, they can be stacked, but is there any benefit in doing so compared to keeping them completely separate, with regards to MPIO?
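For MPIO purposes it works either way: the usual layout is one initiator NIC per SAN switch, each on its own subnet, so either path survives a switch failure, stacked or not. As a rough sketch of the Windows side only (assuming Windows Server 2012 or later for the iSCSI cmdlets; the IP addresses are made up for illustration, with 10.0.1.x and 10.0.2.x as the two SAN subnets and the filer exposing an interface on each):

```powershell
# Install the MPIO feature, then claim iSCSI-attached disks for MPIO
# (the device string is Microsoft's well-known iSCSI bus type; reboot required)
Install-WindowsFeature Multipath-IO
mpclaim.exe -r -i -d "MSFT2005iSCSIBusType_0x9"

# One target portal per SAN subnet, each bound to the matching local NIC
# (addresses are hypothetical examples, not from the original post)
New-IscsiTargetPortal -TargetPortalAddress 10.0.1.10 -InitiatorPortalAddress 10.0.1.21
New-IscsiTargetPortal -TargetPortalAddress 10.0.2.10 -InitiatorPortalAddress 10.0.2.21

# Log in to the discovered target(s) with multipathing enabled and persistent
# across reboots, giving two independent sessions, one per switch
Get-IscsiTarget | Connect-IscsiTarget -IsMultipathEnabled $true -IsPersistent $true
```

The point is that MPIO runs over independent sessions and subnets, so it doesn't need the switches to know about each other; stacking the GS724TS pair mainly buys you single-point management, not extra path resilience.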