I have ordered the new 6240 with the IO expander and all the bells and whistles. I have tried looking for answers on the community but really can't find any (I'm sure it's out there).
Our VMware environment will be using HP blades with their Flex-10 Virtual Connects. We have allotted 4Gb of bandwidth for the storage network. We are using NFS for our datastores.
So my questions, given that we are currently only using one connection from the blade to the storage (this may change, I'm not sure yet). I also ordered a quad-port GbE NIC for failover.
1. Does it make sense to do a vif with 2x 10G connections? Or should I do 1x 10G with failover to a 1GbE NIC, and then another 1x 10G with failover to another 1GbE NIC? But that means I will have to manage the connections from vCenter to make sure they are spread across the 10G links. I hope I'm making sense.
2. Is this a waste of pipe since only 4G will be coming in/out?
3. Would it make sense to do some aliasing? Does anybody have experience with this in a VMware NFS implementation?
4. How do I guarantee performance and availability?
Or am I overthinking this?
Sorry for all the questions, I am just trying to make sure that I get this designed and implemented for future growth, since we will have this storage for the next 3 years.
Congratulations on your new purchase, I'm envious... a new 6200 isn't in the cards for a while for us.
1. If you were using iSCSI and wanted totally separate physical networks for multipathing, then yes, you could split them into two vifs using 1Gb as failover. For NFS, I personally wouldn't (and we don't). You have to consider what would happen in the event of a failure where some system or set of systems is pointing to a vif with 1x 10G and 1GbE for failover: you'd effectively be reducing your available bandwidth by 90%. There is the possibility that this could cause serious contention on that 1Gb port, possibly to the point of making it almost unusable. If you have a vif with both 10G ports active using IP load balancing, then you effectively get the redundancy of two ports, plus ~20Gbit of available bandwidth, minus overhead and imbalance. Worst case, in the event of a port failure you'd have 50% of the bandwidth (but in reality you're not going to be able to push 10G since you're limiting storage to 4G).
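If it helps, here's roughly what that single multimode vif looks like from the 7-Mode console. Interface names, the vif name, and addresses below are placeholders, so adjust for your own ports and storage subnet:

```shell
# Create a multimode vif over both 10GbE ports, load-balancing by IP
# (on newer Data ONTAP 7.3+ releases the equivalent command is "ifgrp create multi")
vif create multi vif0 -b ip e1a e1b

# Assign the storage-network IP and bring the vif up
ifconfig vif0 192.168.50.10 netmask 255.255.255.0 up
```

Remember to put the same commands in /etc/rc so the config survives a reboot.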
2. Is this a waste of pipe? Is this all that's pointing at the system? Yes and no. It's really a personal call. You have to plan on expanding beyond that 4Gb later, and having more bandwidth available now means less chance of downtime to reconfigure vifs or add cards when you expand.
3. Aliasing... we tried this when we had all 1Gig and no plans for 10Gig. Yes, it does work to get you some additional bandwidth by using additional links (either on separate vifs or on multiple links inside one vif). But we ultimately found it to be more of a headache than it was really worth, with managing the different names, IPs, vifs, etc. Now, with a 2-port 10Gig vif, we have nothing to worry about; we just point everything at the one IP address and away it goes.
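For reference, the aliasing we abandoned was nothing more than extra IPs stacked on the same interface, so different datastores could be mounted via different addresses (the address here is just an example):

```shell
# Add a second IP to the existing vif; ESX hosts can then mount
# some NFS exports via .10 and others via .11 to spread traffic
ifconfig vif0 alias 192.168.50.11 netmask 255.255.255.0
```

It works, but every alias is another address you have to track in /etc/rc, DNS, and your ESX datastore mounts, which is where the headache came from.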
4. How much time, effort, and money do you want to expend, lol? 2x 10G ports in a single vif buys you, at worst (in failover), 10G of bandwidth. Since you're only allocating 4G for your storage traffic, that shouldn't ever be a problem. Now, if you're worried about the switch that this vif connects to failing, then yes, you would need two vifs, each plugged into a separate switch (possibly all the way back to the blades), and with NFS you'd have to use aliasing.
Also, bear in mind that in the event of a total vif failure (for whatever reason) you can always fail over to the other filer.
If I wasn't clear on anything, please let me know.