The Filers and BladeCenter are each connected to the 6509 with gigabit uplinks. We run nearly all of our servers under vSphere using an NFS datastore off the Filer. Even Exchange and SQL are virtualized; the disks that house the databases and log files are NFS shares off the Filer. Our Filer also hosts three CIFS shares.
Next year we have an infrastructure upgrade so I am looking at:
- Replacing the 3020 with a FAS3200
- Replacing the Bladecenter with Cisco UCS chassis
A local reseller of Cisco UCS recently held an open house to talk about the product and show it in action. They had their storage (EMC) plugged directly into the UCS fabric interconnect using Fibre Channel, no Fibre Channel switch necessary. Two questions:
1. Can the FAS3200 be plugged directly into the UCS fabric with a 10 Gb connection, not using Fibre Channel, or do the NetApp and the Cisco UCS chassis each get plugged into my 6509 with 10 Gb connections?
2. I like NFS for my vSphere datastore. With a 10 Gb connection for NFS, is there any reason to consider moving to Fibre Channel in the new setup? Could it save on NFS licensing costs?
You have a few options for connectivity once you move into the world of UCS. As of version 1.4 you can configure Fabric Interconnect ports in the traditional roles, server ports into the UCS chassis and uplink ports to a northbound network device or SAN switch, or as direct-attached FC/FCoE/NAS storage ports.
There is a compatibility matrix for supported direct-attach storage vendors, and, as you would expect, NetApp is on it.
In answer to your questions:
1. Yes, it can be. If you only ever need access from servers that reside inside UCS, this is possible with no external devices, just the NetApp and the Fabric Interconnects. However, you would need additional connectivity if you wanted to present, say, CIFS directly out to workstations. There are some limitations to bear in mind: on an appliance port the UCS Fabric Interconnect acts like an end device, so you would not get VLAN tagging or other network-specific functionality (as far as I am aware at this stage).
2. In my opinion NFS still outperforms VMFS/LUNs for general-purpose VMs. Depending on application types, backup requirements, supportability, etc., there will always be times where I would recommend an RDM or a dedicated LUN. FCoE is also an option in this setup and should be considered for things like boot-from-SAN or RDMs, but watch for the usual FCoE pitfalls that come with extending northbound from the Fabric Interconnects (it is a lossless, one-hop protocol at the moment, so you usually need a bridge point, i.e. UCS chassis > Fabric Interconnect > MDS/Brocade/Other > Nexus 5k/Other > NetApp).
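Whichever path the filer takes into the fabric, the NFS datastore itself is mounted the same way on each ESXi host. A minimal sketch from the ESXi 4.x/5.x console; the filer hostname `filer1`, export `/vol/vmds`, and datastore label are placeholders, not names from your environment:

```shell
# Mount the NFS export from the filer as a datastore on this host.
# -o = NFS server, -s = exported path, last argument = datastore label.
esxcfg-nas -a -o filer1 -s /vol/vmds vm_datastore

# List the mounted NFS datastores to confirm it appears.
esxcfg-nas -l
```

The same mount works whether the filer sits on an appliance port behind the Fabric Interconnect or hangs off the 6509; only the IP path changes.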
I hope this has proved useful; if you have any questions, please don't hesitate to ask. One caveat though: it's nearly 4am here and I've been working for nearly 20 hours straight, so I might have missed a few details. I'll take another read in the morning after some sleep and correct anything I got wrong. Hopefully my brain will function correctly and I'll dig out links to some good reading material too.
Thanks for the great detail you put into your post. This is a complicated subject that I am just starting to wade into. As I mentioned in my post, we have had an IBM BladeCenter for nearly 5 years and overall I am happy with the product. I could certainly just purchase new blades with a lot more memory that could easily handle my current load (32 Windows VMs) plus a push into VMware View next summer. That, together with new 10 Gb switches that fit into the BladeCenter, is one possibility.
I want to make sure I do due diligence by looking at all the other offerings. Cisco UCS, with its hardware virtualization, quick and consistent configuration, chassis-level QoS, and unique extended memory architecture, is worth looking at. Of course, it all has to fit with what I am doing now (NFS for VMware and CIFS shares) and where I am heading: replacing my aging 3020s with more modern 3200s, 10 Gb Ethernet, and a Cisco 6509 at my core.
If you have any other suggestions about areas I should investigate or questions I should ask, let me know.
Sorry I haven't replied sooner, been a little busy.
I understand you wanting to do your homework before you jump into any new solution, I would do the same. If you would like we can discuss further what the benefits of UCS are and what you are likely to gain from that over a plain refresh of your existing estate.
If that is of interest to you then all I would need is a little more detail on your existing infrastructure, what your goals are for the coming years, expected growth and what your current usage is like.
We can either continue this here or if you would prefer you can email me directly. Personally I like to keep it open for all but not everyone is that open regarding their infrastructure.
I do not mind discussing my infrastructure, the devil is in the details so here we go.
Currently I have an IBM H-chassis BladeCenter with 4 blades devoted to ESXi. These blades are 4 years old, with older processors and only 16 GB of RAM each. I have three other blades housing Windows servers: two of them used for my student information system and one used by CommVault to index email that I archive. CommVault, which is also my backup software, does not support running their index product under VMware, so it runs on a physical blade. My student information system is a heavy hitter, and while it is supported under VMware, I am running low on memory in my ESX environment, so I have not pulled those two servers in. Once I upgrade, those two servers will be virtualized, leaving me nearly 100% virtualized in the data center.
My storage is a NetApp FAS3020 using an NFS share to house all 30 Windows VMs. Most of the Windows VMs run Server 2008 R2; by the end of the summer all of them will. This NetApp filer also houses three CIFS shares.
I have a Cisco 6509 at the core of my network with gigabit connections to my edge switches (Cisco 3560s) delivering 100 Mbps to the desktop. In the back of my BladeCenter I have 6 older Cisco switches providing 6 gigabit connections to my core per vSphere best practice (2 NICs for the VM network, 2 NICs for the storage network, 2 NICs for vMotion/service console). I have 4 gigabit connections from my main data closet to my DR site so I can do backup/replication. My NetApp filer has gigabit connections to my core.
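For reference, that 2+2+2 NIC layout maps onto three vSwitches on each ESXi host. A rough sketch with the classic esxcfg commands; the vSwitch names, vmnic numbers, and VMkernel IP below are illustrative assumptions, not the actual host config:

```shell
# VM traffic: two uplinks on the default vSwitch0.
esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -L vmnic1 vSwitch0

# Storage (NFS) network: dedicated vSwitch with its own VMkernel port.
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -A "NFS" vSwitch1
esxcfg-vswitch -L vmnic2 vSwitch1
esxcfg-vswitch -L vmnic3 vSwitch1
esxcfg-vmknic -a -i 10.0.10.11 -n 255.255.255.0 "NFS"

# vMotion / service console: the third NIC pair.
esxcfg-vswitch -a vSwitch2
esxcfg-vswitch -L vmnic4 vSwitch2
esxcfg-vswitch -L vmnic5 vSwitch2
```

With 10 Gb uplinks the same separation is usually done with fewer physical NICs and VLAN-tagged port groups instead of dedicated switch pairs.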
Basically, the servers, storage, and networking equipment were all put in at the same time 4 years ago. Next summer I can replace it all, and I am starting to work my way through the various choices. Of the three pieces above, the only one I have decided on is the NetApp. We are very happy with the product and will just be pulling out our controllers and putting in new ones, probably FAS31xx or FAS32xx series. For the servers and networking I am open to looking at different solutions. On the networking side, I will be replacing all of my edge switches to get gigabit to the desktop and deploying 10 Gb connections in three spots:
- Between storage and core
- Between servers and core
- Between main site and DR site
If all I needed to do was house 30-40 VMs, I could purchase new blades for my BladeCenter with more memory, upgrade my NetApp, upgrade my edge switches, and put 10 Gb in some key spots. But I will be making a big push into virtualized desktops using VMware View, which needs lots of resources. For this reason, I am looking not only at getting new blades for my BladeCenter but questioning whether I should replace the BladeCenter with Cisco UCS entirely.