The Programmable Data Center

By Juan Orlandini, Principal Architect, Datalink

 

I recently had a conversation with a developer friend. He's been in the industry for a long time but has never held an IT position. To him, IT was always "just a provider of resources" and sometimes an "impediment." We got to talking about how things are changing in the IT world and how that might change the way he leverages the resources available to him. Despite being a very savvy technical person, he surprised me by not really getting the "cloud transformation" we are in the midst of. We chatted a while longer before an analogy came to mind that helped him get it.

 

My friend started development in the late '70s and was taught by teachers and professors who grew up in the '60s. The landscape of computing resources and programmer expectations was much different then. World-class developers were expected to understand not just the language they were developing in, but also the intricate details of the hardware they ran on. That was the only way programs could be developed efficiently. Then things changed. Computing, memory, and storage resources became cheaper and cheaper. Storage and memory capacities grew from kilobytes to megabytes to gigabytes and beyond. In turn, compute resources began to be measured in kiloflops, gigaflops, and teraflops. All but the most demanding applications went from running in constrained environments to rarely exhausting the resources available to them. The development world saw this and began focusing on programmer efficiency rather than an intense focus on resource efficiency. Highly sophisticated development languages, environments, and frameworks emerged that give today's programmers the ability to build world-class applications in a fraction of the time it would have taken a couple of decades ago. That road was (and, to be fair, still is) rocky. Battles were fought over languages, developer methodologies, platforms, and many other aspects. Regardless, few can argue that today's development isn't significantly more programmer friendly and productive than it was "back then."

 

IT is now in the midst of the same change. Until very recently, we managed our IT systems as if we were programming them in assembly language. IT administrators were expected to know the intricacies of all of their components in mind-numbing detail so that everything could be used at the highest efficiency. Well-run shops knew all the details of their servers, networks, storage, and the applications running on them. You could ask the storage guys and they would know exactly which track or tracks of which disks held which data. The network folks knew all of the traffic, its resource usage, and its effect on the rest of the environment. The server guys understood all of the arcana of their operating systems and hardware and could tune each to amazing efficiency. But all of that is really hard. It takes a very skilled person years to master these things, and they change all the time.

 

However, things are getting better. Each of the components of IT's environment (server, storage, and network) is going through a transformation. Rather than siloed, individual stacks of resources, each is becoming a malleable virtualized resource that is pooled into "virtual data centers." Even more interestingly, these resources are being integrated by vendors into cohesive, pre-defined resource pools that can be programmatically allocated and de-allocated. The path to the virtualized cloud is being shaped by management frameworks that take the drudgery, the error, and to a large extent the human factor out of the equation. In essence, we are creating a programming language of IT. As in programming today, there is still a call for specialists who know all of the details, but by and large much of that is being abstracted and made highly efficient. We aren't quite to the point where we can define all of the required elements in a single language (what we are calling the "orchestration layer"), but we are getting closer every day.
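
To make "programmatically allocated and de-allocated" concrete, here is a minimal sketch in Python of what asking a pooled virtual data center for capacity might look like. Everything in it (the ResourcePool class, its method names, the units) is hypothetical and invented for illustration; no specific vendor's API works exactly this way, but the shape of the interaction is representative.

    # A toy model of a virtual data center resource pool.
    # All names and units here are hypothetical, for illustration only.
    class ResourcePool:
        def __init__(self, cpu_ghz, ram_gb, storage_tb):
            self.free = {"cpu_ghz": cpu_ghz, "ram_gb": ram_gb, "storage_tb": storage_tb}
            self.allocations = {}

        def allocate(self, name, **request):
            # Reserve capacity for a workload if the pool can satisfy it.
            if any(self.free[k] < v for k, v in request.items()):
                raise RuntimeError("insufficient capacity for " + name)
            for k, v in request.items():
                self.free[k] -= v
            self.allocations[name] = request

        def deallocate(self, name):
            # Return the workload's capacity to the shared pool.
            for k, v in self.allocations.pop(name).items():
                self.free[k] += v

    # Ask for capacity in higher order terms; the pool does the bookkeeping.
    pool = ResourcePool(cpu_ghz=64, ram_gb=512, storage_tb=100)
    pool.allocate("web-tier", cpu_ghz=8, ram_gb=32, storage_tb=2)
    pool.deallocate("web-tier")

Notice that the administrator never says which disk, switch port, or CPU socket delivers the capacity; that is exactly the detail the orchestration layer is meant to abstract away.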

 

With that analogy, my friend finally got it. The value really lies in the efficiencies we can extract from higher order semantics. We don't have to sweat all of the details because the systems take care of them. This is all being enabled by a few key technologies: server virtualization, storage virtualization, network virtualization, common APIs, and the orchestration tool sets that integrate them all. Organizations are being given the choice of building their own clouds (their own apps) or leveraging public clouds (off-the-shelf apps). The choice of which to use is similar to the software world: few would develop their own word processor today, but many still build or customize their own CRM systems. These choices will mature and crystallize over the next few years.

 

At Datalink, we are focused on helping customers through this transformation. Come visit us at booth 2229 at VMworld, or read more about this on our blog at http://blog.datalink.com.

 

 

Juan Orlandini is a Principal Architect at Datalink. He’s been in the IT industry for 25+ years and is responsible for working with customers on new data center architectures. He blogs about a number of topics at blog.datalink.com.