
Automating Infrastructure as a Service (IaaS) for Readily Available, On-Premises Solutions

Building Flexibility into the Delivery Process

 

By Matt Brown, Senior Program Manager, NetApp on NetApp

 

This is the third blog in a four-part series on automation. Here I focus on the steps we took to build an adaptive, flexible delivery process that provides on-premises IT compute environments on par with readily available services in the marketplace. The fourth and final blog will focus on the technology used.

 

 

Unpredictable chaos might be suitable for a reality television show, but not for an IT organization trying to provide predictable IT service delivery, especially when those services are intended to be on par with readily available cloud services. As discussed in my first blog, NetApp IT streamlined and automated processes to accelerate the delivery of IT services through a self-service catalog. Today we can provision virtual and physical holistic compute environments (HCEs) for our business units in less than four hours, and automation was a key factor in achieving this. Getting to this proactive, almost predictive, state was a journey, not something we achieved overnight.

 

Our challenge was determining how to create a process flexible enough to respond to the ever-changing demands from the business for multiple technologies and configurations. We had a documented workflow process that had been in place for years. Yet upon closer inspection, we found that the process had become fragmented.

 

Providing On-Premises IaaS Options

From the outset we recognized that we could not continue delivering the medley of compute environments being requested by the business without eventually imploding under our own weight. Equally unacceptable was a rigid, bureaucratic approach that handed down 'thou shalt' proclamations. We needed a flexible process that provided cloud-like delivery times while easily adapting to industry trends, technology changes, and business needs.

 

Similar to cloud adoption considerations, our HCEs must support a sound security and risk framework to minimize risk and ensure that IT solutions adhere to enterprise data governance, compliance, and privacy policies. Additionally, the platforms must enable IT to effectively manage performance and technology life cycles. It is also important that the technology offerings remain relevant and current.

 

Remaining Adaptive and Relevant

To ensure that the process adapts and evolves with new technologies and trends, we use a new technology introduction (NTI) process to continuously update the offerings in our catalogs. The NTI recommendations are managed by a small team that evaluates new technology in our labs. Findings from the labs are combined with industry trends, vendor directions, and technology life cycles to determine the best options for current and near-future business needs. This proactive approach keeps us ahead of the curve, because we are evaluating products now for anticipated future business needs.

 

Selecting a Consulting Approach

In contrast to our cloud self-service offering for non-critical business applications, our on-premises solutions run critical business applications and are supported by a professional services engagement model.

 

As part of the engagement, our IT domain architects (DAs) consult with the business units on their requirements. In this way, they can fully understand which business issues are to be solved and why. From there, the DAs can define the technical specifications and deploy the correct solutions using our automated provisioning process.

 

Getting Started

We started by identifying the products that would meet 80% of our business needs, while acknowledging that 20% of requests would remain customized environments. For our catalogs, we selected Windows and Linux operating systems, in various flavors of each, based on their status as readily available commodities and their ability to support an enterprise.
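
To make the 80/20 split concrete, here is a minimal sketch of how a catalog of standard offerings could be represented. The item names, fields, and the standard_items() helper are illustrative assumptions, not NetApp IT's actual catalog schema.

    from dataclasses import dataclass

    @dataclass
    class CatalogItem:
        name: str        # display name shown in the self-service catalog
        os_family: str   # "windows" or "linux"
        flavor: str      # specific edition or distribution
        automated: bool  # True for the ~80% standard builds

    CATALOG = [
        CatalogItem("Windows Server - standard", "windows", "Windows Server", True),
        CatalogItem("Enterprise Linux - standard", "linux", "Enterprise Linux", True),
        CatalogItem("Custom environment", "custom", "per request", False),  # the ~20% exceptions
    ]

    def standard_items():
        """Return the offerings that follow the automated build path."""
        return [item for item in CATALOG if item.automated]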

 

To build predictability into the process, we conducted a step-by-step analysis of the manual tasks and determined how each applied to the operating system (OS). We discovered that the steps were repeatable, but the time it took to deliver one OS versus another was not predictable. Automating the tasks would adjust for these time differences, remove human error, and eliminate the downtime between handoffs from one person to another, especially when individual priorities conflicted.
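
As a rough illustration of that point, automating the steps amounts to running them back to back and timing each one, so the delivery time per OS becomes measurable and predictable. The step names and the run_build() runner below are assumptions for the sketch, not our actual orchestration tooling.

    import time

    def install_os(os_name): ...             # placeholder for the OS build step
    def apply_baseline_config(os_name): ...  # placeholder for hardening and configuration
    def register_monitoring(os_name): ...    # placeholder for monitoring enrollment

    def run_build(os_name, steps):
        """Run every step back to back: no queues, no handoffs, no idle time."""
        timings = {}
        for step in steps:
            start = time.monotonic()
            step(os_name)
            timings[step.__name__] = time.monotonic() - start
        return timings  # per-step durations show where one OS differs from another

    # Example: run_build("linux", [install_os, apply_baseline_config, register_monitoring])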

 

We then compiled a list of the components necessary to deliver a holistic compute environment. The OS images were our first point of focus, as they were the largest component to be automated. Once the OS was addressed, we reviewed the other elements that needed to be provided, such as user access, server monitoring, and application storage space.
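
A minimal sketch of that composition follows, assuming a simple ordered list: the component names come from the list above, but the structure and the provision_component() stub are illustrative, not the actual workflow definition.

    # The OS image is provisioned first (the largest piece), followed by the
    # remaining elements that make the environment holistic.
    HCE_COMPONENTS = [
        "os_image",
        "user_access",
        "server_monitoring",
        "application_storage",
    ]

    def provision_component(component, request):
        ...  # placeholder for the per-component automation

    def provision_hce(request):
        """Walk the component list in order so no element is left out."""
        for component in HCE_COMPONENTS:
            provision_component(component, request)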

 

Highlighting our Results

Previously, our ability to build and deliver on-premises solutions was limited to 75 builds per month, with a 15-day service delivery window to the business. Today we provide unlimited automated builds in less than four hours; our only limitation is available inventory.
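
For a back-of-the-envelope sense of the improvement, treating the 15-day window as calendar days (an assumption), the delivery window shrinks by roughly a factor of 90:

    old_window_hours = 15 * 24   # former 15-day service delivery window
    new_window_hours = 4         # upper bound for an automated build today
    print(old_window_hours / new_window_hours)  # 90.0 -> about 90x shorter per environment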

 

Initially we viewed automation as a way to improve our efficiency. However, the result has been a predictable delivery commitment that puts our on-premises solutions on par with readily available cloud services. Today, we maximize our delivery capabilities while providing both predictable delivery times and costs to the business.

 

 

 

 

The NetApp-on-NetApp blog series features advice from subject matter experts at NetApp IT who share their real-world experiences using NetApp’s industry-leading storage solutions to support business goals. Want to learn more about the program? Visit www.NetAppIT.com.

 
