Automating Infrastructure as a Service (IaaS) for Readily Available On-Premises Solutions

Six Technology Components Essential for IaaS Automation

By Matt Brown, Senior Program Manager, NetApp on NetApp


This is the fourth and final blog in a series on automation. Here I focus on the technology we used to automate the delivery of our on-premises IT compute environments so that they rival the readily available services in the market.


Read the first three blogs in this series:

  1. Introduction to Automating IaaS for Readily Available, On-Premises Solutions
  2. The Rewards of Engaging your People in the Automation Process
  3. Building Flexibility into the IaaS Delivery Process

With the availability and proliferation of external IT cloud services, the expectations on corporate IT are now higher than ever before. For IT to remain relevant at NetApp, we needed to deliver a catalog of standard, automated IT services at a cost, quality, and speed comparable to the best external cloud providers.


To address this, NetApp IT streamlined and automated its delivery processes to provision virtual and physical holistic compute environments (HCEs) for our business units in a few hours. Each HCE includes the operating system, access controls, server monitoring, and application storage space for the required platform and business application. These combined elements needed to be delivered in a predictable, repeatable way.


With automation, we can coordinate processes across different IT functions and make the initial delivery and ongoing support of the HCEs easier to manage. Six technology components enabled us to automate the delivery of our on-premises environments.


1. User Interface. Our service catalog is built to leverage IT service management capabilities from an existing Software-as-a-Service solution. Since this solution was already in use internally by IT operations, the Help Desk, and our business users, everyone was familiar and comfortable with it. The catalog lets IT service-line owners build virtual systems by specifying location, usage, and application platform. It then auto-generates host names and volumes and updates our Configuration Management Database.
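The auto-generation step works from a fixed naming standard. As a minimal sketch, assuming a hypothetical convention (location code, usage code, platform code, serial number; the codes and CMDB step shown here are illustrative, not NetApp IT's actual standards), the catalog logic might look like this:

```python
# Illustrative sketch of catalog-driven name generation.
# The naming convention and field codes are invented for this example.
from itertools import count

_serial = count(1)  # monotonically increasing serial for uniqueness

def generate_host_name(location: str, usage: str, platform: str) -> str:
    """Build a host name from the three catalog inputs plus a serial number."""
    usage_codes = {"production": "p", "development": "d", "test": "t"}
    return f"{location.lower()}{usage_codes[usage]}{platform.lower()}{next(_serial):03d}"

def generate_volume_name(host_name: str, purpose: str) -> str:
    """Derive a storage volume name from the host it serves."""
    return f"{host_name}_{purpose}_vol"

host = generate_host_name("rtp", "production", "lnx")
vol = generate_volume_name(host, "data")
```

Because the names are derived rather than hand-typed, the same inputs can also drive the Configuration Management Database update, keeping the CMDB consistent with what was actually provisioned.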


2. Compute. As mentioned in my previous blog, we identified products that would meet 80 percent of our business needs. We selected Windows and Linux operating systems because of their enterprise support capability and commercial availability. We deployed the NetApp FlexPod solution portfolio in the data center because of its modular, scalable architecture. The portfolio comprises computing hardware, virtualization support, switching fabric, and NetApp storage systems software.


3. Network. Our core network infrastructure is standardized on 10Gb switches throughout the data centers and is based on a provisioning-on-demand (POD) concept. The POD design uses a modular, scalable layout for servers and storage in an edge-core-edge network topology we termed micro-zones. For each micro-zone, we systematically created detailed technical designs and standards for every aspect of the IT equipment, from physical provisioning to the logical deployment of virtual machines.


4. Storage. NetApp clustered Data ONTAP allows us to virtualize very large pools of storage into a logical container and move data non-disruptively, without any impact to applications. Our modular design also allows us to build storage nodes with different performance or capacity service levels. Since we have the flexibility to move data anywhere in the logical cluster, we can thin-provision all storage volumes and run our storage systems at a higher utilization rate with less risk from unexpected performance or capacity growth. By leveraging our standard designs, we created serialized processes and naming standards to programmatically automate the provisioning of storage. Combined with Cisco Unified Computing System (UCS) architecture, we can now provision both virtual and physical machines automatically.
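To make the serialized-process idea concrete, here is a hedged sketch of what a programmatic, thin-provisioned volume request could look like. The service levels, aggregate names, and request fields are assumptions for illustration; a real deployment would issue the equivalent call through the ONTAP or Workflow Automation APIs:

```python
# Illustrative sketch of serialized, thin-provisioned volume creation.
# Service levels, aggregate names, and field names are invented for
# this example, not taken from NetApp IT's actual standards.
def build_volume_request(app: str, service_level: str, size_gb: int) -> dict:
    """Serialize a provisioning request following a fixed naming standard."""
    aggregates = {"gold": "aggr_ssd_01", "bronze": "aggr_sata_01"}
    return {
        "name": f"{app}_{service_level}_vol",   # naming standard: app + tier
        "aggregate": aggregates[service_level], # node chosen by service level
        "size": f"{size_gb}g",
        "space_guarantee": "none",  # thin provisioning: no upfront reservation
    }

request = build_volume_request("erp", "gold", 500)
```

Setting the space guarantee to "none" is what makes the volume thin-provisioned: capacity is consumed only as data is written, which is what lets the cluster run at higher utilization.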


5. Workflow. We scripted the routine tasks in the HCE delivery process using orchestration tools already in-house and familiar to the staff. NetApp OnCommand Workflow Automation (WFA) handles storage provisioning, and virtualization software provisions the virtual environments. This standardization, virtualization, and automation reduced our HCE delivery times from five days to less than one day.
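The orchestration pattern itself is simple: run each routine task in a fixed order and stop at the first failure so it can be reworked. A minimal sketch, with illustrative task names standing in for the real WFA and virtualization calls:

```python
# Minimal orchestration sketch: run HCE delivery tasks in sequence,
# halting on the first failure. Task names are illustrative placeholders
# for the real storage and virtualization provisioning calls.
from typing import Callable

def run_workflow(tasks: list[tuple[str, Callable[[], bool]]]) -> list[str]:
    """Execute tasks in order; return the names of those that completed."""
    completed = []
    for name, task in tasks:
        if not task():
            break  # stop the delivery so the failed step can be reworked
        completed.append(name)
    return completed

steps = [
    ("provision_storage", lambda: True),      # e.g. a WFA workflow
    ("deploy_vm", lambda: True),              # e.g. a virtualization API call
    ("configure_monitoring", lambda: True),
]
done = run_workflow(steps)
```

Chaining the steps this way is what lets one catalog request drive work across multiple IT functions without manual hand-offs between teams.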

6. User Access. To automate the management of our infrastructure, including servers and network devices, we integrated user access with our existing identity-management solutions. This allows us to assign new servers to a server group and grant access at the group level. The approach ensures security and compliance while giving administrators and users the appropriate access to manage physical-to-virtual relationships and their compliance statuses.
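The key design choice here is that access attaches to the group, not to individual hosts, so a newly provisioned server inherits the right permissions the moment it joins its group. A sketch of that model, with invented group and role names (the real implementation lives in the identity-management system, not application code):

```python
# Hedged sketch of group-based access: servers join groups, and access
# rights attach to groups rather than to individual hosts.
# Group and role names below are invented for illustration.
class AccessDirectory:
    def __init__(self) -> None:
        self.groups: dict[str, set[str]] = {}  # group -> member servers
        self.grants: dict[str, set[str]] = {}  # group -> granted roles

    def add_server(self, group: str, server: str) -> None:
        """Joining a group is all a new server needs to inherit access rules."""
        self.groups.setdefault(group, set()).add(server)

    def grant(self, group: str, role: str) -> None:
        """Grant a role access to every server in the group, present and future."""
        self.grants.setdefault(group, set()).add(role)

    def can_access(self, role: str, server: str) -> bool:
        """A role reaches a server via any granted group containing it."""
        return any(
            server in self.groups.get(group, set())
            for group, roles in self.grants.items()
            if role in roles
        )

directory = AccessDirectory()
directory.add_server("linux-prod", "web01")
directory.grant("linux-prod", "linux-admins")
```

Because grants live at the group level, deprovisioning a server or revoking a team's access is a single membership change rather than a per-host cleanup, which is what keeps the model auditable for compliance.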


Our automation efforts have reduced the time it takes to provision new compute environments from weeks to just a few hours. The impact on the business is tremendous: NetApp IT now provides infrastructure services on demand through a catalog of standard, automated IT services at a cost, quality, and speed comparable to the external cloud alternatives.




The NetApp on NetApp blog series features advice from subject-matter experts in NetApp IT who share their real-world experiences using NetApp's industry-leading storage solutions to support business goals. Want to learn more about the program? Visit