By Jeremy Goodrum, VP of Engineering, Automation & Modernization, Wirestorm & A-Team Member
As a developer and automation architect, I’ve worked on many exciting automation projects over the last six years. One of my first automation projects required designing a self-service portal that let Oracle administrators and developers provision and manage Oracle Test/Dev environments as a service on NetApp arrays. The result was a self-sustaining portal that ran for nearly four years before anyone had to touch the storage, and even then only because of the migration from 7-Mode to ONTAP 9. Even now, when shelves and controllers are added, the automation instantly recognizes the new space and handles the allocation.
Another exciting point in my career came when I started working with AWS and began to explore ONTAP Cloud. I jumped at the chance to play with a new framework and quickly realized some remarkable benefits of having a single storage OS in the cloud. With ONTAP 9, I have the confidence that what I do for one type of NetApp array will work for all NetApp arrays, and now that confidence extends to other providers, such as AWS and Azure. For the past two years, I’ve developed almost exclusively using ONTAP Cloud in AWS, and now I’m using Azure as well. Every bit of the code I’ve developed with ONTAP Cloud for AWS as the underlying storage just works, much to the satisfaction of my clients.
ONTAP Cloud is a software-defined virtual machine (VM) version of the NetApp ONTAP OS. The VM is deployed in AWS and/or Azure in a pay-as-you-go model. Being able to simply spin up and spin down my environments is a major selling point for me. I can create extremely elaborate scenarios that reproduce client configurations, feature for feature.
Over a year ago, a client requested the ability to do a full disaster failover and recovery for Windows File Shares (CIFS) from the East Coast to the West Coast. I did 100% of the development inside my AWS account, creating many “disaster pairs” across a multitude of configurations. The benefit to me was that I could stand up a dozen of these systems and test many different configurations easily. Once I completed the code, I shared it with the client, and it worked perfectly for every use case. When I finished the project, I simply deleted the ONTAP systems and moved on to my next project.
In the world of DevOps and code-defined architecture, enterprise storage has always been a huge challenge. The ability to use ONTAP Cloud as my development platform and then deliver the code to run on physical systems is a huge benefit for both my clients and me. In the upcoming weeks, I’ll be starting a new series on a project I’m currently working on: a PowerShell-based Desired State Configuration (DSC) engine for declarative provisioning of NetApp storage. I bet you can’t guess what I’m using to do all the development…
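To give a flavor of what declarative provisioning looks like in DSC, here is a minimal sketch. Note that the `NetAppDsc` module and `NetAppVolume` resource names below are hypothetical placeholders for illustration, not the actual module from the upcoming series; only the surrounding `Configuration`/`Node` structure is standard DSC syntax.

```powershell
# Hypothetical example: declare the desired state of a NetApp volume,
# and let the DSC engine converge the system to match it.
Configuration ProvisionAppVolume {
    # 'NetAppDsc' is an assumed module name, used here for illustration only
    Import-DscResource -ModuleName NetAppDsc

    Node 'localhost' {
        # 'NetAppVolume' is a hypothetical resource; its properties describe
        # WHAT should exist, not HOW to create it
        NetAppVolume 'AppData' {
            Ensure    = 'Present'
            Name      = 'app_data'
            Aggregate = 'aggr1'
            Size      = '500GB'
        }
    }
}

# Compile the configuration into a MOF document, then apply it
ProvisionAppVolume -OutputPath .\ProvisionAppVolume
Start-DscConfiguration -Path .\ProvisionAppVolume -Wait -Verbose
```

The appeal of this style is idempotence: running the same configuration twice leaves the system unchanged if the volume already matches the declared state.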
Twitter - @virtpirate
Jeremy Goodrum [aka The Pirate] is an automation junkie with over 15 years of development experience. He specializes in designing X-as-a-Service solutions, including Software, Infrastructure, and Storage, for major enterprise companies. Prior to starting Exosphere® Data, he was a Cloud Automation Architect for NetApp and one of the pioneers of NetApp OnCommand Workflow Automation (WFA). His code has been shared globally and established the standard for WFA automation packages, which he branded as “Pirate Packs.” Jeremy is also excited to represent the NetApp A-Team as a customer and partner.
Jeremy enjoys learning about new tech, writing awesome code, and helping unlock new automation opportunities, and has spent the last year mastering Chef. He specializes in holistic solution designs for public and private clouds.
Jeremy lived for many years on a sailboat in the Bahamas, and with a last name of Good-Rum, is it any wonder he is known as “The Pirate?”