How Infrastructure as Code is used to automate operations in software delivery, allowing a shift from ITSM to DevOps.
In my previous article, we looked at several examples of technologies that have become software-defined, and determined that adoption demands a significant shift in how a business organizes its value stream. We examined some of the key concepts underpinning software-defined technologies and how they reshape the enterprise. By applying those concepts we can anticipate changes and better position the business to react when software-defined technologies emerge.
In this two-part article we will look at Infrastructure as Code (IaC), an emerging set of automation tools and practices that enables infrastructure management through a software-defined layer.
This week we will examine the role of IaC in the software delivery lifecycle and look at how it enables new processes that support a shift from classic ITSM to DevOps strategies. Next week I will apply the core concepts of software-defined technology to understand the impact of IaC on the business.
Applications
All applications run inside what we call an “environment” – a stack of hardware and software components built to support the application. This stack includes networking, storage, virtual machines, operating systems, databases, libraries, dependencies, and the application itself. Building an environment means bringing up that stack layer by layer, provisioning and configuring each component according to the requirements of the application.
All of this is done to serve the application, which is like a badly spoiled child – always demanding that things be “just so” and throwing tantrums at even the most insignificant departure from expectations.
The processes used to get an environment ‘just right’ (and keep it that way) have been the subject of much analysis and design over the years, becoming part of the body of work known as IT Service Management (ITSM). Recently, however, we have seen the development of a new set of tools and practices for creating and managing environments, known as Infrastructure as Code (IaC).
Infrastructure as Code
Infrastructure as Code, also known as Programmable Infrastructure, involves the use of code and automation tools to perform the activities needed for building an environment. It replaces many of the processes involved in the deployment and ongoing management of the complete hardware-software environment in which an application will run.
While IT professionals have always used some automation, such as scripting, to help deploy environments, Infrastructure as Code is a recent development characterized by the use of the following:
- Code – At the core of IaC is the code: definition files that declare the specification for each component of the environment and how it is configured. These files might be written in YAML or JSON, and will be checked into a version control system like Git.
- Automation tooling – Specialized tools read the definition files and use them to construct the environment and configure components according to specification.
- Application Programming Interfaces (APIs) – Automation tools carry out the actions described in the definition files by calling APIs. They use APIs not only to provision and configure the components of the environment being managed; each tool is also programmable through an API of its own.
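To make the idea of a definition file concrete, here is a minimal sketch of what one might look like, written as an Ansible-style playbook in YAML. The “web” host group and the nginx package are illustrative placeholders standing in for whatever the application actually requires.

```yaml
# A minimal Ansible-style playbook (illustrative only).
# The "web" host group and the nginx package/service are placeholders.
- name: Configure web servers
  hosts: web
  become: true
  tasks:
    - name: Install the web server package
      apt:
        name: nginx
        state: present
    - name: Ensure the web server is running and starts on boot
      service:
        name: nginx
        state: started
        enabled: true
```

Checked into version control, a file like this can be reviewed, diffed, and rolled back just like application code.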
The development of powerful automation tools, along with the widespread proliferation of APIs, has allowed Infrastructure as Code to emerge as a very effective means of turning IT Service Management strategies into DevOps processes.
Rather than working with GUIs, scripts, and command-line interfaces to perform actions, we are able to work with documents (code) that exhaustively describe the environment. These are easily shared, reviewed, and versioned.
With IaC, the actions at each step are executed by the tooling rather than performed by hand, and are therefore much less prone to human error. Let’s look more closely at what those steps are to understand how IaC is actually used.
Putting it to work
While setting up an environment involves many different components and services, we can group the work into three distinct steps:
- Provisioning – The first step is to provision the foundational infrastructure: servers, networks, databases, storage. Provisioning tools perform this task and are usually supplied by the infrastructure vendor. For example, Amazon provides CloudFormation to create VPCs (networks) and spin up EC2 instances (servers), and Azure gives us Resource Manager to create Network Security Groups and bring up Virtual Machines. There are also vendor-agnostic provisioning tools like Terraform, which make switching between infrastructure vendors easier. (A minimal provisioning sketch appears after this list.)
- Configuration – The second step is to configure the provisioned components, a task handled by Configuration Management tools. This is a broader set of tools used to perform operations like transferring files, installing services, and configuring settings. There are many tools in this space, but the “Big Three” are Puppet, Chef, and Ansible. Each has its own advantages and disadvantages, but they all accomplish the same goal: configuring the components with the required dependencies and settings (the playbook sketch shown earlier is this kind of definition file).
- Deployment – The third step is to deploy the application. Increasingly this involves container technologies like Docker. Containers are a recent development that deserves its own article; for now, suffice it to say that a container allows an application and its dependencies to be wrapped up into a package that is easy to deploy into its own isolated space on a machine. Containers provide an additional layer of abstraction above the provisioning and configuration.
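As an illustration of the provisioning step, here is a minimal CloudFormation-style template that declares a VPC, a subnet, and a single application server. It is a sketch only: the AMI ID is a placeholder, and a real template would add routing, security groups, and outputs.

```yaml
# Minimal CloudFormation-style provisioning sketch (illustrative only).
AWSTemplateFormatVersion: '2010-09-09'
Description: Declare a VPC, a subnet, and a single application server.
Resources:
  AppVpc:
    Type: AWS::EC2::VPC
    Properties:
      CidrBlock: 10.0.0.0/16
  AppSubnet:
    Type: AWS::EC2::Subnet
    Properties:
      VpcId: !Ref AppVpc
      CidrBlock: 10.0.1.0/24
  AppServer:
    Type: AWS::EC2::Instance
    Properties:
      InstanceType: t3.micro
      ImageId: ami-0123456789abcdef0   # placeholder AMI ID
      SubnetId: !Ref AppSubnet
```

Feeding this file to the provisioning tool produces the same infrastructure every time it is run, which is precisely the repeatability that hand-built environments lack.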
The tooling landscape used to perform these steps is highly fragmented, and strategies often use an opinionated, best-of-breed approach with a different tool at each of the three steps. For example, a team might use CloudFormation to set up the virtual machines and connect them to the network, Chef to configure and secure the virtual machines, and Docker to load the application into an isolated container.
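The deployment step can be expressed declaratively as well. Below is a minimal Docker Compose-style file, kept in YAML for consistency with the examples above; the image names, port mapping, and connection string are placeholders.

```yaml
# Minimal Docker Compose-style deployment sketch (illustrative only).
services:
  webapp:
    image: myorg/webapp:1.4.2          # placeholder application image
    ports:
      - "8080:80"                      # expose container port 80 on host port 8080
    environment:
      DATABASE_URL: postgres://db:5432/app   # placeholder connection string
    depends_on:
      - db
  db:
    image: postgres:15
    volumes:
      - dbdata:/var/lib/postgresql/data
volumes:
  dbdata:
```

Because the deployment is described in a file rather than a sequence of manual commands, it can be versioned and reviewed alongside the provisioning and configuration definitions.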
There is no “correct” way to set up your automation stack – this will depend on the limitations of the tools and the needs of the organization – but it should be understood that adopting this technology will invariably change the way the organization manages IT work.
Anticipating Change
IaC presents us with a large and increasingly complex software-defined layer that is used to perform infrastructure management functions. It is important to note, however, that what becomes software-defined here is not the infrastructure itself, although software-defined infrastructure is a prerequisite for IaC.
What becomes software-defined with Infrastructure as Code are the systems and processes that are used to manage the infrastructure, such as asset management, change management, configuration management, and more. These functions can now be emulated in code.
This can create major challenges for traditional service management strategies – the skills, roles, responsibilities, methods and practices used to manage infrastructure change considerably. On the other hand, it creates great opportunities by providing a catalyst to launch DevOps initiatives and increase the scope of Agile practices across the value stream.
By understanding IaC as a software-defined technology, we can gain insight into its impact on the enterprise. In next week’s article we will examine IaC through the lens of software definition, and look more closely at the challenges and opportunities of this driver of change and transformation in the software supply chain.