Infrastructure as code builds on software-defined technologies to bring more speed and consistency to IT infrastructure provisioning and configuration. While IaC enables an enterprise to quickly provision infrastructure to support an application deployment at almost any scale, it relies on code -- which means testing is essential.
Use these strategies to validate an IaC deployment before it goes live.
Infrastructure-as-code testing strategies
IaC does not typically involve a traditional programming language, such as C++ or Java. Instead, IaC usually relies on more straightforward coding that uses a description language -- often a human-readable format such as JSON or YAML -- to delineate the desired environment and configuration. When developers or administrators execute that code, an IaC-capable platform parses the description file and uses automation and orchestration to perform the associated provisioning and configuration tasks. Many configuration management and provisioning platforms, such as Terraform, Chef, Puppet, SaltStack, Ansible, Juju and AWS CloudFormation, offer IaC functionality.
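To make the description-file idea concrete, here is a minimal sketch that parses a platform-neutral JSON description. The schema and resource names are invented for illustration; real platforms such as Terraform or CloudFormation each define their own, much richer format.

```python
import json

# A hypothetical, platform-neutral IaC description. Real platforms
# define their own schemas; this one exists only for illustration.
DESCRIPTION = """
{
  "resources": [
    {"type": "vm", "name": "web-1", "size": "medium"},
    {"type": "network", "name": "frontend-net", "cidr": "10.0.0.0/24"}
  ]
}
"""

def parse_description(text):
    """Parse the description file and return the declared resources."""
    return json.loads(text)["resources"]

for resource in parse_description(DESCRIPTION):
    print(resource["type"], resource["name"])
```

An IaC platform does essentially this at a larger scale: it reads the declared state, then drives automation to make the environment match it.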
As with any software instructions, thoroughly test IaC description files before production use. Testing is a vital opportunity to check the file for errors and oversights, and to gauge how an IaC deployment affects the IT environment. For example, if the file implements a new or different configuration, validate the new configuration for proper operation, performance, security and compliance.
Testing also enables IT staff to experiment with special or edge cases and build confidence in the IaC codebase, which often consists of diverse files that perform specific tasks. Approach infrastructure-as-code testing with the same tactics and strategies as any other software project. Examples are listed below, though it won't be necessary to perform all these tests for every IaC file change.
Static or style checks. One of the easiest ways to check an IaC file is simply to read it and verify that the file's contents meet the established criteria for readability, format, variable names and commenting. This kind of style checking does not validate the code's functionality, but provides a useful sanity check to confirm it meets fundamental style and quality requirements.
Perform these checks manually, or use static code analysis, lint testing or similar tools to automate the code review. The choice of tool usually depends on the language or platform on which the IaC file runs. For example, a tool such as RuboCop checks whether Ruby code meets the criteria outlined in the Ruby Style Guide, while StyleCop, an open source static code analysis tool, checks C# code for conformance to Microsoft's .NET Framework Design Guidelines.
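A style check along these lines is straightforward to automate. The sketch below assumes a hypothetical JSON description format and two invented rules -- lowercase kebab-case resource names and a required description field; real linters such as RuboCop or StyleCop enforce far richer rule sets.

```python
import json
import re

# Hypothetical style rules: resource names must be lowercase kebab-case,
# and every resource needs a "description" field for readability.
NAME_RE = re.compile(r"^[a-z][a-z0-9-]*$")

def lint(description_text):
    """Return a list of style violations; an empty list means the file passes."""
    problems = []
    for res in json.loads(description_text).get("resources", []):
        name = res.get("name", "")
        if not NAME_RE.match(name):
            problems.append(f"{name!r}: name is not lowercase kebab-case")
        if "description" not in res:
            problems.append(f"{name!r}: missing description field")
    return problems

good = '{"resources": [{"name": "web-1", "description": "front end"}]}'
bad  = '{"resources": [{"name": "Web_1"}]}'
print(lint(good))  # → []
print(lint(bad))
```

Checks like these catch readability and convention problems early, before any functional testing runs.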
Unit tests. Beyond an assessment of format and style, validate the functionality of each IaC file. Unit testing is often the simplest and fastest approach. As with most software, IaC is rarely a single file, but rather a series of files that serve as units or components. Each unit performs a specific job, and developers use tools within an IaC platform -- such as a Chef cookbook or an Ansible playbook -- to string the units together and invoke a complete provisioning and configuration process. When an IaC unit file is created or changed, execute that unit file alone in a test environment to validate proper operation.
This focused form of testing enables teams to isolate the cause-and-effect of any defect for a specific unit. However, a unit testing environment typically does not reflect production, so the results do not reveal how the unit file will interact with other IaC files or the overall workflow.
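A unit test for a single IaC unit might look like the following pytest-style sketch. Here `render_unit` and its output schema are hypothetical stand-ins for a real cookbook recipe or playbook task; the point is that the unit is exercised alone, with assertions on its output rather than on a full deployment.

```python
def render_unit(params):
    """A hypothetical IaC 'unit' that emits the description for one VM."""
    return {
        "type": "vm",
        "name": params["name"],
        "size": params.get("size", "small"),  # assumed platform default
        "tags": ["managed-by-iac"],
    }

# Unit tests: exercise this one unit in isolation, away from the workflow.
def test_defaults_applied():
    assert render_unit({"name": "web-1"})["size"] == "small"

def test_override_respected():
    assert render_unit({"name": "web-1", "size": "large"})["size"] == "large"

test_defaults_applied()
test_override_respected()
print("unit tests passed")
```

Because the unit runs alone, a failure here points directly at this file, which is exactly the cause-and-effect isolation unit testing provides.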
System tests. Once an IaC file passes individual unit testing, put it into the broader process with other IaC files to validate the complete workflow. System testing is often far more involved than unit testing, because developers must perform tests on every workflow that involves the unit file.
To ensure that results match expectations in realistic situations, run system tests on production systems, but do not touch live data or integrate with live production applications. In some cases, IT administrators might provision staging infrastructure on which they can perform infrastructure-as-code testing in relative isolation. System tests also gather basic provisioning and configuration performance metrics to measure any effects of IaC file and workflow changes.
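The workflow-plus-metrics idea can be sketched as follows. The unit functions and the shared state dictionary are invented placeholders for real provisioning steps; a real system test would drive the IaC platform itself and pull timings from its run output.

```python
import time

def provision_network(state):
    state["network"] = "frontend-net"

def provision_vm(state):
    # A system test catches ordering problems like a missing dependency.
    assert "network" in state, "vm unit requires a network first"
    state["vm"] = "web-1"

def run_workflow(steps):
    """Run each unit in order, collecting simple per-step timing metrics."""
    state, metrics = {}, {}
    for step in steps:
        start = time.perf_counter()
        step(state)
        metrics[step.__name__] = time.perf_counter() - start
    return state, metrics

state, metrics = run_workflow([provision_network, provision_vm])
print(state, metrics)
```

Capturing per-step timings on every run makes it easy to spot when a file or workflow change slows provisioning down.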
Integration tests. In typical software development and testing scenarios, integration tests are the most involved and comprehensive form of prerelease assessment, and include the most sophisticated attributes of a deployment, such as backup, high availability and failover. In IaC coding environments, integration tests generally simulate the creation of full workflow deployments -- they might provision and configure infrastructure, deploy and configure an application, and connect that application to others within the environment. The goal is to ensure that the complete environment behaves as anticipated.
In effect, integration testing is as close as developers can get to live production. Testing at this level is time-consuming and might last for days or even weeks.
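An integration-level check might assemble the whole environment and then assert on end-to-end behavior, as in this simulated sketch. Every function here is a stand-in; in practice the steps would drive the actual platform and application rather than Python stubs.

```python
# Simulated end-to-end check: provision, deploy, connect, then verify the
# assembled environment behaves as expected as a whole.

def provision():
    return {"network": "frontend-net", "vm": "web-1"}

def deploy_app(env):
    env["app"] = {"host": env["vm"], "status": "running"}
    return env

def connect(env, upstream):
    # Wire the app to another service in the environment.
    env["app"]["upstream"] = upstream
    return env

env = connect(deploy_app(provision()), upstream="billing-api")

# Integration-level assertions target the complete environment, not one unit.
assert env["app"]["status"] == "running"
assert env["app"]["upstream"] == "billing-api"
print("integration checks passed")
```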
Blue/green tests. Sometimes referred to as A/B testing, this type of beta test places the new (green) instance into production alongside the old (blue) instance. End users can run either instance, or they are grouped and redirected to the appropriate instance. This is a common practice for enterprise applications, as it enables live testing and validation of an updated application, yet maintains the current version for support and, if needed, rollback.
It is unusual for IaC workflows to use blue/green testing. In most cases, if the updated workflow deploys and functions as expected in high-level integration tests -- or even in system testing -- there is little reason to beta test it.
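The grouping-and-redirect mechanism can be sketched with a deterministic hash: each user is mapped to a stable bucket, and a configurable fraction of buckets lands on the green instance. The routing function below is an assumption for illustration, not how any particular load balancer implements it.

```python
import hashlib

def route(user_id, green_fraction=0.2):
    """Deterministically send a fraction of users to the green instance."""
    digest = hashlib.sha256(user_id.encode()).digest()
    bucket = digest[0] / 256  # map first byte to [0, 1)
    return "green" if bucket < green_fraction else "blue"

# The same user always lands on the same instance, so sessions are stable.
print(route("alice"), route("alice"))
```

Because the mapping is deterministic, rolling back is just a matter of setting the green fraction back to zero.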
The importance of monitoring in IaC
Monitoring plays an important role in infrastructure-as-code testing. While application monitoring emphasizes performance and business-centric KPIs, IaC monitoring is primarily concerned with reports, alerts and logs.
When an IaC platform such as Chef, Puppet or Ansible provisions infrastructure and performs a deployment, it also produces logs that track activity and report errors. These logs form the foundation of troubleshooting and auditing, and ensure the environment works properly and adheres to established business standards.
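A minimal log scan in that spirit might look like this. The log format is invented for illustration, since each platform structures its run output differently (Ansible task output, Chef run logs and so on).

```python
# Invented run-log format; real platforms each define their own structure.
RUN_LOG = """\
2024-05-01T10:00:01 INFO  provision network frontend-net
2024-05-01T10:00:04 INFO  provision vm web-1
2024-05-01T10:00:09 ERROR configure vm web-1: package repo unreachable
"""

def failed_steps(log_text):
    """Pull out ERROR lines so a run can be audited or alerted on."""
    return [line for line in log_text.splitlines() if " ERROR " in line]

for line in failed_steps(RUN_LOG):
    print(line)
```

Feeding a filter like this into an alerting pipeline turns raw run logs into the reports and alerts that IaC monitoring depends on.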
Infrastructure-as-code testing might rely on some performance monitoring to ensure that the provisioned and configured infrastructure -- and the workload deployed within that infrastructure -- performs within acceptable levels. From an IaC perspective, performance monitoring is an indirect assessment of the provisioned infrastructure. For example, if an IaC file changes the provisioning or configuration of a network connection, performance testing can measure the effect of those changes.
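Such an indirect assessment can be as simple as comparing samples captured before and after a change. The latency numbers and the 5% tolerance below are hypothetical; the pattern is what matters.

```python
import statistics

# Hypothetical latency samples (ms) captured before and after an IaC
# change to a network connection's configuration.
before = [12.1, 11.8, 12.4, 12.0, 11.9]
after = [10.2, 10.5, 10.1, 10.4, 10.3]

def regression(before, after, tolerance=0.05):
    """Flag the change if median latency worsened by more than the tolerance."""
    return statistics.median(after) > statistics.median(before) * (1 + tolerance)

print("regression detected" if regression(before, after) else "change OK")
```

Run as a gate after each IaC deployment, a check like this catches configuration changes that quietly degrade the provisioned infrastructure.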