Cloud and DevOps shops are experimenting with the infrastructure creation capabilities of Terraform. These best practices help ensure that the infrastructure-as-code tool remains scalable and supportable down the road.
HashiCorp Terraform builds infrastructure as code (IaC): easy-to-deploy, repeatable and predictable network infrastructure. Terraform makes it possible to give a team member, such as a developer or tester, a copy of the network environment as it exists, rather than an approximation or mock-up.
Administrators who experiment with the IaC tool should learn Terraform features and capabilities on a small scale, then apply best practices to deploy it more widely in a streamlined and hassle-free manner. The following tips make using Terraform more efficient, effective and secure.
Administrators should plan how they will organize Terraform code. Terraform, like most platforms, understands variables. Administrators can fall into the bad habit of throwing a variable into the current Terraform file, with the intention to clean it up and fix issues at a later date. Terraform best practices dictate that the user place variables in an external file. The specifics of variable definitions are covered in Terraform's documentation.
To properly implement variables, use TFVARS files, and specify them on the Terraform command line with the appropriate variable assignments. This also keeps sensitive values, such as API keys and private keys, separate from other code and secure.
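As a minimal sketch of this pattern, variable declarations live in one file while the actual values sit in a separate TFVARS file that stays out of the repository. The file names and values here are hypothetical:

```hcl
# variables.tf -- declares the variables; no real values appear here
variable "region" {
  type        = string
  description = "Cloud region to deploy into"
}

variable "api_key" {
  type        = string
  description = "Third-party API key, supplied via a TFVARS file kept out of version control"
}
```

```hcl
# secrets.tfvars -- excluded from the repository, e.g., via .gitignore
region  = "us-east-1"
api_key = "replace-with-real-key"
```

The values are then passed in at run time with a command such as `terraform apply -var-file="secrets.tfvars"`.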
The code is cleaner with variables management, and the administrator and developers have an organized repository of their Terraform variables.
Variables enable simpler versioning when using Terraform. It becomes simple to deploy different instance configurations, such as a set of files for quality assurance (QA), one for preproduction staging and one for production. These files are all versions of the same code with different variables.
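The per-environment approach can be sketched as one TFVARS file per deployment target, each assigning different values to the same variables. The variable names and sizes below are illustrative only:

```hcl
# qa.tfvars -- small footprint for quality assurance
instance_type  = "t3.micro"
instance_count = 1
```

```hcl
# prod.tfvars -- production-scale values for the same code
instance_type  = "m5.large"
instance_count = 4
```

Deploying QA versus production then differs only in the file passed on the command line: `terraform apply -var-file="qa.tfvars"` versus `terraform apply -var-file="prod.tfvars"`.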
Terraform best practices for security and stability
Variable management prevents Terraform users from accidentally disclosing API credentials in the source code file. It's just one way to make Terraform secure and stable for enterprise-wide use.
Terraform is only as secure as the administrator running it and the security policies enforced in IT operations. Keep private data in files with the appropriate security policies applied. If key credentials are uploaded to public cloud, use identity and access management policies, and turn on logging.
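Terraform also lets the administrator mark a variable as sensitive, which redacts its value in plan and apply output. A minimal example, with a hypothetical variable name:

```hcl
# Terraform redacts sensitive values in CLI output,
# though they still appear in the state file -- secure the state accordingly
variable "db_password" {
  type      = string
  sensitive = true
}
```

Note that marking a variable sensitive does not encrypt it; the value is still recorded in state, which is one more reason to apply strict access policies to wherever state is stored.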
A DevOps environment always carries the danger of a team member overwriting the environment that you are currently deploying. Overwrites have undesirable consequences, from the awkward to the cataclysmic. To prevent disaster, use an appropriate access control mechanism, such as Amazon DynamoDB, as a lock table. It prevents two administrators from deploying or modifying the same code at the same time. Update management is a Terraform best practice for application stability and something to implement broadly when scaling DevOps tasks.
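With an S3 remote state backend, state locking via DynamoDB is a built-in option. A sketch of the backend configuration, with hypothetical bucket and table names (the DynamoDB table must have a string partition key named LockID):

```hcl
# Hypothetical backend config: S3 holds the state, DynamoDB provides the lock
terraform {
  backend "s3" {
    bucket         = "example-terraform-state"
    key            = "networking/terraform.tfstate"
    region         = "us-east-1"
    dynamodb_table = "terraform-locks"
    encrypt        = true
  }
}
```

With this in place, a second administrator who runs terraform apply against the same state receives a lock error instead of silently overwriting the first deployment.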
Terraform modules are self-contained configuration packages, prebuilt or created by the user. Administrators who learn Terraform might be tempted to rely on modules to excess. Modules are a good thing, but you can have too much of a good thing. The more modules you add to an infrastructure build, the slower and more complex the code, the more time it takes to execute and the more difficult it becomes to debug. Modules represent a balancing act for the administrator.
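Calling a module is straightforward, which is part of why overuse is tempting. A minimal call to a hypothetical local module:

```hcl
# Each module call adds configuration the tool must resolve at plan time,
# so weigh the abstraction against the added complexity
module "network" {
  source     = "./modules/network"
  cidr_block = "10.0.0.0/16"
}
```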
Modules must follow correct I/O methods, exposing the internal API rather than an alternative, less robust method of processing I/O. Deviating from Terraform best practices with modules can bite back when least expected. For example, a version upgrade of the tool could depreciate the method I/O in use.
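A module's declared variables and outputs are its API; callers should touch nothing else. A sketch, with hypothetical names:

```hcl
# modules/network/variables.tf -- the module's input API
variable "cidr_block" {
  type        = string
  description = "CIDR range for the VPC"
}

# modules/network/outputs.tf -- the module's output API
output "vpc_id" {
  value = aws_vpc.main.id
}
```

A caller then reads only the declared output, such as `module.network.vpc_id`, rather than reaching into the module's internal resources, which can break silently on upgrade.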
Terraform isn't limited to modules. Advanced users can deploy a binary (executable) file to gather external facts. This capability opens the door to extensions: learn to create them, and you can gather data for which no Terraform module currently exists.
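One way to do this is Terraform's built-in external data source, which runs a program and consumes its output. A sketch with a hypothetical script name:

```hcl
# The program must read a JSON object from stdin and print a flat JSON
# object of string values to stdout; the result is then available as
# data.external.facts.result
data "external" "facts" {
  program = ["python3", "${path.module}/gather_facts.py"]
}
```

Any language works for the program, as long as it honors the JSON-in, JSON-out contract.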
Additional recommendations for enterprise deployment
As you learn Terraform, apply these general and tool-specific practices to avoid failure and confusion at scale.
Backups are critical, yet many administrators fail to keep a copy of the Terraform state stored remotely. It can reside in AWS Simple Storage Service, Microsoft Azure Blob storage or elsewhere. Backups are not difficult, and there is no reason not to make them.
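For an Azure-hosted team, remote state in Blob storage is a one-block configuration. The resource group, account and container names below are hypothetical:

```hcl
# Hypothetical remote state in Azure Blob storage
terraform {
  backend "azurerm" {
    resource_group_name  = "tfstate-rg"
    storage_account_name = "tfstateaccount"
    container_name       = "tfstate"
    key                  = "prod.terraform.tfstate"
  }
}
```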
Terraform files should be version-controlled using a source code management platform, such as Git. Version control might seem like overkill for tiny projects, but it's better to get in the practice from the start. As the projects grow larger, the team can fork, roll back and merge them as needed. This kind of flexibility is essential in complex, multiadministrator environments. Modules can also be stored in Git so that the correct build version is pulled into a project on demand.
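Terraform module sources support Git URLs with a ref parameter, so a project can pin a module to a specific tag. A sketch with a hypothetical repository and tag:

```hcl
# Pins the module to the v1.2.0 tag; upgrades become an explicit code change
module "network" {
  source     = "git::https://example.com/org/terraform-modules.git//network?ref=v1.2.0"
  cidr_block = "10.0.0.0/16"
}
```

Pinning the ref means a rebuilt project always pulls the same module version, instead of whatever happens to be on the default branch that day.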
Some administrators also recommend using one directory per project. All the configuration files, modules and other required files are stored within the working folder. There is no need to traverse across long file system paths to pull down files. Complement this best practice by creating different TFVARS files for dev, production and QA, as mentioned earlier.
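Putting these conventions together, a project directory might look like the following hypothetical layout:

```
project/
├── main.tf
├── variables.tf
├── outputs.tf
├── dev.tfvars
├── qa.tfvars
├── prod.tfvars
└── modules/
    └── network/
```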
There are many ways to optimize implementation. As you learn and use Terraform more, always envision how a current practice will affect the way builds get composed in the future.