Back in the dim, dark past, much of the IT team's focus was on managing changes to IT resources. Some of this involved software (operating systems and drivers), but most of it happened at the server level, to determine whether any change to the hardware would cause problems further up the stack.
To deal with this, IT staff embraced a range of approaches, from formalized processes to "fingers crossed." Some of those formalized methods were based on spreadsheets, which were only as good as the data that went into them. Others strove to be more process-based. The Information Technology Infrastructure Library (ITIL) established highly prescriptive rules around how to deal with any changes to an IT platform. Those who followed these processes to the letter generally enjoyed success; those who tried to bend them to better fit their own needs generally did not.
Configuration management history shows that software was still largely left out of the equation, though. At that time there was little functional change in software; the focus was more on patching errors in large monolithic applications. Major upgrades introduced new functionality, with lots of testing prior to rollout to ensure that the changes would not cause complications. The best approaches available were the relatively manual Revision Control System and the Software Engineering Institute's Capability Maturity Model.
ITIL also revealed that hardware configuration was only part of the picture. Software was becoming more dynamic, and organizations required faster changes to deal with competition and shifts in the market. ITIL's architecture, designed around hardware, lacked the granularity to handle rapid changes to the software layered on top of that hardware.
Managing change within the software world was still pretty much manual. Changes were nominally documented in the code, so that developers could see what was done -- and why -- when they went back into the code to deal with a problem.
Unfortunately, many changes were undocumented or badly documented. Other approaches focused on managing software versions and changes, such as Andersen Consulting's Foundation, were cumbersome and proprietary. Some enterprises adopted project management methods, such as PRojects IN Controlled Environments (PRINCE2), to manage large software projects and control changes to them.
These big, heavily process-bound approaches demonstrated the need for software configuration management systems, which would free developers and operations staff from filling out forms to document everything about a change: the need for it, creation and testing approaches, and how to measure its results.
IT and software configuration management
Agile software management began to create a different approach to application development. Instead of making large changes over long periods of time and then rolling them out (waterfall development), developers could rapidly create and roll out small changes as sets of incremental functional improvements. This approach became known as continuous integration and continuous delivery. Agile also emphasized code reuse -- the use of libraries of functional code -- which enabled developers to avoid the constant reinvention of the wheel that had plagued software development for decades.
These developments were a turning point in configuration management's history. Agile, in particular, changed the game. Developers could adjust code far more rapidly in response to the organization's needs. Agile's capabilities applied controls over how to implement and manage code changes. Still, IT seemed to be moving too fast for Agile to keep up.
Virtualization of the underlying hardware, first through the use of virtual machines and then through cloud-based platforms and containers, changed the very nature of software. Monolithic applications could be broken down into services, called by external systems, or replaced with microservices that could be aggregated into composite apps. A change in a single service could now impact many other services that call or respond to it.
Virtualization also has minimized and even eradicated software's physical resource constraints.
Software configuration management has matured rapidly. A raft of new vendors and open source systems are entering the market with capabilities to manage software through its lifecycle in organizations' modern environments.
Configuration management and DevOps
Alongside cloud and microservices, the biggest driver for configuration management is DevOps -- the support for streamlined processes to quickly move code from the development environment through to the operational platform, with suitable feedback loops.
DevOps builds upon the Agile approach to bring together the different teams involved with the software lifecycle. Feedback loops enable developers to more rapidly understand problems that arise in the operations space. Operations staff have a clearer view of the changes that come over from the development environment.
DevOps tools must manage software versioning and integrate into test and operations environments. Built-in automated feedback loops must support immediate remediation to a known good state when a provisioning into the operational environment fails, which is predicated on solid software configuration management. These tools should also enable engineers to manage an internal library of developed functions, as well as search external libraries for needed functions and services. Once integrations are made to external services, everything must be managed to ensure consistency.
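The remediation loop described above can be sketched in a few lines. This is a hypothetical illustration, not any particular tool's API: the `deploy`, `release` and health-check names are all invented for the example, which shows a release attempt falling back to the last version the configuration management system recorded as good.

```python
# Minimal sketch of an automated rollback loop: try a new version,
# and if its health check fails, redeploy the last known-good version.
# All names here are illustrative, not a specific DevOps tool's API.

class DeploymentError(Exception):
    pass

def deploy(version: str, healthy: bool) -> None:
    """Stand-in for pushing a build to the operational environment."""
    if not healthy:
        raise DeploymentError(f"health check failed for {version}")

def release(new_version: str, last_good: str, healthy: bool) -> str:
    """Attempt a release; remediate to the known-good state on failure."""
    try:
        deploy(new_version, healthy)
        return new_version        # new version becomes the recorded baseline
    except DeploymentError:
        deploy(last_good, True)   # automated rollback to the recorded baseline
        return last_good

print(release("2.1.0", "2.0.3", healthy=False))  # -> 2.0.3
```

The key point is that the "known position" is data held by the configuration management system; the pipeline only works if that record is accurate.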
Modern software configuration management: Infrastructure as code and beyond
Modern software configuration management must expand to cover more areas, such as managing platform-agnostic builds with additional libraries for containers and virtual machines. Configuration management systems must apply services such as patches and upgrades, but also provide the capabilities to package services and composite applications for automated rollout. Full monitoring must quickly identify conflicts before they become problems. End-of-life management of all elements under control is also important: software that has reached end of life in the operational environment must be shut down to free up resources, while old and unused functions, services and processes across a DevOps environment must be suitably archived to prevent misuse.
Here we find the latest turn in configuration management's evolution, toward the concept of infrastructure as code (IaC). Here, the availability and capabilities of underlying hardware are abstracted and managed via software. Containerized services, virtual machines and composite applications are all defined through software, and definition files dictate the resources applied at all stages of the workload's life.
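The core mechanic of IaC can be shown in a short sketch. This is an assumption-laden illustration rather than any real tool's implementation: resources are declared as plain data, and a "plan" step diffs the declared state against the running environment to decide what to create, update or destroy, much as IaC tools do before applying changes.

```python
# Illustrative IaC sketch: infrastructure is defined as data, and a
# reconciler computes the actions needed to make the running
# environment match the definition. All resource names are hypothetical.

desired = {
    "web-vm": {"type": "vm", "cpus": 4},
    "cache":  {"type": "container", "image": "redis:7"},
}
actual = {
    "web-vm":  {"type": "vm", "cpus": 2},          # drifted from the definition
    "old-job": {"type": "container", "image": "batch:1"},
}

def plan(desired: dict, actual: dict) -> dict:
    """Diff desired vs. actual state, like the 'plan' step in IaC tools."""
    return {
        "create":  [n for n in desired if n not in actual],
        "update":  [n for n in desired if n in actual and desired[n] != actual[n]],
        "destroy": [n for n in actual if n not in desired],
    }

print(plan(desired, actual))
# -> {'create': ['cache'], 'update': ['web-vm'], 'destroy': ['old-job']}
```

Because the definition files are themselves software artifacts, they fall under the same version control and change management as application code -- which is what ties IaC back to software configuration management.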
We have come a long way since the early days of hardware-focused change management. Configuration management is now extremely complex, built for environments where everything is increasingly run and managed through software. Software configuration management is not only the engine that powers this; it has become the system that manages itself. The final question is then -- is that a good idea?
There remains a strong need for human oversight. A good dashboard with sufficient data analytics can provide views into any potential problems, allowing for human intervention if necessary.
To an extent, configuration management uses rules-driven AI in orchestration. As we move to more powerful AI, the rules will change to meet the reality of what is happening across a platform. The feedback loops within the whole system can then be used not only to adapt how changes are created, provisioned and managed, but also to adapt how something like IaC operates. Human oversight will be required for a good period of time, but eventually, AI-powered orchestration (with software configuration management embedded in it) may well take over.