Fully automated DevOps pipelines are easier to put together than ever before, but full end-to-end automation may be more work than it's worth for some IT shops.
Automating the DevOps pipeline from application development on a laptop to deployment into a production data center environment has been challenging for even the largest enterprises, but it's now an attainable goal thanks to the spread of APIs and the growing popularity of highly customizable open source software.
The rise of representational state transfer (REST) APIs as a kind of lingua franca for interoperability between different automation tools is the main driver of progress in this area, according to Peter Richards, a recently retired financial services industry technical executive who has worked within the biggest banks in the world.
APIs are "not perfect yet, but now everybody's using REST, and that means it's so much easier" to connect software and services from different vendors, cloud providers and open source communities than it was a decade ago, when no one could agree on machine-readable ways to describe applications, Richards said.
Adopting best-practice processes also has advantages when assembling automated toolchains, according to Don Luchini, senior software engineer at an energy management software company in the Northeast.
"There are lots of different build systems for code that just compile your code and upload it somewhere," he said. "But when you consider what we're doing with CI, every single project that we have in our portfolio needs to be able to build itself and needs to be able to test itself."
Software makers are rapidly developing tools for application development, continuous integration and continuous delivery, and infrastructure cluster management. Abstracting out the generic tasks that every project requires allows IT shops to "much more rapidly take on some of these new tools," Luchini said.
Easier alignment between components and tools is all well and good, Richards noted, but there's still a fundamental discrepancy between the application development side of the pipeline and the data center infrastructure deployment side.
"In all honesty it's all the same thing," he said. "It's just that the approaches and language have not yet come together."
Some DevOps consultants say they don't even try to create end-to-end DevOps pipeline automation for this reason.
In some environments, it's better to "reintroduce some of the silos that were sometimes there for good reason," said Elliot Murphy, CEO of Kindly Ops LLC, a managed DevOps service based in Portland, Maine. "Trying to make a single person or handful of persons understand and excel at the entire stack is just an unreasonable order."
DevOps pipeline mileage may vary
At some technology companies, full-stack automation has been unquestionably the way to go, but those companies tend to be a bit ahead of the curve.
SimpliSafe Inc., a security system supplier in Boston, recently deployed end-to-end automation with Spinnaker, a tool developed by engineers at Google and at "unicorn" poster child Netflix.
Spinnaker creates base machine images ("AMIs" in Amazon Web Services parlance) that include Linux packages containing application artifacts as "hermetically sealed" units of deployment, said Barton Nicholls, DevOps engineer at SimpliSafe. With these images ready to be spun up as Auto Scaling groups expand and shrink, application deployment can scale along with the cloud.
Spinnaker also allows granular controls over visibility into and management of the application deployment environment, a feature other, similar tools lack, Nicholls said. SimpliSafe also uses GoCD for its continuous integration process, which automatically kicks off Spinnaker jobs when tests are done. Finally, behind the scenes, AWS CloudFormation is used to provision infrastructure -- what Nicholls calls the "roads and roundabouts" that applications use.
"All of these tools have API endpoints, which is increasingly the method of choice," he said. "Increasingly, people working on tools are looking at trying to do one thing really well versus trying to do everything and going to the API model."
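The API-driven handoff Nicholls describes — a CI server kicking off a Spinnaker deployment once tests pass — can be sketched as a plain HTTP call to Spinnaker's Gate service, which exposes a manual-trigger endpoint per application and pipeline. This is a minimal sketch, not SimpliSafe's actual setup: the Gate URL, application name, pipeline name and parameters below are all hypothetical placeholders.

```python
import json
from urllib import request

def build_trigger(gate_url, application, pipeline, parameters=None):
    """Build the POST request that would manually trigger a Spinnaker
    pipeline via the Gate API (POST /pipelines/{application}/{pipeline})."""
    url = f"{gate_url}/pipelines/{application}/{pipeline}"
    payload = {"type": "manual", "parameters": parameters or {}}
    return request.Request(
        url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
        method="POST",
    )

# The kind of call a CI job might make after its test stage succeeds
# (hypothetical host, application and pipeline names):
req = build_trigger("https://gate.example.com", "alarm-backend", "deploy-prod",
                    {"version": "1.4.2"})
# Sending it would be: request.urlopen(req)
print(req.full_url)
```

Because every tool in the chain speaks HTTP in this fashion, swapping GoCD for another CI server changes only who makes the call, not the contract between the tools.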
APIs, then, make it possible to build toolchains out of multiple "best of breed" products rather than trying to stretch one toolset to accommodate the entire DevOps pipeline.
While this works well in greenfield environments, it can be a challenge even for advanced companies to fold not only the development and deployment pipeline for new cloud apps, but also legacy applications and deployment methods in on-premises data centers, into the same automated system.
At Concur Technologies Inc., a subsidiary of SAP, the cloud-based receipts management system is orchestrated using a combination of Kubernetes, Docker and HashiCorp's Terraform. Immutable infrastructures are stood up and torn down at will using containers as the basis for deployment.
But that's just one application of many. The company is still working to adapt Terraform to accommodate VMware-based virtualization infrastructure in addition to Kubernetes and Docker containers, according to Dale Ragan, senior software engineer for the Bellevue, Wash.-based company.
"Our biggest challenge is the hybrid model," he said. "It's the hardest part in working with our existing data centers to put in some of this new tooling."
It also coincides with the company's effort to break a huge application apart into multiple smaller services and to realign staff into end-to-end teams.
"Before, we were pretty separated as far as developers and operations," Ragan said. "We're starting to really take advantage of end to end teams, and then those end to end teams control their whole service from development to production."
The chief challenge here is changing culture and mindsets, but "we're getting there," he said.
Root cause analysis can be a DevOps pipeline snag
While connecting tools together using API endpoints is relatively straightforward, root cause analysis becomes very difficult in certain scenarios as DevOps pipelines grow more automated and complex -- such as one encountered recently by Chris Moyer, vice president of technology with ACI Information Group, a web content aggregator based in New York, and a TechTarget contributor.
Moyer tried to push code into the continuous deployment pipeline, but the job kept failing because GitHub was down and the pipeline couldn't connect to it.
"It took probably 10 or 15 failed builds before I got it to the point where it was actually telling me why it failed," Moyer said. Making sure developers and operations people are appropriately notified of issues when they occur isn't always straightforward with so many moving parts.
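One common mitigation for the kind of opaque failure Moyer hit is a pre-flight step that checks each upstream dependency before the build proper starts, so the job fails once with a human-readable reason rather than a dozen times with a cryptic one. This is a generic sketch, not ACI's pipeline; the dependency names and hosts in the comment are illustrative only.

```python
import socket

def preflight(dependencies, timeout=3.0):
    """Try to reach each upstream dependency (name, host, port) and
    return a human-readable message for every one that is unreachable."""
    failures = []
    for name, host, port in dependencies:
        try:
            with socket.create_connection((host, port), timeout=timeout):
                pass  # reachable; connection is closed immediately
        except OSError as err:
            failures.append(f"{name}: cannot reach {host}:{port} ({err})")
    return failures

# In a CI job this would run before the build stage, e.g. (illustrative):
#   problems = preflight([("GitHub", "github.com", 443),
#                         ("npm registry", "registry.npmjs.org", 443)])
#   if problems:
#       raise SystemExit("Build aborted:\n" + "\n".join(problems))
```

The point is not the connectivity check itself but where the message surfaces: the operator sees "GitHub: cannot reach github.com:443" in the first failed build, not the fifteenth.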
This is why Murphy increasingly advocates that organizations observe a "natural dividing line" between the host operating system and applications deployed using containers.
"If you're changing the revision of the packages that you have installed in Ubuntu because there's a kernel update, that is a concern at a different layer than updating the dependencies in your Rails application," Murphy said.
Like many concepts within DevOps, the usefulness of end-to-end pipelines lies in the eye of the beholder. For organizations such as SimpliSafe, the up-front investment in an end-to-end pipeline makes for easier, more rapid app development down the road.
"Before you get there, for sure there's initial hard work to be done," Nicholls said. "But typically, they're much more stable and much more predictable, which down the line means a hell of a lot less work."