Automation is all the rage in IT organizations, with site reliability engineering and infrastructure as code permeating even fairly entrenched large enterprise IT workflows. As with any hyped technology, however, even IT infrastructure automation's supporters urge caution and critical thinking before issuing the rallying cry to script everything.
IT infrastructure automation proponents tend to think “If we just automated it because humans are erratic, it would work!,” said David Woods, who spoke at the 2016 Velocity conference in New York. But “that’s not how it works in an adaptive system,” he added. Woods is a professor, lead for the Initiative on Complexity in Natural, Social, and Engineered Systems, and co-director of Ohio State University’s Cognitive Systems Engineering Laboratory.
It's a natural urge to remove human intervention from machines' functioning, Woods acknowledged, but an operational stack doesn't stay the same. Business goals evolve, and technologies constantly change: applications, operating systems, management tools and tool versions, virtualization layers and infrastructure hardware. An automated process, by contrast, stays exactly the same. Because of this clash between the adaptive and the rigid, an IT organization must balance automation with ongoing change to keep systems running. If everything were automated and left alone, IT could never advance.
“The question ‘When should I automate?’ is pretty fundamental in operations, development and broadly in industry,” said Joseph L. Hellerstein, who worked on IT automation with the IBM Thomas J. Watson Research Center in the early 2000s. He suggested that today’s IT and development teams ask a series of questions before they automate:
- How much change do I expect in the process that we are automating?
- What information must I collect and process to proceed with automation?
- When the automation fails, how difficult will it be to debug and repair? Who can do it?
- How much benefit is accrued from the automation, and how much will it cost in terms of time, resources and materials?
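Hellerstein's last question — benefit versus cost — lends itself to a quick back-of-the-envelope calculation before any code is written. The sketch below is a hypothetical helper (the function name and inputs are illustrative, not from any source): it estimates how many months an automation effort needs to pay for itself, given the hours to build it, the hours per month to maintain it, and the manual hours per month it replaces.

```python
def payback_period(build_hours, maintain_hours_per_month,
                   manual_hours_per_month):
    """Months until an automation effort pays for itself.

    Returns None if the automation never breaks even -- that is,
    it costs more per month to maintain than it saves.
    """
    monthly_savings = manual_hours_per_month - maintain_hours_per_month
    if monthly_savings <= 0:
        return None  # net cost: the case Hellerstein warns about
    return build_hours / monthly_savings

# A script that takes 40 hours to build, 2 hours/month to maintain,
# and replaces 10 hours/month of manual work pays off in 5 months.
print(payback_period(40, 2, 10))  # -> 5.0
```

If the payback period is longer than the expected life of the process being automated (the first question on the list), the automation likely isn't worth building.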
This self-evaluation prevents automation for automation’s sake, or misguided automation that actually adds costs to the project or to daily operations. What these queries shouldn’t do is keep teams from automating.
“Coming from a software engineering background, one high-leverage activity (an activity that will produce a noticeable and meaningful result) that I tend to do is to build tools that reduce manual, repetitive work,” wrote Edmond Lau in a blog post. Lau is the author of The Effective Engineer and an engineer working on user growth at Quip, a software provider in San Francisco. “I’m a little biased, but I’m a firm believer that everyone would benefit knowing a little bit about coding … Don’t do what a machine can do for you.” Robust internal tool development signals that development and IT engineering teams are investing in productivity, he suggests, rather than going through the motions of daily troubleshooting and task performance.
“[IT infrastructure automation] doesn’t have to be this huge sophisticated push-button process like you see at the big companies,” said Matthias Rampke, production engineer at SoundCloud, a music streaming company based in Berlin, Germany. He refers to Internet giants such as Facebook, which is heavily involved with Chef and other automation and standardization tools. However, regular companies with script-savvy IT operations staffers can free up time by offering developers a portfolio of scripts for common IT tasks — what Rampke calls “usable automation.”
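A "usable automation" script in Rampke's sense can be very small. As a hypothetical example (not from SoundCloud — the function and threshold here are illustrative), an ops team might hand developers a self-service script that answers a routine question like "which volumes are nearly full?" instead of fielding that request as a ticket:

```python
import shutil

def disk_usage_report(paths, threshold=0.9):
    """Return (path, usage_fraction) for each path whose filesystem
    is at or above `threshold` (a fraction between 0 and 1).

    A developer can run this directly rather than asking ops whether
    a volume is filling up.
    """
    full = []
    for path in paths:
        usage = shutil.disk_usage(path)
        fraction = usage.used / usage.total
        if fraction >= threshold:
            full.append((path, fraction))
    return full

if __name__ == "__main__":
    # Report any listed filesystem above 90% full.
    for path, fraction in disk_usage_report(["/"]):
        print(f"{path}: {fraction:.0%} full")
```

The value is less in the code itself than in packaging it: a portfolio of such scripts, documented and discoverable, frees ops staff from repetitive requests without a "huge sophisticated push-button" platform.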
And finally, Shade as a Service, a humor account on Twitter, reminds us that “making mistakes is human — automating them is ops” and “automating a broken process does not count as an improvement.”
Meredith Courtemanche is a senior site editor in TechTarget’s Data Center and Virtualization group, with sites including SearchITOperations, SearchWindowsServer and SearchExchange. Find her work @DataCenterTT or email her at [email protected].
Editor’s note: Hellerstein is now senior data science fellow at the eScience Institute and affiliate professor in computer science and bioengineering at the University of Washington, Seattle. Research he conducted in 2005 with IBM colleague Aaron Brown showed that for 15% to 30% of automated software installs, IT operations would have saved money by performing the tasks manually.