Automation is central to DevOps, and what seemed fantastic 20 years ago is reality today. Looking beyond DevOps testing to the big picture, what do machine automation and autonomy actually portend?
Back in 2000, Bill Joy, co-founder and then chief scientist at Sun Microsystems, wrote an article for Wired titled "Why the Future Doesn't Need Us." At the time, Sun Microsystems was a computing powerhouse. Its machines were a prevalent part of the internet ecosystem and its code was everywhere. Sun Microsystems gave the world the Java programming language.
Sun and Joy were anything but trivial, and more than a few eyebrows were raised when Joy made the following assertion:
As society and the problems that face it become more and more complex and machines become more and more intelligent, people will let machines make more of their decisions for them, simply because machine-made decisions will bring better results than man-made ones. Eventually a stage may be reached at which the decisions necessary to keep the system running will be so complex that human beings will be incapable of making them intelligently. At that stage the machines will be in effective control. People won't be able to just turn the machines off, because they will be so dependent on them that turning them off would amount to suicide.
Fast forward to 2016, when noted neuroscientist and philosopher Sam Harris gave a TED Talk in which he stated:
One of the things that worries me most about the development of [artificial intelligence] at this point is that we seem to be unable to marshal an appropriate response to the dangers that lie ahead.
Harris pointed out in the talk that we are on the verge of what the mathematician I. J. Good called the intelligence explosion: the point at which machines, able to process information faster and more accurately than humans ever could, will create superintelligence. Combine Joy's assertion with Harris's, and we'll eventually be completely dependent on machines that are smarter than us and can act faster than us.
Machines might remove us humans en route to efficiency
These machines will be autonomous. Their self-direction will be guided not by an imperative for self-preservation and propagation, but by the goal of doing what they've been created to do in the most efficient way possible. And if a human gets in the way, the human must be removed. Removing humans who block the critical path will not be an emotional action; it'll be a rational one.
Harris used ants as an analogy. Humans don't hate ants, and most don't wish them harm. But if ants are crawling on food or getting in the way of erecting a building, we simply eliminate them. Again, we don't hate the ants; removing them is just a matter of fact. Now take this scenario and substitute machine intelligence for humans, and humans for ants.
We already have machines built to act autonomously, like the programs trading on the various stock exchanges. We've written computer programs that can make trades in milliseconds, faster than any human could. These automated trading systems have produced unexpected failures, demonstrating that automated behavior is far from perfect, so the systems were retrofitted with "kill switches" that stop trading when things get out of hand. Those kill switches were designed by humans, and humans upgraded the existing software to implement them. In a sense, humans were the ultimate exception handlers.
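At its core, a kill switch of this kind is just a guard condition checked before every automated action. The sketch below is a minimal, hypothetical illustration in Python -- the class name and loss threshold are assumptions for the example, not taken from any real trading system. An automated trader halts itself, and refuses further action, once cumulative losses cross a configured bound:

```python
class KillSwitchTripped(Exception):
    """Raised when the automated system exceeds its safety bounds."""


class AutoTrader:
    """Hypothetical automated trader with a loss-bound kill switch."""

    def __init__(self, max_loss):
        self.max_loss = max_loss  # loss threshold that trips the switch
        self.pnl = 0.0            # running profit and loss
        self.halted = False

    def execute(self, trade_result):
        # The kill switch: once halted, refuse to act until a human intervenes.
        if self.halted:
            raise KillSwitchTripped("trading halted; human review required")
        self.pnl += trade_result
        # Trip the switch when cumulative losses exceed the configured bound.
        if self.pnl < -self.max_loss:
            self.halted = True
            raise KillSwitchTripped("loss bound exceeded; trading halted")


trader = AutoTrader(max_loss=100.0)
trader.execute(-60.0)       # within bounds; trading continues
try:
    trader.execute(-50.0)   # cumulative loss now 110: the switch trips
except KillSwitchTripped:
    print("kill switch engaged")
```

The point of the sketch is that the safety mechanism sits outside the trading logic itself: the human-chosen bound, not the optimizing code, decides when the system must stop.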
But as anyone who's written code knows, exception handling is a form of information processing, and machines can be taught to handle exceptions. As machine intelligence becomes more general, it's entirely possible that these machines will become better at identifying system exceptions and applying automated remedies. As for having machines deploy fixes into a system, we've already figured out DevOps automation and automated system upgrades. Once again, what seemed fantastic 20 years ago feels quite usual today.
So, what is the essential concern?
Well, the concern is not DevOps automation or artificial intelligence in general. That genie is out of the bottle, and there's no way it's going back in. For now, we're all better off because of it. The benefits seen in education, finance and manufacturing alone are testament to the technology's worth.
The essential concern, with the intelligence explosion lurking on the horizon, is that we've yet to come up with a policy, implementable today, that addresses the dangers machine automation will present in the short- and long-term future.
In other words, those of us who make automation technology, including DevOps automation, for a living have no guidelines to follow that would prevent independent, autonomous automation from acting in a logically destructive manner. We're building this stuff with little concern for, or understanding of, the potential consequences.
It's like 1945, when scientists at Los Alamos were working with uranium without a shred of understanding of the dangers ahead. Did they know the dangers associated with handling radioactive material? Did they understand the long-term issues around storage or system failure? Any danger -- perceived or real -- was thought to be a problem for the future, if indeed the problem ever presented itself at all.
Benefits of machine automation still outweigh the risks
Please know I am no Luddite. I've enjoyed the fruits of technology all my life. That I've made some modest contribution to the profession of software development is probably one of the great joys of my work. And I am in no hurry to go back to the days before the internet. The benefits far outweigh the liabilities. But I also realize that never before in history have machines been this smart. The steam engine was but a tool -- a very powerful tool. For many, Siri and Alexa are part of the family. To paraphrase a popular saying: If it behaves as if it's thinking, it's probably thinking.
We've made thinking machines. But these machines have neither emotions nor morals; those are human qualities. Intelligent machines do what they've been created to do: apply logic to goals to achieve a desired outcome. As these machines acquire more data, they'll get smarter. They're already growing more intelligent at an accelerating rate.
Without a set of constraints in place that govern how we create machine behavior, we run the risk of planting the seeds of our own destruction. Many call the warnings of Joy, Harris and Tuck the stuff of science fiction. Science fiction? As those of us who do DevOps automation work day in and day out have learned, the distance between science fiction and science reality gets shorter every day.