I'm hearing that IT infrastructure is dead. And who needs it anymore, really? The future is about moving up the stack to microservices and serverless technology, as we continue to abstract, embed and automate away all the complexities of explicit infrastructure layers, such as storage arrays and physical servers.
On-premises, Capex-style IT is shrinking, while rented and remotely managed hardware and cloud transformation set new standards for modern IT. All the cool kids use end-to-end orchestration, advanced machine learning, real-time management data streams, microservices architecture and insanely scalable container environments. And now we even have serverless computing, sometimes called function as a service (FaaS).
But can we have computing without the server? And where did the server go?
Serving more with serverless technology
There is a certain hazard in my life that comes from telling non-IT people that, as an IT industry analyst, I explore and explain technology. I'm asked all the time, even by my mom, questions like, "I suppose you can explain what the cloud is?"
I tend to bravely charge in, and, after a lot of at-bats with this question, I've got the first 25 seconds down: "It's like running all your favorite applications and storing all your data on somebody else's servers that run somewhere else -- you just rent it while you use it." Then I lose them with whatever I say next, usually something about the internet and virtualization.
The same is mostly true with serverless computing. We are just moving one more level up the IT stack. Of course, there is always a server down in the stack somewhere, but you don't need to care about it anymore. With serverless technology in the stack, you pay for someone else to provide and operate the servers for you.
We submit our code (functions) to the service, which executes it for us according to whatever event triggers we set. As clients, we don't have to deal with machine instances, storage, execution management, scalability or any other lower-level infrastructure concerns.
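As a sketch, the code we hand to a FaaS platform is typically nothing more than a handler that receives an event payload and returns a result. The handler name and signature below follow the common Lambda-style convention purely for illustration; they aren't tied to any particular provider:

```python
import json

def handler(event, context=None):
    """An event-triggered function: the platform calls this when, say,
    an object lands in a storage bucket. We never touch a server."""
    name = event.get("object_key", "unknown")
    size = event.get("size_bytes", 0)
    # Do the actual work -- here, just summarize the triggering event.
    return {
        "statusCode": 200,
        "body": json.dumps({"processed": name, "kilobytes": size / 1024}),
    }

# Locally we can invoke it like any function; in production the platform
# wires up the trigger, scaling and execution on our behalf.
result = handler({"object_key": "report.csv", "size_bytes": 2048})
```

Everything below that handler, from machine instances to scaling policy, is the provider's problem.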
Look ma, no server!
We need to tackle several issues to ready serverless technology for primetime enterprise use. The first is controlling complexity. Deploying one or two event-triggered functions can be handy as a type of application integration super glue. But once you start down that road, you inevitably keep piling on functions, each of which can have relationships with and dependencies on other functions. The total number of potential interactions can grow combinatorially.
Managing this tangled web of workflows among microservices and event-triggered functions can get burdensome quickly. But there is interesting work going on to address these problems. Fission, an open source FaaS layer that deploys on top of Kubernetes, is now working on Fission Workflows, which implements a YAML-like blueprint file to declare and define how individual functions can and will operate together in larger workflows.
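The core idea behind such workflow blueprints can be sketched in a few lines: declare each function along with what it depends on, then let an engine run them in dependency order. This toy runner is illustrative only -- the task names are invented and nothing here reflects Fission Workflows' actual format:

```python
# Hypothetical workflow: each task declares the tasks it depends on,
# much as a blueprint file would.
workflow = {
    "fetch":   {"deps": [],                  "fn": lambda ctx: ctx | {"raw": "data"}},
    "clean":   {"deps": ["fetch"],           "fn": lambda ctx: ctx | {"clean": ctx["raw"].upper()}},
    "enrich":  {"deps": ["fetch"],           "fn": lambda ctx: ctx | {"meta": len(ctx["raw"])}},
    "publish": {"deps": ["clean", "enrich"], "fn": lambda ctx: ctx | {"done": True}},
}

def run(workflow):
    """Execute tasks in dependency order (a simple topological sort)."""
    done, ctx = set(), {}
    while len(done) < len(workflow):
        ready = [name for name, task in workflow.items()
                 if name not in done and all(d in done for d in task["deps"])]
        if not ready:
            raise RuntimeError("cycle in workflow definition")
        for name in ready:
            ctx = workflow[name]["fn"](ctx)
            done.add(name)
    return ctx

state = run(workflow)
```

The point is that once relationships are declared rather than buried in code, an engine (or an analytics layer on top of it) can reason about the whole workflow, not just individual functions.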
Like building blocks adding up to a larger structure, these new workflow definitions will naturally correspond to interesting application transactions. I expect we will be able to make use of these workflow definitions, probably with some clever AI/machine learning analytics, to help with complex workflow planning, deployment, scaling, monitoring and management.
Another challenge of serverless technology is troubleshooting when things don't work well or as planned. The serverless provider could give some limited monitoring visibility, but detailed management will require understanding which functional workflows were triggered, in what order and with what resulting performance.
We want to know about the critical performance path for any given application-level operation and where that path happened to run across our brave new hybrid, multi-cloud environment. Was there a resource constraint, an architectural bottleneck, a software design limitation or an unexpected use? With a resource issue, we might need to peer into the supporting infrastructure to look at configurations and capacities, and perhaps examine what else might have competed for those resources.
Consider the huge scale of some of today's aggressive microservice-based containerized apps. It's not feasible to conduct practical performance management troubleshooting or tuning with only aggregate statistics.
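A quick illustration of why aggregate statistics fall short: an average latency can look perfectly healthy while a small fraction of transactions -- the ones users actually complain about -- are badly slow. The numbers here are invented for the example:

```python
# 99 fast transactions and one pathological one, in milliseconds.
latencies = [10] * 99 + [2000]

mean = sum(latencies) / len(latencies)               # looks fine
p99 = sorted(latencies)[int(0.99 * len(latencies))]  # tells the real story
```

Only per-transaction data, not the aggregate, reveals which execution path went wrong.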
One way to track and analyze actual execution paths is via code-level instrumentation. The OpenTracing project at opentracing.io offers a vendor-neutral standard for transaction tracing across the potentially large and dynamic microservices realm. Of course, with DIY approaches, this is not an easy problem to solve at the huge scale of production microservices execution. Performance vendors are emerging with production-quality managed services that can trace 100% of microservices transactions at any scale of execution.
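The flavor of code-level instrumentation can be approximated in a few lines. This toy tracer just records named spans and their completion order -- it is not the OpenTracing API, only a sketch of the idea:

```python
import functools
import time

TRACE = []  # collected spans: (operation_name, duration_in_seconds)

def traced(op_name):
    """Record a timed span around each call, in the spirit of a tracer."""
    def wrap(fn):
        @functools.wraps(fn)
        def inner(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                TRACE.append((op_name, time.perf_counter() - start))
        return inner
    return wrap

@traced("lookup_user")
def lookup_user(uid):
    return {"id": uid}

@traced("render_page")
def render_page(uid):
    return f"hello {lookup_user(uid)['id']}"

page = render_page(42)
# TRACE now holds the execution path in completion order: lookup_user
# finished inside render_page -- exactly the ordering and timing
# information troubleshooting needs.
```

A real tracer would also propagate a trace context across process and service boundaries, which is the hard part the standards and vendors address.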
Expect to see much of this microservices performance-management functionality extended and applied to serverless technology as well. Fission, for example, fires off each triggered function in its own container as an internal microservice. Still, even with tracing information, you have to know what you are looking at. And tracing is only one step in figuring out where a given transaction actually executed.
In my mind, exploring all the data at all layers of an IT stack might be akin to traveling through the billions and trillions of stars in a virtual reality space simulation. Performance-management operations could get quite immersive. In any case, it's good to remember that no matter which abstraction layer you are working at today, there's still a server down there somewhere.