Modern Stack

Insight on building and supporting cloud apps


Serverless technology obfuscates workflows, performance data

Serverless and microservices reshape the application stack into something that looks like a swath of stars in the sky. How do you find a slow, misconfigured component in this interconnected galaxy?

I'm hearing that IT infrastructure is dead. And who needs it anymore, really? The future is about moving up the stack to microservices and serverless technology, as we continue to abstract, embed and automate away the complexities of explicit infrastructure layers, such as storage arrays and physical servers.

On-premises, Capex-style IT is shrinking, while rented and remotely managed hardware and cloud transformation set new standards for modern IT. All the cool kids use end-to-end orchestration, advanced machine learning, real-time management data streams, microservices architecture and insanely scalable container environments. And now we even have serverless computing, sometimes called function as a service (FaaS).

But can we have computing without the server? And where did the server go?

Serving more with serverless technology

There is a certain hazard in my life that comes from telling non-IT people that, as an IT industry analyst, I explore and explain technology. I'm asked all the time, even by my mom, questions like, "I suppose you can explain what the cloud is?"

I tend to bravely charge in, and, after a lot of at-bats with this question, I've got the 25-second answer down: "It's like running all your favorite applications and storing all your data on somebody else's servers that run somewhere else -- you just rent it while you use it." Then I lose them with whatever I say next, usually something about the internet and virtualization.

The same is mostly true with serverless computing. We are just moving one more level up the IT stack. Of course, there is always a server down in the stack somewhere, but you don't need to care about it anymore. With serverless technology in the stack, you pay for someone else to provide and operate the servers for you.

We submit our code (functions) to the service, which executes it for us according to whatever event triggers we set. As clients, we don't have to deal with machine instances, storage, execution management, scalability or any other lower-level infrastructure concerns.

The event-driven part is a bit like how stored procedures acted in old databases, or the way modern webpages call in JavaScript functions, hooked to and fired off in response to various clicks and other web events. In fact, AWS Lambda, a popular serverless computing service, executes client JavaScript functions, likely running Node.js in the background in some vastly scalable way.
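To make the idea concrete, here is a minimal sketch of an event-triggered function in the AWS Lambda style, written in TypeScript for a Node.js runtime. The event shape and the storage-bucket trigger are illustrative assumptions, not a prescribed pattern.

```typescript
// Minimal sketch of an event-triggered serverless function (AWS Lambda style,
// Node.js runtime). The S3-like event shape below is an illustrative assumption.
interface StorageRecordSketch {
  s3: { bucket: { name: string }; object: { key: string; size: number } };
}

interface StorageEventSketch {
  Records: StorageRecordSketch[];
}

// The platform invokes this handler whenever the configured trigger fires;
// we never provision or manage the server that runs it.
export const handler = async (
  event: StorageEventSketch
): Promise<{ processed: number }> => {
  for (const record of event.Records) {
    const { name } = record.s3.bucket;
    const { key, size } = record.s3.object;
    console.log(`New object ${key} (${size} bytes) landed in bucket ${name}`);
    // Real work -- resizing an image, updating an index, triggering another
    // function -- would go here.
  }
  return { processed: event.Records.length };
};
```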

Look ma, no server!

We need to tackle several issues to ready serverless technology for primetime enterprise use. The first is controlling complexity. Deploying one or two event-triggered functions can be handy as a type of application integration super glue. Still, once you start down that road, you inevitably keep piling on functions that can each have relationships and dependencies on other functions. The total number of potential interactions can grow exponentially.

Managing this tangled web of workflows among microservices and event-triggered functions can get burdensome quickly. But there is interesting work going on to address these problems. Fission, an open source FaaS layer that deploys on top of Kubernetes, is now working on Fission Workflows, which implements a YAML-like blueprint file to declare and define how individual functions can and will operate together in larger workflows.
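Rather than guess at Fission Workflows' actual YAML fields, here is a rough TypeScript sketch of the underlying idea: individual functions declared as named tasks, plus explicit dependencies that wire them into a larger workflow graph. All names and fields are illustrative assumptions, not Fission's real schema.

```typescript
// Illustrative sketch only -- not Fission Workflows' actual blueprint format.
// A workflow boils down to named function tasks plus the dependency edges
// that compose them into a larger graph.
interface TaskSketch {
  fn: string;            // name of the deployed function to invoke
  dependsOn?: string[];  // tasks that must complete first
}

interface WorkflowSketch {
  entry: string;
  tasks: Record<string, TaskSketch>;
}

const orderWorkflow: WorkflowSketch = {
  entry: "validateOrder",
  tasks: {
    validateOrder: { fn: "validate-order" },
    chargeCard:    { fn: "charge-card", dependsOn: ["validateOrder"] },
    reserveStock:  { fn: "reserve-stock", dependsOn: ["validateOrder"] },
    sendReceipt:   { fn: "send-receipt", dependsOn: ["chargeCard", "reserveStock"] },
  },
};

// A workflow engine would order these tasks by their dependencies and invoke
// each function only when its upstream tasks have finished.
```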

Like building blocks adding up to a larger structure, these new workflow definitions will naturally correspond to interesting application transactions. I expect we will be able to make use of these workflow definitions, probably with some clever AI/machine learning analytics, to help with complex workflow planning, deployment, scaling, monitoring and management.

Another challenge with serverless technology is troubleshooting when things don't work well or as planned. The serverless provider could give some limited monitoring visibility, but detailed management will require understanding which functional workflows were triggered, in what order and with what resulting performance.

We want to know about the critical performance path for any given application-level operation and where that path happened to run across our brave new hybrid, multi-cloud environment. Was there a resource constraint, an architectural bottleneck, a software design limitation or an unexpected use? With a resource issue, we might need to peer into the supporting infrastructure to look at configurations and capacities, and perhaps examine what else might have competed for those resources.

Consider the huge scale of some of today's aggressive microservice-based containerized apps. It's not feasible to conduct practical performance management, troubleshooting or tuning with only aggregate statistics.

One way to track and analyze actual execution paths is via code-level instrumentation. The OpenTracing project at opentracing.io offers a standard for transaction tracing in the potentially large and dynamic microservices realm. Of course, this is not an easy problem to solve with DIY approaches at the huge scale of production microservices execution. Performance vendors are emerging that offer production-quality managed services capable of tracing 100% of microservices transactions at any scale of execution.
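As a rough sketch of what code-level instrumentation looks like, the example below wraps one operation in OpenTracing spans using the opentracing JavaScript API (shown here in TypeScript). By default the global tracer is a no-op; in a real deployment a concrete tracer implementation would be registered behind it. The function names and tags are illustrative.

```typescript
// Sketch of code-level transaction tracing with the OpenTracing JavaScript API.
// The global tracer is a no-op until a concrete tracer is registered via
// initGlobalTracer(); the operation names here are made up for illustration.
import { globalTracer, Tags } from "opentracing";

async function lookupPrice(sku: string): Promise<number> {
  // Stand-in for a downstream microservice or serverless function call.
  return 42;
}

export async function handleCheckout(sku: string): Promise<number> {
  const tracer = globalTracer();
  const span = tracer.startSpan("handle-checkout");
  span.setTag(Tags.SPAN_KIND, Tags.SPAN_KIND_RPC_SERVER);
  span.setTag("sku", sku);

  try {
    // A child span ties the downstream call into the same trace, so the full
    // execution path can be reconstructed later.
    const child = tracer.startSpan("lookup-price", { childOf: span });
    const price = await lookupPrice(sku);
    child.finish();
    return price;
  } catch (err) {
    span.setTag(Tags.ERROR, true);
    span.log({ event: "error", message: String(err) });
    throw err;
  } finally {
    span.finish();
  }
}
```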

Expect to see much of this microservices performance-management functionality extended and applied to serverless technology as well. Fission, for example, fires off each triggered function in its own container as an internal microservices execution. Still, even with tracing information, you have to know what you are looking at. Plus, it's still only one step in figuring out where a given transaction actually executed.

In my mind, exploring all the data at all layers of an IT stack might be akin to traveling through the billions and trillions of stars in a virtual reality space simulation. Performance-management operations could get quite immersive. In any case, it's good to remember that no matter which abstraction layer you are working at today, there's still a server down there somewhere.
