
IDC analyst: Serverless tech will change the future of IT ops

IDC's Mary Johnston Turner discusses how the development and increasing maturity of serverless technology and AI will upend how IT operates.

Serverless technology frees IT admins and developers to simply write a function and hand it to the serverless platform of their choice. The cloud service maps that function to an appropriate API endpoint and scales it on demand.
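
For a concrete sense of how little the developer writes, here is a minimal sketch in the style of an AWS Lambda Python handler; the handler name and the API Gateway-style response shape are illustrative assumptions about an HTTP trigger, not a prescription.

```python
import json

def lambda_handler(event, context):
    """Entry point the serverless platform invokes on each request.

    The platform -- not the developer -- provisions the runtime, maps the
    function to an HTTP endpoint and scales instances up or down on demand.
    """
    name = (event.get("queryStringParameters") or {}).get("name", "world")

    # Return an API Gateway-style HTTP response; the shape assumes an HTTP
    # trigger and would differ for queue or event-stream triggers.
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```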

But serverless technologies also carry risks. Ceding responsibility for the underlying infrastructure is a key component of the model, and while that is convenient for many organizations, it also raises security and compliance concerns, particularly for larger enterprises or those in heavily regulated industries, such as healthcare and finance.

SearchITOperations spoke with Mary Johnston Turner, research vice president of future of digital infrastructure at IDC, to discuss the developing effect of serverless technology on the IT ops status quo.

Editor's note: Answers have been edited for clarity and brevity.

In a recent IDC presentation, you described how serverless technology is changing the IT ops landscape. Let's start there.

Mary Johnston Turner: The complexity of infrastructure operations is rising much more quickly than traditional operational workflows, processes and staffing levels can support.

There's a lot of innovation right now in terms of automation, observability powered by AI and ML [machine learning], analytics -- and about trying to use those analytics to drive the automation. There's also, in parallel, a set of activities that aim to simplify the way developers and admins … work with the infrastructure services. And that's really where serverless comes in.

The internal DevOps and IT ops teams are offloading a lot of the basic, repetitive configuration, provisioning and scaling activities for that infrastructure, and giving it to the service provider [to manage].

So how does serverless fit into the evolution of IT operations?

Johnston Turner: I see this as sort of the ongoing lifecycle of infrastructure operations.

Serverless has a specific automation use case: It works very well for environments that are tightly defined, where you can distill the operational activity -- configuring, provisioning and scaling -- down to a very fine-grained set of instructions. Then you can benefit from applying those instructions very broadly.

And a lot of service providers are also tying their billing mechanisms to the fine-grained units of compute consumption tracked by these serverless activities. … You start up the resource, you may only run it for a fraction of a second. … You're only going to get charged for that period of time.
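
To make that billing model concrete, here is a back-of-the-envelope calculation in Python; the per-GB-second rate and the invocation profile are purely illustrative assumptions, not any provider's actual pricing.

```python
# Illustrative fine-grained billing math; the rate below is an assumed
# placeholder, not an actual provider price.
RATE_PER_GB_SECOND = 0.0000167  # assumed $ per GB-second

memory_gb = 0.128        # function configured with 128 MB of memory
duration_seconds = 0.35  # each invocation runs for a fraction of a second
invocations = 1_000_000  # a month of traffic

compute_cost = memory_gb * duration_seconds * invocations * RATE_PER_GB_SECOND
print(f"Compute cost for the month: ${compute_cost:.2f}")
# Idle time costs nothing: with zero invocations, the compute bill is $0.
```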

Many organizations are interested in how these serverless-style offerings can continue to help them grapple with the operational complexity -- and particularly the scale of operational complexity as they move to more cloud-native environments.

[Chart: The expected roles of serverless in multi-cloud management]

How do organizations take advantage of serverless technology?

Johnston Turner: Not every workload is going to be suitable for a serverless environment. But even if these organizations can offload 10%, 15%, 20% of their really basic infrastructure … that's going to give them resources they can deploy elsewhere. … [Eventually] serverless technologies are going to be mature enough that enterprises [must look] at them as one of many options for how they optimize where they apply their internal people skills, versus where they partner with a service provider.

What are some things about serverless that make people -- or IT organizations -- nervous?

Johnston Turner: If you're talking to a very traditional sort of IT ops organization that ... isn't comfortable with rapid, large-scale automation in general, yeah, they're going to be very suspicious.

And I think a lot of [the hesitation] has to do with being in a position to assess the needs of the workloads, and where there are benefits and where there aren't.

The risk is always that you are giving out some amount of control. And particularly, I think, heavily regulated organizations may have some concerns about compliance, change control and audit trail.

Any time you move into really high speed, automated operational environments, there's concern that if something goes wrong, it will be hard to detect and recover from.

Those are legitimate concerns that often come back to just how mature the organization is around their operational processes. ... Powerful services like this have to be treated with some respect and maturity.

Do you think that, as serverless continues to mature, it will ultimately apply to a broader range of workload types?

Johnston Turner: [Serverless'] role will probably continue to increase, but it will still be pretty specific.

Functions as a service, I think, will continue to grow -- think of things like [AWS] Lambda, for example. And there are a lot of functions and services in business process automation. As more applications become cloud-native, or adopt other modern architectures, a greater percentage of workloads will be able to take advantage of functions and serverless.
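
As an illustration of the business process automation case, here is a short sketch of one service handing work to such a function asynchronously with boto3; the function name and payload are hypothetical.

```python
import json
import boto3

# Hypothetical step in a business process: hand an order off to a
# serverless function asynchronously and let the platform handle scaling.
lambda_client = boto3.client("lambda")

response = lambda_client.invoke(
    FunctionName="process-order",   # hypothetical function name
    InvocationType="Event",         # asynchronous, fire-and-forget
    Payload=json.dumps({"order_id": "12345", "action": "fulfill"}),
)

# A 202 status code indicates the event was accepted for async processing.
print(response["StatusCode"])
```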

But it's still always going to be dictated by the needs of the workloads. And I don't ever see a day where … all infrastructure ops is serverless -- there are too many specialized use cases.

In many ways, serverless is just another layer of automated control that, eventually, customers may not even really see.

I think what will ultimately happen is [that] we won't even worry about "serverless." [Instead,] we'll … go to our cloud management console, and we'll say, "Here's the kind of resources we want. Here's the performance and the security levels. Here's who has access to it. Go." And the services underneath will take care of determining whether serverless or a function -- or some other automation or orchestration tool, like Kubernetes -- is most appropriate [for the task].

How can that lack of control work for an organization?

Johnston Turner: That's where the analytics and AI and ML pieces come in. Over time, [organizations will be] able to use advanced AI and ML to monitor and predict workload and application performance requirements, and [to] detect anomalies and proactively scale. If that level works, then a lot of this underpinning becomes just part of the plumbing.

Will the growth of serverless, AI, machine learning, etc. widen the skills gap? Or will it ease internal management and help close that gap a bit?

Johnston Turner: Governance, policy, security, compliance and change control: That's where the skills emphasis is going to have to shift, as we're able to automate more of the blocking and tackling.

Operations teams need to be more software-aware; you need [an] AI and ML analytics tool to keep up with this speed of change and complexity. These tools are going to help close the skills gap.

But it is going to also require, I think, some investment in the people.

Organizationally, what we will find over time is that we need fewer internal resources to deal with the lower-level control systems, because they're going to be remotely managed -- whether through remotely managed on-prem services, like [HPE] GreenLake, or whether they're managed as a cloud service or [as] an extended cloud service, the way Azure Arc extends [onto] the on-prem [architecture].

There's going to be a lot of different choices and scenarios for where IT ops teams can … invest in their people and where they want to shift some of the operational burden: to a third party, to someone managing a dedicated resource or to someone managing a public cloud resource.
