The arrival and evolution of event-driven computing change the administrator's role in application scalability. So how does it compare to the well-understood process of scaling virtual instances?
Event-driven computing minimizes traditional IT worries about capacity, deployment, patching, scaling, resilience, metrics and logging. Instead, the IT team composes a cloud provider's array of discrete services into the appropriate architecture for powerful enterprise applications, with little if any regard for the underlying computing resources.
Event-driven never truly means serverless
The terms event-driven and serverless are often used interchangeably in the context of IT infrastructure, but serverless computing is something of a misnomer.
In event-driven computing, application code is placed into the cloud service provider's environment, but that code is only loaded and executed when an event triggers it. In actual practice, the application code is indeed loaded into, and executed within, a server or server cluster, depending on scalability needs. Once the task completes, the code is unloaded -- freeing the server resources for other users' event responses.
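The load-on-trigger model can be sketched as a minimal handler function in the style of AWS Lambda. The event shape and field names here are illustrative assumptions, not a real trigger schema:

```python
import json

def handler(event, context=None):
    """Entry point the platform invokes when a trigger event arrives.

    The code is dormant, consuming no compute, until the provider
    loads it onto a server, passes in the event and runs it.
    The 'event' shape below is an illustrative assumption -- real
    triggers (storage uploads, HTTP requests, queue messages) each
    define their own schema.
    """
    record = event.get("detail", {})
    result = {"processed": record.get("object_key"), "status": "ok"}
    # After returning, the provider may unload the code, freeing the
    # server resources for other tenants' event responses.
    return json.dumps(result)
```

Invoking `handler({"detail": {"object_key": "photo.jpg"}})` directly mimics what the platform does on each trigger; between invocations, nothing runs.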
Cloud providers' event-driven computing services essentially abstract away the required servers, allowing application administrators to focus on the service rather than the underlying instances. Users of event-driven computing services don't need to think about provisioning servers or managing application scalability in the ways familiar to traditional cloud users.
Who's serving up event-driven compute?
AWS leads in event-driven computing with its Lambda service, though other services are quickly appearing, including IBM OpenWhisk and Google Cloud Functions. In this pay-by-use model, pricing is based on events. For example, AWS charges $0.20 per million events.
Most IT shops think of cloud computing as infrastructure as a service (IaaS). For example, an Amazon Web Services (AWS) user runs a common application in an AWS server, such as a general-purpose t2.large instance with two virtual CPUs, 8 gigabytes of RAM and access to Elastic Block Store (EBS) volumes. To ensure smooth application operation, the administrator must understand the instance's characteristics and provision resources that will accommodate the application. In addition, the business pays for that AWS instance every month, regardless of how much work the application actually does.
By comparison, the same application's code segments can be loaded into an event-driven computing service that triggers the code based on selected events. The code isn't operational and doesn't consume any computing resources while it's idle. When a trigger event takes place, the code loads and executes on back-end IT resources that the cloud provider provisions and configures behind the scenes. This is where the serverless name originates: The administrator doesn't interact with a server instance. The business only pays for the computing that is actually performed each time an event takes place.
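The billing difference is easy to put into numbers. A rough sketch, using the $0.20-per-million-events rate cited above; the hourly instance rate and event volume are assumptions for the sake of the arithmetic, and real event-driven bills add a duration/memory charge that is ignored here:

```python
# Illustrative cost comparison: always-on instance vs. per-event billing.
HOURS_PER_MONTH = 730
INSTANCE_RATE_PER_HOUR = 0.093   # assumed on-demand rate for a mid-size instance
EVENT_RATE_PER_MILLION = 0.20    # per-event rate cited in the article

def monthly_instance_cost():
    # Billed for every hour, regardless of how much work the app does.
    return HOURS_PER_MONTH * INSTANCE_RATE_PER_HOUR

def monthly_event_cost(events):
    # Billed only for the computing actually performed per event.
    return (events / 1_000_000) * EVENT_RATE_PER_MILLION

print(f"always-on instance: ${monthly_instance_cost():.2f}")      # $67.89
print(f"5M events/month:    ${monthly_event_cost(5_000_000):.2f}") # $1.00
```

At low or bursty event volumes the per-event model is far cheaper; at sustained high volume, the duration charges omitted here can tilt the comparison back toward dedicated instances.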
Two ways to scale an application
Application scalability is a core element of public cloud computing, enabling operators to add or remove compute instances as processing demands change over time. Cloud's elasticity helps preserve adequate application performance while minimizing costs, but it's difficult to handle manually. Cloud providers offer auto scaling as a service to automate the process.
For example, AWS Auto Scaling uses alarms and policies to configure scaling behavior. Alarms monitor chosen metrics, such as the CPU utilization of the compute instances in the Auto Scaling group, and compare them against thresholds. If the metric exceeds a maximum threshold for some period, new instances launch to support demand; it also works in reverse. The thresholds and responses form the policies that govern the application's scalability.
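The alarm-and-policy loop can be approximated in plain Python. This is a toy evaluator, not the AWS algorithm; the thresholds, evaluation periods and capacity bounds are illustrative assumptions:

```python
def evaluate_policy(cpu_samples, capacity, high=70.0, low=30.0,
                    periods=2, min_cap=1, max_cap=10):
    """Toy auto scaling evaluator (illustrative, not the AWS algorithm).

    cpu_samples: per-period average CPU utilization (%), oldest first.
    If the metric breaches the high threshold for 'periods' consecutive
    periods, add an instance; if it stays under the low threshold for
    the same stretch, remove one. Otherwise, hold capacity steady.
    """
    recent = cpu_samples[-periods:]
    if len(recent) == periods and all(s > high for s in recent):
        return min(capacity + 1, max_cap)    # scale out to support demand
    if len(recent) == periods and all(s < low for s in recent):
        return max(capacity - 1, min_cap)    # scale in to cut cost
    return capacity                          # within policy bounds

# Two consecutive periods above 70% CPU trigger a scale-out:
print(evaluate_policy([65.0, 82.0, 91.0], capacity=2))  # 3
```

In the real service, the alarm (metric plus threshold plus evaluation periods) and the scaling action are configured separately and linked, but the decision logic follows this same shape.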
Services like AWS Auto Scaling typically work independently of event-driven computing services; event-driven behaviors scale in the background, outside the application's administration. Since users don't directly control the computing instances consumed by event-driven code as they do with IaaS, there is no need to worry about scaling event-driven processing -- that's the cloud provider's problem. However, organizations that deploy and manage applications on discrete compute instances, such as Amazon Elastic Compute Cloud instances, usually invoke some form of scaling to maintain application availability and performance.
Still, cloud scaling and event-driven computing are not mutually exclusive -- both services can potentially coexist. For example, an application running in a VM instance can use application programming interfaces or monitoring alerts to trigger events used to handle other, non-scaling-related tasks.