While most applications should be portable to event-driven architectures, certain process demands as well as limitations from event-driven infrastructure providers make the idea of 100% serverless computing more pie in the sky than app in the cloud.
By design, event-driven computing, also called runtime as a service and serverless computing, responds to events -- for example, a user making a request to view a page, a door opening or a scheduled occurrence of "every morning at 5." Event-driven applications have better resource utilization than traditional applications; there are no underutilized assets on the books since resources only run when needed.
Existing applications require essentially a complete rewrite to become event-driven applications, including a rethinking of some methods. Most applications, however, should be portable to a runtime as a service architecture. This includes any app that relies on events or triggers, such as web-based applications, mobile applications and even back-end tasks triggered by a user or by an automated action. Developers must reshape the application's architecture from a "wait for this request" model to a "when this happens, do this" model.
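In outline, that reshaping can be as small as the following Python sketch. The (event, context) handler signature follows the AWS Lambda convention; the event fields and the rendered output are illustrative assumptions, not a real schema.

```python
# Traditional model: the app runs continuously, waiting for requests.
# Event-driven model: a stateless function runs only when an event arrives.

def handle_page_view(event, context):
    """Invoked by the platform when a "user requested a page" event fires.

    The (event, context) signature mirrors the AWS Lambda convention;
    the "path" event field below is an assumption for illustration.
    """
    page = event.get("path", "/")  # which page was requested
    body = "<h1>Rendered on demand: %s</h1>" % page
    # Return quickly -- the function, and its billing, end here.
    return {"statusCode": 200, "body": body}
```

The platform, not the application, decides when this code runs; between events, nothing is deployed, running or billed.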
For applications that require long-running processes, such as streaming audio or video, this event-activated infrastructure consumption model won't work.
Complex services that require a large amount of compute or RAM also won't work as event-driven applications today, due to the cloud providers' limitations.
There are a few limits on execution when using pay-as-you-go cloud runtime services:
• Number of concurrent executions;
• CPU and/or RAM available;
• Total execution time of one invocation;
• Programming language; and
• Execution environment, including available third-party libraries.
These limits are in place to prevent abuse or unexpected billing issues.
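A function can at least defend against the execution-time limit at runtime. AWS Lambda's context object exposes a get_remaining_time_in_millis() method; the sketch below uses that real method, but the batch-of-items workload and the 10-second safety margin are assumptions for illustration.

```python
def handle_batch(event, context):
    """Process items until the invocation's time budget runs low.

    get_remaining_time_in_millis() is a real AWS Lambda context method;
    the 10-second margin and the "items" event field are assumptions.
    """
    processed = []
    items = event.get("items", [])
    for item in items:
        if context.get_remaining_time_in_millis() < 10000:
            break  # stop cleanly before the platform kills the invocation
        processed.append(item.upper())  # stand-in for real per-item work
    # Report what was left undone so a caller can resume later.
    return {"processed": processed, "remaining": items[len(processed):]}
```

A function that ignores its time budget is simply terminated mid-run; checking the budget lets it hand unfinished work back instead.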
Due to the nature of the event-driven architecture, functions must have a very short lifespan. Amazon Web Services (AWS) Lambda allows up to five minutes of execution time, while the alpha version of Google Cloud Functions and the preview version of Microsoft Azure Functions do not impose this limit. Functions are designed to start quickly, run some code and return quickly. Consider a page that loads upon a user request: Users aren't going to wait around for a minute while a function initializes before the page loads. Everything has to be nearly instantaneous.
AWS Lambda functions also have limits on the amount of RAM and CPU available during an execution. Memory can be provisioned up to a certain point, but many applications still need to run longer, or consume more compute power, than a Lambda function can accommodate. Google Cloud Functions and Microsoft Azure Functions have similar upper thresholds, although they are larger, allowing for slightly more complex tasks.
Event-driven resource limits exist at this level today; they'll very likely be raised in the future. Amazon has already increased the total allowed execution time of a single Lambda function, and Google Cloud Functions removed that restriction entirely. The obstacle, however, is less a matter of raising resource limits than a fundamental difference in how event-driven architectures are designed. A developer can design a function that takes five minutes to execute and then starts another Lambda function after it completes, essentially creating a long-running Lambda task. But that's not really the purpose event-driven applications serve.
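That chaining pattern can be sketched as follows. Here invoke_next is a hypothetical hook standing in for an asynchronous re-invocation of the same function (on AWS, this would typically be a boto3 Lambda invoke call with InvocationType='Event'); the step counts are illustrative assumptions.

```python
def invoke_next(payload):
    """Hypothetical hook: in production this would asynchronously re-invoke
    the same function (e.g. via boto3's Lambda invoke API with
    InvocationType='Event'). Here it just records the hand-off so the
    sketch stays self-contained."""
    invoke_next.calls.append(payload)

invoke_next.calls = []

def long_task(event, context):
    """Do one invocation-sized slice of work, then hand off the rest."""
    step = event.get("step", 0)
    total_steps = event.get("total_steps", 3)  # illustrative
    # ... do up to one time limit's worth of work for this step here ...
    if step + 1 < total_steps:
        invoke_next({"step": step + 1, "total_steps": total_steps})
        return {"status": "continued", "next_step": step + 1}
    return {"status": "done"}
```

Each invocation stays within the platform's limits, but the chain as a whole behaves like one long-running task, which is exactly the workaround the architecture discourages.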
Container-based functions, however, offer the most promise for this field. When runtime as a service providers start to allow functions packaged entirely as virtualized containers, such as Docker containers, developers can completely control their environments and migrate virtually any application to this new wave of hosting.