Serverless computing is a next-generation technology that enables agility, elasticity and cost-effectiveness when applied to appropriate use cases. It is redefining the way enterprises build, consume and integrate cloud-native applications.

However, the term “serverless computing” is a misnomer: The technology eliminates the need for infrastructure provisioning and management, but certainly does not eliminate the need for servers. It is not surprising, then, that market confusion still exists on what serverless computing is and the benefits of adopting it within an enterprise. IT leaders, along with infrastructure and operations (I&O) professionals building cloud computing strategies, need a comprehensive understanding of the technology to dispel common myths and consider practical use cases. 

Here are some of the most frequently asked questions about serverless computing facing IT and I&O leaders, as well as key takeaways for those considering adopting this technology. 

What is serverless computing?
Serverless computing is a new way of building or running applications and services without having to manage the infrastructure itself. Instead, code execution is fully managed by a cloud service provider. This means that developers don’t have to bother with provisioning and maintaining system and application infrastructure when deploying code. Normally, a developer would have to define a whole host of items — like database and storage capacity — prior to deployment, which leads to longer provisioning windows and more operational overhead.  
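For illustration, here is a minimal sketch of what such a function might look like in Python on an AWS Lambda-style platform; the handler signature and event shape follow common fPaaS conventions and are assumptions for this example, not a specification.

```python
# Minimal sketch of a function as it might be written for an AWS Lambda-style
# fPaaS. The provider supplies the servers, runtime and scaling; the developer
# supplies only this handler. The event shape is an assumption for illustration.
import json

def handler(event, context):
    # 'event' carries the trigger payload (e.g., an HTTP request body);
    # 'context' exposes runtime metadata supplied by the platform.
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }
```

Nothing about capacity, databases or servers appears in the code; those concerns sit with the platform.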

The most prominent manifestation of serverless computing is function platform as a service, or fPaaS. Gartner predicts that half of global enterprises will have deployed fPaaS by 2025, up from only 20% today.

What is the value of serverless computing?
Serverless computing enables operational simplicity by removing the need for infrastructure setup, configuration, provisioning and management. Serverless architectures carry less operational overhead than architectures in which developers target virtual machines (VMs) or containers directly.

Infrastructure is automated and elastic in serverless computing, which makes it particularly appealing for unpredictable workloads, not to mention more cost-efficient. Most importantly, serverless architectures enable developers to focus on what they should be doing — writing code and optimizing application design — making way for business agility and digital experimentation.
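To see why pay-per-execution pricing favors spiky, unpredictable workloads, consider a rough back-of-envelope comparison; the prices and workload figures below are illustrative assumptions, not real provider rates.

```python
# Back-of-envelope comparison of an always-on VM versus per-invocation
# serverless pricing for a bursty workload. All rates and workload numbers
# are illustrative assumptions, not any provider's actual pricing.

VM_HOURLY_RATE = 0.05                  # assumed $/hour for a small always-on VM
FPAAS_PRICE_PER_GB_SECOND = 0.0000167  # assumed $/GB-second of execution
FPAAS_PRICE_PER_REQUEST = 0.0000002    # assumed $/invocation

requests_per_month = 300_000           # infrequent, bursty traffic
avg_duration_s = 0.3                   # short-lived function
memory_gb = 0.128                      # small memory allocation

vm_cost = VM_HOURLY_RATE * 24 * 30
fpaas_cost = (requests_per_month * avg_duration_s * memory_gb * FPAAS_PRICE_PER_GB_SECOND
              + requests_per_month * FPAAS_PRICE_PER_REQUEST)

print(f"Always-on VM: ${vm_cost:.2f}/month")
print(f"fPaaS:        ${fpaas_cost:.2f}/month (billed only while code runs)")
```

Under these assumptions the always-on VM costs roughly $36 a month whether or not traffic arrives, while the function costs cents; the picture changes for sustained, high-volume workloads, which is why the drawbacks below still matter.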

The benefits of serverless computing must be balanced against its drawbacks, including vendor lock-in, inevitable skills gaps and architectural limitations.

What are the key capabilities of serverless computing?
At its foundation, serverless computing eliminates the need for end users to manually manage infrastructure. In turn, it provides these key capabilities:

  • Runs code residing as functions without the need for the user to explicitly provision or manage infrastructure such as servers, VMs and containers 
  • Automatically provisions and scales the runtime environment, including all the necessary underlying resources (specifically the compute, storage, networking and language execution environment) required to execute many concurrent function instances
  • Offers additional capabilities for test and development environments, as well as for service assurance, such as monitoring, logging, tracing and debugging
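The sketch below hints at how these capabilities surface to a developer: the only artifact is a handler, the platform runs as many concurrent instances as the event volume requires, and ordinary log output is assumed (as on most fPaaS offerings) to be picked up by the provider's monitoring and tracing tools.

```python
# Sketch of the developer's view of the capabilities above. Scaling and
# provisioning happen outside this file; plain log output is assumed to be
# collected by the provider's monitoring/tracing tooling.
import logging
import time

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

def handler(event, context):
    start = time.time()
    records = event.get("records", [])
    logger.info("invocation started with %d record(s)", len(records))

    # ... business logic would go here ...
    result = {"processed": len(records)}

    logger.info("invocation finished in %.3f s", time.time() - start)
    return result
```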

How does serverless computing differ from other virtualization technologies?
VMs, containers and serverless functions have a few fundamental differences. Each approach is defined by the architectural layer that it virtualizes and how compute components are scaled in those respective environments.

Hypervisors virtualize the hardware and scale via VMs, while containers virtualize the operating system (OS). Serverless fPaaS virtualizes the runtime and scales via functions, which is why serverless solutions are best suited for workloads with specific characteristics: they run infrequently, are tied to external events, have highly variable or unknown scaling requirements, consist of small and short-lived discrete functions, can operate in a stateless manner across invocations, and connect other services together.
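The following sketch shows the kind of workload those characteristics describe: a small, stateless function that wakes up on an external event and simply connects two services. The event structure and the queue helper are placeholders invented for illustration, not any provider's API.

```python
# Hedged sketch of an event-driven "glue" function: triggered by an
# object-storage upload notification, it hands work off to a downstream
# service and then disappears. Event fields and the queue helper are
# hypothetical placeholders.
import json

def on_object_created(event, context):
    # Runs only when an upload happens; no capacity is reserved in between,
    # and no state is kept from one invocation to the next.
    for record in event.get("records", []):
        message = json.dumps({
            "bucket": record.get("bucket"),
            "key": record.get("key"),
            "action": "thumbnail",
        })
        publish_to_queue("image-processing-jobs", message)

def publish_to_queue(queue_name, message):
    # Placeholder: a real deployment would call the provider's messaging SDK.
    # Kept as a stub so the sketch stays self-contained and runnable.
    print(f"[{queue_name}] {message}")
```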

Each of these virtualization technologies will be relevant for CIOs in the foreseeable future. Serverless, specifically, is commonly applied in use cases pertaining to cloud operations, microservices implementations and IoT platforms.

How can my organization take advantage of serverless fPaaS?
Being “ready” for serverless fPaaS means considering three aspects of the organization.

The first is application development: Because operations teams have less direct visibility into the underlying infrastructure with serverless fPaaS, bring developers and operators closer together, even onto the same team, so they can share responsibility for the development and maintenance of a software product throughout its entire life cycle.

The second is security and risk: The biggest change that security and risk management leaders will have to adjust to is that they no longer own or control the OS, hypervisor, container and application runtime. Instead, they can focus on areas they can control, such as integrity of code and access control.
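A hedged sketch of what that shift in focus can look like in practice: code integrity is checked before deployment and permissions are scoped narrowly. The policy structure and helper below are generic illustrations, not a specific provider's format.

```python
# Illustrative sketch of the two controls that remain in the team's hands:
# verifying the integrity of the code artifact before deployment, and
# declaring a narrowly scoped permission set for the function. The policy
# layout and the deploy flow are assumptions, not any provider's API.
import hashlib

def artifact_is_trusted(path: str, expected_sha256: str) -> bool:
    # Integrity of code: refuse to deploy a package whose hash does not
    # match the one produced by the build pipeline.
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest == expected_sha256

# Access control: grant the function only the permissions it needs
# (least privilege), expressed here as a provider-agnostic dict.
FUNCTION_PERMISSIONS = {
    "allow": [
        {"action": "storage:GetObject", "resource": "uploads-bucket/*"},
        {"action": "queue:SendMessage", "resource": "image-processing-jobs"},
    ]
}
```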

The third is I&O: Serverless technologies do not make other forms of infrastructure (physical machines, containers) obsolete. Most organizations will need a mix of these over time, so it’s critical for I&O leaders to rethink IT operations, from infrastructure management to application governance. The role of I&O teams may be diminished in public cloud fPaaS, but they should still work closely with developers to ensure successful deployments.

What lessons can we learn from early adopters of serverless?
IT leaders can shorten the learning curve and time to adoption of serverless computing by starting with training on general cloud infrastructure as a service (IaaS) and platform as a service (PaaS) environments and by adopting a DevOps culture. Learning the security and technical aspects of serverless deployments is paramount, so build a proof of concept to validate assumptions about the serverless application's design, code, scalability, performance and cost of ownership.
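One lightweight way to start such a proof of concept is to measure invocation latency against a deployed function's HTTP endpoint, which surfaces cold starts and gives rough inputs for cost modeling. The endpoint below is a placeholder, and only the Python standard library is used.

```python
# Hedged proof-of-concept sketch: fire a series of requests at a deployed
# function's HTTP endpoint and record latencies to validate assumptions about
# cold starts, scalability and (indirectly) cost. The URL is a placeholder.
import statistics
import time
import urllib.request

ENDPOINT = "https://example.com/my-function"  # placeholder endpoint
SAMPLES = 25

latencies = []
for _ in range(SAMPLES):
    start = time.perf_counter()
    with urllib.request.urlopen(ENDPOINT, timeout=10) as resp:
        resp.read()
    latencies.append(time.perf_counter() - start)

print(f"p50 latency: {statistics.median(latencies) * 1000:.0f} ms")
print(f"max latency: {max(latencies) * 1000:.0f} ms (often the cold start)")
```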