The way software is built is constantly changing to meet the ongoing pressure of getting to market faster and keeping up with the competition. The software development industry has gone from waterfall to Agile, from Agile to DevOps, from DevOps to DevSecOps, and from monolithic applications to microservices and containers. Today, a new approach is entering the arena and shifting the paradigm yet again. Serverless aims to capitalize on the need for velocity by taking operational work off teams’ plates.

“Serverless has changed the game on the go-to-market aspect and has compressed out a lot of the steps that people never wanted to do in the first place and now don’t really have to do,” Tim Wagner, general manager for AWS Lambda and Amazon API Gateway, said in an interview with SD Times.

Amazon describes serverless as a way to “build and run applications and services without thinking about servers. Serverless applications don’t require you to provision, scale and manage any servers. You can build [serverless solutions] for nearly any type of application or back-end service, and everything required to run and scale your application with high availability is handled for you,” the company wrote on its website.

The Cloud Native Computing Foundation (CNCF) and its Serverless Working Group define serverless as “the concept of building and running applications that do not require server management. It describes a finer-grained deployment model where applications, bundled as one or more functions, are uploaded to a platform and then executed, scaled and billed in response to the exact demand needed at the moment.”

Despite its name, the CNCF stated that serverless doesn’t mean developers no longer need servers to host and run code, and it also doesn’t mean that operations teams are no longer necessary. “Rather, it refers to the idea that consumers of serverless computing no longer need to spend time and resources on server provisioning, maintenance, updates, scaling, and capacity planning. Instead, all of these tasks and capabilities are handled by a serverless platform and are completely abstracted away from the developers and IT/operations teams,” the CNCF wrote. This lets development teams focus on their code and their applications’ business logic, and operations engineers focus on more critical business tasks.

Wagner explained this is a major benefit of serverless because most companies aren’t in the business of managing or provisioning servers. By being able to abstract the operational tasks, capacity planning, security patching and monitoring away, businesses can focus on providing value that matters to the customers. However, Wagner said that while serverless certainly eases up operational tasks, it doesn’t take operational teams out of the equation entirely. Applications and application logic still require monitoring and observability. “The serverless fleet portion goes away and that is the part that frankly was never a joy for the operation team or DevOps team to deal with. Now they get to focus their activities on business logic, the piece that actually matters to the company,” he said.

How to successfully transition to serverless
One of the first things you hear about when it comes to serverless is the cost savings. Serverless provides reduced operational costs and reduced development and scaling costs because you can outsource work and only pay for the compute you need.

“It allows applications to be built with much lower cost and because of that, enterprises are able to make and spend more time getting the applications they want. They can devote more time to the business value and the user experience than they were traditionally able to in the past,” said Mike Salinger, senior director of engineering for the application development software company Progress.

However, Nate Taggart, CEO of Stackery, the serverless solution provider for teams, said the cost-saving benefits are a bit of a red herring. The main benefit of serverless is velocity.

“Every engineering team in the world is looking for ways to increase the speed in which they can create and release business value,” Taggart said.

Velocity is a major benefit of serverless, but achieving speed becomes difficult when you have multiple functions and try to transition a large monolithic, legacy application to serverless. Serverless, for the most part, has a low barrier for entry. It is really easy for a single developer to get one function up and running, according to Taggart, but it becomes more difficult when you try to use serverless as part of a team or professional setting.

To successfully deploy serverless across an application, Taggart explained teams need to utilize the microservices pattern. Microservices is an ongoing trend organizations have been leveraging to take their giant monolithic apps and break them out into different services. “You can’t just take an entire monolithic application and lift and shift to serverless. It is not interchangeable. If you have a big monolithic application chances are you are using VMs and containers, so transitioning to serverless becomes a lot trickier. We see microservices as one of the stepping stones into serverless,” he said.

When transitioning a monolithic application to serverless, Amazon’s Wagner suggested doing it in pieces. An entire application doesn’t have to move to serverless. Take the pieces that would benefit from serverless the most and transition those bits to optimize on cost and business results, he explained. According to Wagner, most enterprises already have systems that are hybrid at some level, so instead of having to decide between serverless, containers and microservices, you can combine the compute paradigms to your benefit.

In addition, professional engineering teams moving to serverless need to provide a consistent and reliable environment. In order to do that, Taggart said organizations need to put company-wide standards in place.

“As an organization, you want to ensure that whoever modifies or ships the application does so in a way that is universal so that you can increase reliability and avoid the ‘it worked on my laptop’ problem. When an individual developer is shipping a serverless application, there’s a sort of default consistency,” he said. “When teams are working on serverless applications, and you have more than one developer involved, consistency and standardization become extremely important.”

At a basic level, consistency and reliability are achieved by having a centralized build process, standard instrumentation, a universal method for rolling back apps, and visibility into the architecture and shared dependencies. More advanced methods include having centrally managed security keys, access roles and policies, and deployment environments, Taggart explained.
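In practice, the centralized build process Taggart describes is often enforced by committing a shared infrastructure-as-code template to the repository, so every developer deploys through the same path. A minimal sketch, assuming AWS SAM (the function name, handler, and runtime below are hypothetical placeholders, not a prescribed setup):

```yaml
# Hypothetical SAM template (template.yaml) kept in source control so every
# team member builds, deploys, and rolls back through the same definition.
AWSTemplateFormatVersion: '2010-09-09'
Transform: AWS::Serverless-2016-10-31
Resources:
  ProcessOrderFunction:
    Type: AWS::Serverless::Function
    Properties:
      Handler: app.handler       # entry point in the deployment package
      Runtime: python3.9
      Timeout: 10                # seconds before the platform kills the call
```

Because a template like this is the single source of truth for the architecture, it also provides the visibility into shared dependencies that Taggart mentions.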

Amazon’s Wagner added that it is very important to limit the people who can call functions, and limit the rights and access capabilities to ensure the security of applications.

According to Progress’ Salinger, a best practice for transitioning applications to serverless is working in a way where your application is stateless. “Stateless applications are done in such a way that your components can be scaled up and down at any time. You have to make sure your application isn’t relying on a specific state to occur,” he said.
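Salinger’s statelessness principle can be sketched as a handler that keeps nothing in the function instance between invocations and pushes all state to external storage. A minimal illustration (the in-memory dict below merely stands in for a managed database; in a real deployment instance memory is not durable, which is exactly why state must live elsewhere):

```python
# Sketch of a stateless handler: any copy of this function can be scaled up
# or torn down at any time because it relies only on the incoming event and
# an external store. STORE is a stand-in for something like a managed
# database; all names here are hypothetical.

STORE = {}  # placeholder for external storage (e.g. a key-value database)

def load_state(key):
    return STORE.get(key, 0)

def save_state(key, value):
    STORE[key] = value

def handler(event):
    """Process one event using only its payload plus externally held state."""
    count = load_state(event["user_id"]) + 1   # read state from the store
    save_state(event["user_id"], count)        # write it back immediately
    return {"user_id": event["user_id"], "visits": count}
```

Because the function instance itself holds nothing the application depends on, the platform is free to run any number of copies in parallel, which is what makes the scale-up-and-down behavior Salinger describes possible.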

Another design principle is to develop your business logic and user experience first. A common pitfall is that developers focus on building “a serverless application” rather than on building their app and running its functions in a way that will scale out easily, Salinger noted.

“It is all about focusing on the user experience and the value of the application, and not having to worry about all the side stuff that is repeatable and less valuable for the developer and for their app,” Salinger said.

Solving for serverless security
Serverless is still an “immature” technology, which means that serverless security is even more immature, according to Guy Podjarny, CEO of the open-source security company Snyk.

“The platforms themselves, such as Lambda and Azure Functions, are very secure, but both the tooling and best practices for securing the serverless applications themselves are lacking and poorly adopted,” Podjarny said.

While serverless doesn’t radically change security, some things become inherently difficult, according to Hillel Solow, CTO of the serverless solution provider Protego Labs. The top weaknesses of serverless include unnecessary permissions, vulnerable code, and wrong configurations, according to Solow.

In addition, Red Hat’s senior director of product management Rich Sharples said old application security risks become new again with serverless. Those risks include function event data injection, broken authentication, insecure serverless deployment configuration and inadequate function monitoring and logging.

Serverless security isn’t all complicated though, Solow explained. For instance, serverless requires teams to turn over ownership of the platform, operating system and runtime to a cloud provider such as Amazon, Microsoft or Google. “The cloud providers are almost always going to do a better job at patching and securing the service, so you don’t have to worry about your team dealing with the things,” he said.

The challenges arise when teams start thinking about how they are going to make sure their application does only what it is supposed to do. Solow explained where you put security and how you put security in place has to change.

In a recent report from Protego Labs, the company found 98 percent of serverless functions are at risk and 16 percent are considered serious. “When we analyze functions, we assign a risk score to each function. This is based on the posture weaknesses discovered, and factors in not only the nature of the weakness, but also the context within which it occurs,” explained Solow. “After scanning tens of thousands of functions in live applications, we found that most serverless applications are simply not being deployed as securely as they need to be to minimize risks.”

According to Podjarny, serverless shuffles security priorities and splits applications into many tiny pieces. “Threats such as unpatched servers and denial of service attacks are practically eliminated as they move to the platform, greatly improving the security posture out of the gate. This reality shifts attacker attention from the servers to the application, and so all aspects of application security increase in importance,” he said. “Each piece creates an attack surface that needs securing, creating a hundred times more opportunities for a weak link in the chain. Furthermore, now that the app is so fragmented, it’s hard to follow app-wide activities as they bounce from function to function, opening an opportunity for security gaps in the cross-function interaction.”

Red Hat’s Sharples added that security teams should think about data in a serverless environment, think about least-privilege controls and fine-grained authorization, practice good software hygiene, and remember data access is still their responsibility.

To successfully address the serverless security pains, Podjarny suggested good application security practices should be owned and operated by the development team and should be accompanied by heavy automation. In addition, Protego Labs’ Solow suggested embracing a more serverless model for security, which puts security controls at the places where your resources are.

“The good news is these are all mitigable issues,” said Solow. “Serverless applications enable you to configure security permissions on individual functions. This allows you to achieve more granular control than with traditional applications, significantly mitigating the risk if an attacker is able to get access. Serverless applications require far more policy decisions to be made optimally, which can be challenging without the right tools, but if done accurately, these decisions can make serverless applications far more secure than their non-serverless analogs.”

Other security best practices Solow suggests include:

  1. Mapping your app to see the complete picture and understand the potential risks
  2. Applying perimeter security at the function level
  3. Crafting minimal roles for each function
  4. Securing application dependencies
  5. Staying vigilant against bad code by applying code reviews and monitoring code and configuration
  6. Adding tests for service configuration to CI/CD
  7. Observing the flow of information to ensure it is going to the correct places
  8. Mitigating Denial-of-Service and Denial-of-Wallet attacks, where hackers “overwhelm” your app and cause it to rack up expenses
  9. Considering strategies that limit the lifetime of a function instance
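The third item above, crafting minimal roles, in practice means giving each function a policy that names only the exact actions and resources it uses. An illustrative AWS IAM policy for a function that only reads and writes a single table (the account ID, region and table name are placeholders, not real identifiers):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["dynamodb:GetItem", "dynamodb:PutItem"],
      "Resource": "arn:aws:dynamodb:us-east-1:123456789012:table/Orders"
    }
  ]
}
```

Scoping permissions this tightly per function is what delivers the granular control Solow describes: if one function is compromised, the attacker gets only that function’s narrow slice of the system.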

The top use cases for serverless
According to the CNCF, there are 10 top use cases for serverless technology:

  1. Multimedia processing: The implementation of functions that execute a transformational process in response to a file upload
  2. Database changes or change data capture: auditing or ensuring changes meet quality standards
  3. IoT sensor input messages: The ability to respond to messages and scale in response
  4. Stream processing at scale: processing data within a potentially infinite stream of messages
  5. Chat bots: scaling automatically for peak demands
  6. Batch jobs / scheduled tasks: Jobs that require intense parallel computation, IO or network access
  7. HTTP REST APIs and web apps: traditional request and response workloads
  8. Mobile backends: ability to build on the REST API backend workload above the BaaS APIs
  9. Business logic: The orchestration of microservice workloads that execute a series of steps
  10. Continuous integration pipeline: The ability to remove the need for pre-provisioned hosts
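The first use case above, multimedia processing in response to a file upload, typically takes the shape of a handler the platform invokes with a storage event. A hedged sketch in Python, with the event shape loosely modeled on S3 bucket notifications and the “transformation” reduced to deriving a thumbnail key (all names are illustrative):

```python
# Sketch of the multimedia-processing use case: the platform calls handler()
# once per upload event. Here the transformational step is simplified to
# computing the output key; a real function would also fetch the file and
# run the actual image processing.

def make_thumbnail_key(key):
    # e.g. "photos/cat.jpg" -> "thumbnails/cat.jpg"
    name = key.rsplit("/", 1)[-1]
    return f"thumbnails/{name}"

def handler(event):
    """React to upload notifications; return the derived object keys."""
    results = []
    for record in event["Records"]:
        key = record["s3"]["object"]["key"]
        results.append(make_thumbnail_key(key))
    return results
```

The same event-in, work-out structure underlies most of the other use cases in the list: the platform delivers the event, scales out as many copies of the handler as the moment demands, and bills only for the invocations that ran.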

The three revolutions of serverless
According to a recent report from cloud computing company DigitalOcean, while serverless is gaining traction, a majority of developers still don’t have a clear understanding of what it is. Hillel Solow, CTO of the serverless solution provider Protego Labs, explained that the meaning of serverless can be confusing because it has three different core values: serverless infrastructure, serverless architecture and serverless operations.

Serverless infrastructure refers to how businesses consume and pay for cloud resources, Solow explained. “What are you renting from your cloud provider? This is about ‘scales to zero,’ ‘don’t pay for idle,’ ‘true auto-scaling,’ etc. The serverless infrastructure revolution proposes to stop leasing machines, and start paying for the actual consumption of resources,” he wrote in a post.
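Solow’s “don’t pay for idle” point comes down to simple arithmetic: cost is a function of invocations and compute time consumed, not of servers provisioned. A back-of-envelope sketch in Python, with illustrative placeholder rates rather than any provider’s actual pricing:

```python
# Back-of-envelope model of consumption-based billing. The two rates below
# are illustrative placeholders only; real provider pricing varies.

PER_MILLION_REQUESTS = 0.20    # dollars per 1M invocations (illustrative)
PER_GB_SECOND = 0.0000167      # dollars per GB-second of compute (illustrative)

def monthly_cost(invocations, avg_duration_s, memory_gb):
    """Cost scales with actual usage; zero traffic costs zero."""
    request_cost = invocations / 1_000_000 * PER_MILLION_REQUESTS
    compute_cost = invocations * avg_duration_s * memory_gb * PER_GB_SECOND
    return request_cost + compute_cost

# 2M requests a month at 200 ms each with 512 MB of memory:
cost = monthly_cost(2_000_000, 0.2, 0.5)
```

The key property, and the contrast with leasing machines, is that `monthly_cost(0, ...)` is zero: the bill scales to nothing when the application is idle.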

Serverless architecture looks at “how software is architected to enable horizontal scaling.” As part of this, Solow says there are key design principles:

  • Setting up serverless storage as file or data storage so that it can scale based on the application’s needs
  • Moving all application state to a small number of serverless storages and databases
  • Making sure compute is event-driven by external events like user input and API calls or internal events like time-based events or storage triggers
  • Organizing compute into stateless microservices that are responsible for different parts of the application logic

Serverless operations defines how you deploy and operate software. According to Solow, operations specifically looks at how cloud-native apps are orchestrated, deployed and monitored. “Cloud native means the cloud platform is the new operating system,” he said. “You are writing your application to run on this machine called AWS. Just as most developers don’t give much thought to the exact underlying processor architecture, and how many hyper-threaded cores they run on, when you go cloud native, you really want to stop thinking about the machines and you want to start thinking about the services. That’s how you write software for Android or Windows, and that’s how you should be writing software for the cloud.”

In addition, serverless is often referred to as Functions-as-a-Service or FaaS because it is an easier way to think about it, according to Red Hat’s senior director of product management Rich Sharples. FaaS is actually a subset of the broader term serverless, but it is an important part because it is “the glue that wires all these services together,” he explained.

“FaaS is a programming model that really speaks to having small granularity of deployable units, and the ability that comes from being able to separate and segregate that out as well as separate it from some of the operational pieces,” said Tim Wagner, general manager for AWS Lambda and Amazon API Gateway. “When I think of serverless, I usually mean a functions model, which is operated by a public cloud vendor, and offers the perception of unbounded amounts of scale and automated management.”

Serverless tools and frameworks
Apache OpenWhisk: Apache OpenWhisk is an open-source, serverless cloud platform designed to execute functions in response to events. It is currently undergoing incubation at the Apache Software Foundation.

AWS Lambda: AWS Lambda is perhaps one of the earliest and most popular serverless computing platforms on the market. Features include the ability to extend other AWS services with custom logic, ability to build custom back-end services, and the ability to use any third-party library. In addition, Amazon explained developers can run code for any type of app or backend service with zero administration.

Azure Functions: Developed by Microsoft, Azure Functions aims to provide developers with an event-driven, serverless compute experience. It features the ability to manage apps instead of infrastructure, is optimized for business logic, and enables developers to create functions in the programming language of their choice.

CloudEvents: CloudEvents is an ongoing effort to develop a specification for describing event data in a common way. “The lack of a common way of describing events means developers must constantly re-learn how to receive events. This also limits the potential for libraries, tooling and infrastructure to aide the delivery of event data across environments, like SDKs, event routers or tracing systems. The portability and productivity we can achieve from event data is hindered overall,” according to the website. The end goal is to eventually offer the specification to the Cloud Native Computing Foundation.
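To make the goal concrete, a CloudEvents envelope wraps the event payload in a small set of common context attributes so any consumer can parse it the same way. An illustrative example using the attribute names from the specification’s 1.0 revision (earlier drafts of the spec used different attribute names, and the values here are placeholders):

```json
{
  "specversion": "1.0",
  "id": "a1b2c3",
  "source": "/myapp/orders",
  "type": "com.example.order.created",
  "time": "2018-08-01T12:00:00Z",
  "datacontenttype": "application/json",
  "data": { "orderId": 42 }
}
```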

Cloud Functions: Cloud Functions is an event-driven serverless computing solution from Google Cloud. Key features include no server management, automatic scaling, running code in response to events, and connecting and extending cloud services.

Fission: Fission is an open-source Functions-as-a-Service serverless framework for Kubernetes designed by Platform 9, a hybrid cloud and container orchestration provider. Fission was built as an alternative to AWS Lambda. According to the company, Lambda causes problems for developers when it comes to the size of their deployment package, amount of memory, and number of concurrent function executions. Fission is designed to free teams from cloud vendor lock-in. With the use of Kubernetes, Fission can run anywhere Kubernetes runs and removes some of the “software plumbing” the use of containers creates. With Fission, developers don’t have to worry about building containers or managing Docker registries.

IBM Cloud Functions: IBM offers a polyglot Function-as-a-Service programming platform based on Apache OpenWhisk. It is designed to execute code on demand in a scalable serverless environment. Features include access to the OpenWhisk ecosystem, ability to accelerate application development, cognitive services, and pay for use.

Kinvey: Progress Kinvey is a serverless cloud platform for building apps for mobile, web and other digital channels. The platform enables developers to build apps without thinking about services so they can focus on the value of their app and not have to worry about the infrastructure, backend code, and scaling.