In today’s fast-moving industry, DevOps teams everywhere are wrestling with the same problem: what is the best way to build, deploy, and maintain applications in a cloud-native world? That problem has spawned a heated debate between the serverless and container communities. While I am usually a firm believer that the answer lies somewhere in the middle, I have seen this play out before and I know how it ends. Spoiler alert: serverless will fade into oblivion, just like its predecessors.

Services such as Heroku and Google App Engine have been trying to abstract away the dreaded server for a long time. While more configurable and flexible than their predecessors, serverless platforms continue to suffer from many of the same problems. Scaling a black-box environment such as AWS Lambda or Google Cloud Functions can be a serious challenge, often resulting in more work than it is worth.

So, what exactly is serverless? Serverless is a cloud-native framework that lets users execute code in response to any number of available events, without requiring them to spin up a traditional server or run a container orchestration engine. Cloud providers such as AWS offer fairly mature toolchains for deploying and triggering Lambda functions. Using these tools, a developer can essentially duct-tape together a set of services that emulates all of the functionality that would normally be available in a server or container model. There are numerous challenges in scaling this approach, some of which I have listed below.
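But first, for the uninitiated, here is roughly what the basic unit looks like. A minimal Python handler for AWS Lambda might be something like the sketch below (the `name` field is illustrative; the real shape of `event` depends on whichever trigger you wire up):

```python
import json

def handler(event, context):
    """Entry point that AWS Lambda invokes once per event.

    `event` is a dict whose shape depends on the trigger (API Gateway
    request, S3 notification, SQS message, etc.); `context` carries
    runtime metadata such as the request ID.
    """
    name = event.get("name", "world")  # illustrative field
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}"}),
    }
```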

Complexity
As with most things in the development world, abstraction leads to complexity, and serverless is no different. Take a simple Python-based RESTful web service, for example. To deploy it to AWS Lambda, you must first upload your code to an S3 bucket, build IAM roles and permissions that allow access to that bucket in a secure fashion, create an API Gateway, tie the gateway to your Lambda function using a Swagger (OpenAPI) model, and finally associate the proper IAM roles with your Lambda function. Any one of these stages comes with a staggering number of configuration options. Your once-simple REST service has now been broken up into numerous complex and infinitely configurable components. Sounds like fun to maintain and secure.
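To make the point concrete, here is a rough boto3 sketch of those stages. Every bucket, role, and file name below is hypothetical, and most of the required configuration is elided; the real thing is considerably longer:

```python
import json
import boto3

s3 = boto3.client("s3")
iam = boto3.client("iam")
lam = boto3.client("lambda")
apigw = boto3.client("apigateway")

# 1. Upload the packaged code to S3.
s3.upload_file("service.zip", "my-deploy-bucket", "service.zip")

# 2. Create an execution role Lambda can assume (access policies elided).
role = iam.create_role(
    RoleName="my-service-role",
    AssumeRolePolicyDocument=json.dumps({
        "Version": "2012-10-17",
        "Statement": [{
            "Effect": "Allow",
            "Principal": {"Service": "lambda.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }],
    }),
)

# 3. Create the function itself, pointing at the S3 artifact.
lam.create_function(
    FunctionName="my-service",
    Runtime="python3.12",
    Role=role["Role"]["Arn"],
    Handler="app.handler",
    Code={"S3Bucket": "my-deploy-bucket", "S3Key": "service.zip"},
)

# 4. Stand up the API Gateway front end, typically by importing a
#    Swagger/OpenAPI document that maps each route to the function.
apigw.import_rest_api(body=open("swagger.json", "rb").read())

# 5. ...plus invoke permissions (lam.add_permission), stage
#    deployments, and more before a single request will succeed.
```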

Scalability
Scaling a black box is always a challenge. Many have attempted to provide a single, turnkey way to deploy and scale applications on top of one. The trouble, however, is that as an application grows in complexity, it eventually hits bottlenecks at scale. To solve them, developers need to dive deep into the internals of the environment to understand where the bottleneck actually is. Unfortunately, cloud-native serverless frameworks provide no good way to see what is going on under the hood. This lack of visibility can send a developer down a long and winding path of guessing why the application isn’t performing as expected.
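About the best you can do from inside the box is instrument the function yourself. A sketch of one such workaround appears below: a hypothetical decorator that logs timing and cold-start information to CloudWatch Logs, since print output is often the only window you get into the runtime:

```python
import time
import functools

_cold_start = True  # module-level state survives across warm invocations

def timed(handler):
    """Wrap a Lambda handler to log rough timing and cold-start info."""
    @functools.wraps(handler)
    def wrapper(event, context):
        global _cold_start
        start = time.monotonic()
        cold, _cold_start = _cold_start, False
        try:
            return handler(event, context)
        finally:
            # print() output lands in CloudWatch Logs.
            print(
                f"request={context.aws_request_id} "
                f"cold_start={cold} "
                f"duration_ms={(time.monotonic() - start) * 1000:.1f} "
                f"remaining_ms={context.get_remaining_time_in_millis()}"
            )
    return wrapper
```

Even with instrumentation like this, you are measuring symptoms from inside the box, not inspecting the platform that causes them.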

Overrun by functions
Serverless is built to let users easily deploy single functions triggered by specific events. While this can be useful for applications that only handle one method, like your standard ‘Hello, World’ app, it is not very useful for real-world applications with hundreds or thousands of endpoints. This fragmented approach makes it very challenging to track, deploy, maintain, and debug what is now a highly distributed application. Congratulations, you now have 200 very tiny applications to maintain. Have fun with that.
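To illustrate the fragmentation, compare a routing table that lives in one process with the function-per-endpoint layout serverless nudges you toward (all names here are illustrative):

```python
# One container: a single app owns every route; one artifact to
# build, deploy, monitor, and roll back.
from flask import Flask

app = Flask(__name__)

@app.route("/users")
def list_users():
    return {"users": []}

@app.route("/orders")
def list_orders():
    return {"orders": []}

# Function-per-endpoint: each handler typically becomes its own
# deployment artifact, IAM role, log group, and API Gateway route.
def list_users_handler(event, context):
    return {"statusCode": 200, "body": '{"users": []}'}

def list_orders_handler(event, context):
    return {"statusCode": 200, "body": '{"orders": []}'}

# Now multiply by a few hundred endpoints.
```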

Vendor lock-in
For today’s enterprises, agility is paramount. Leveraging multiple providers gives an enterprise the ultimate flexibility and access to best-in-class services. Furthermore, by staying multi-cloud, enterprises control their own destiny when it comes to price negotiation. But by building your application on top of serverless technology, your code must directly integrate with the serverless platform. As your application (or should I say, loose grouping of functions) grows, it becomes harder and harder to maintain a provider-agnostic code base.
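One common mitigation, sketched below, is to keep the business logic free of cloud SDKs and confine the Lambda-specific event parsing to a thin adapter (the module layout and function names are hypothetical):

```python
import json

# core.py: plain Python, no cloud SDKs, portable to any platform.
def create_user(name: str) -> dict:
    return {"id": 1, "name": name}

# aws_adapter.py: the only file that knows about Lambda's event shape.
def handler(event, context):
    body = json.loads(event.get("body") or "{}")
    user = create_user(body.get("name", "anonymous"))
    return {"statusCode": 200, "body": json.dumps(user)}
```

Even then, the triggers, permissions, and deployment pipeline remain provider-specific, so the adapter only softens the lock-in; it does not eliminate it.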

Testing
Testing your code inside a serverless framework is incredibly time-consuming. While there are some local tools that can help emulate a serverless environment, none are perfect. As a result, the only true way to test is to upload your code and run it inside the serverless framework itself. This can add hours of testing and debugging; for example, it can take up to two minutes just to upload a code change to Lambda. So, until there is an IDE that can detect and solve logic mistakes, you’re probably in for a long night (or weekend).
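You can at least unit-test the handler logic locally by calling it with a hand-built event. Assuming the toy handler from earlier lives in a hypothetical `app.py`, a pytest-style test might look like the sketch below, but note that it exercises none of the IAM, trigger, or gateway wiring where the real surprises live:

```python
from app import handler  # the toy Lambda entry point shown earlier

def test_handler_greets_by_name():
    # Hand-built stand-in for a trigger event; a real payload would
    # carry many more fields the handler happens not to read.
    event = {"name": "Ada"}

    response = handler(event, context=None)  # handler ignores context here

    assert response["statusCode"] == 200
    assert "Ada" in response["body"]
```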

In conclusion, serverless frameworks continue to chase the ever-elusive goal of letting engineers build applications without having to worry about any pesky computing components. Serverless is a wonderful option for anyone who enjoys slowly slamming their head into a keyboard over many hours while testing their 200 individually packaged functions. After your testing is complete, you get to watch your application “grow up” only to hit complexity and scaling issues, leaving you out of duct tape and patience. While this sounds like fun, I am going to stick with my predictable, performant container that can run anywhere, including on my local system.