In the world of software, things are getting smaller all the time: smaller teams, smaller pieces of code, smaller releases, smaller places for code to live and execute (containers). The point of getting smaller is to let your organization think bigger, by making the most of cloud resources and delivering value to your customers and users faster. Microservices are the latest iteration of this movement away from large monolithic applications, which don’t fare well in the cloud.

The idea behind microservices makes a lot of sense when running applications in the cloud. By breaking applications into smaller and smaller pieces, you can support agility, on-demand scale, and frequent updates. It’s much easier and less risky to change, update, or move around little pieces of an application than to shift or change the entire application in bulk. It also means users rarely notice when you are making application updates, because the changes happen in small increments, all the time. Disruptions are minimal and errors can be corrected swiftly. And you can have many small, independent teams managing pieces of the application, which aligns with highly efficient DevOps methodologies.

Finally, CIOs know that simply moving a legacy application to the cloud has limited economic benefit. Companies only really save money by re-architecting applications to take advantage of the cloud’s distributed, elastic nature and of the many services it offers for high performance in areas such as databases, storage, and analytics. This is all good, right?

Too much of a good thing?
The problem is that microservices can quickly become overwhelming. Suddenly you’ve got tiny bits of code, each covering just one small piece of the functionality that supports a business process. Development teams sometimes build far more microservices into an application than it needs, when simpler would be better.

Orchestrating and managing all of these services so that they work together and the application runs reliably and securely is challenging. Each microservice still has the same infrastructure requirements as a larger application: backup and recovery, monitoring, networking, logging. This is where a newer concept called the service mesh comes into play.

The evolving role of the service mesh
When you call your local city government, there’s an operator who, hopefully, quickly gets you to the right department to have your question answered. The service mesh operates in a similar manner: it sits on the network, handles all the communication between microservices, and provides access to shared services and tools such as service discovery, failure detection and recovery, load balancing, encryption, logging, monitoring, and authentication. This lets your development teams focus their time and effort on the services themselves, rather than writing the code and logic to discover all the other services and physically network to them. The service mesh handles all the connections.
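
To make this concrete, below is a minimal sketch, in Python, of what application code tends to look like once a mesh is in place. The “recommendations” service name, port, and endpoint are hypothetical; the point is what is absent from the code, because the mesh’s sidecar proxy handles discovery, load balancing, retries, mutual TLS, and telemetry outside the application.

# A minimal sketch of service-to-service code running behind a mesh sidecar.
# The "recommendations" service name and endpoint are hypothetical.
# Note what is missing: no discovery, load-balancing, retry, or TLS logic
# appears here, because the sidecar proxy takes care of those concerns.
import json
import urllib.request

def get_recommendations(user_id: str) -> dict:
    # Plain HTTP to a plain service name; the mesh resolves and secures the call.
    url = f"http://recommendations:8080/users/{user_id}/recommendations"
    with urllib.request.urlopen(url, timeout=2.0) as response:
        return json.load(response)

if __name__ == "__main__":
    print(get_recommendations("42"))

Without a mesh, the retry loops, TLS setup, and service-discovery lookups would all have to be written and maintained inside each service.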

A service mesh is quickly becoming essential to container management. It reduces developer effort: developers no longer have to worry about all the dependencies and communication between containers. Instead, they attach an intelligent proxy, or “sidecar,” to each container to link it (and the microservice it runs) to the service mesh.

The most popular service mesh today is an open-source technology called Istio, originally developed by Google, IBM, and Lyft. Vendors such as Cisco, VMware, and others are embedding Istio in their products. Other open-source service mesh technologies include HashiCorp’s Consul and Linkerd (pronounced “linker-dee”); Envoy is the open-source proxy that Istio and several other meshes use as their data plane. Service mesh technology is relatively new, but the tools to manage it are maturing.

What to consider before deploying a service mesh
A service mesh may not be appropriate if your organization’s technology stack is mostly homogeneous, or if you need such fine control over how your services communicate that a general-purpose mesh would get in the way. Routing every call through this new infrastructure layer can also add latency, so if the application has a very low tolerance for latency, a service mesh could be problematic. Financial services is one example: transactions need to occur in microseconds, and anything that adds time could have a negative impact.

In addition, there is a level of complexity that comes with setting up and managing the service mesh. With Istio, for example, you have to define sophisticated rules that determine what happens to incoming requests, and you are responsible for the mesh’s telemetry collection and visualization, its security, and its networking. Organizations must weigh the cost of these responsibilities and decide whether a mesh makes sense. Typically, the more complex an application is and the greater its requirements for things such as response time, unpredictable scale, and variable workloads, the more likely you are to need a service mesh.
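
As a rough illustration of what one of those rules looks like, the sketch below mirrors the structure of an Istio VirtualService in a Python dictionary; in a real cluster this would be written as Kubernetes YAML, and the “reviews” service and its v1/v2 subsets are hypothetical placeholders. The rule sends 90 percent of incoming traffic to one version of a service and 10 percent to another.

# The shape of an Istio VirtualService routing rule, mirrored as a Python
# dict purely for illustration; in practice this is Kubernetes YAML applied
# to the cluster. The "reviews" service and its v1/v2 subsets are hypothetical.
virtual_service = {
    "apiVersion": "networking.istio.io/v1beta1",
    "kind": "VirtualService",
    "metadata": {"name": "reviews"},
    "spec": {
        "hosts": ["reviews"],
        "http": [{
            "route": [
                {"destination": {"host": "reviews", "subset": "v1"}, "weight": 90},
                {"destination": {"host": "reviews", "subset": "v2"}, "weight": 10},
            ],
        }],
    },
}

if __name__ == "__main__":
    # Printing the rule just shows its structure; maintaining many such rules
    # is part of the operational cost of running a mesh.
    print(virtual_service)

Multiply rules like this across dozens of services, plus the telemetry, security, and networking configuration around them, and the operational overhead becomes clear.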

Certainly, adding a service mesh to your infrastructure will add complexity in some ways, yet it will pay off in spades by decreasing overall management and maintenance needs as you transition to a microservices-heavy, cloud-native application environment. Done thoughtfully, service mesh technology can enable better speed, performance, flexibility, and economics for your applications.