Microservices are a new software development pattern emerging from recent trends in technology and practice. Agile methods, DevOps culture, PaaS, application containers and the widespread adoption (both mentally and technically) of continuous integration and delivery methods are making it possible to build truly modular, large-scale systems.

What do teams that successfully build microservices do differently from those that don’t? It all boils down to how you start, according to Arun Gupta, director of technical marketing and developer advocacy at Red Hat. SD Times interviewed Gupta about how (and why) enterprises should build microservices.

Are microservices just a new term for service-oriented architecture?
Gupta: SOA is commonly defined as application components that communicate to provide services to other components over a network. Microservices are SOA 2.0 (or “SOA for hipsters,” as you quoted in a recent article), because they deliver on the original promise of SOA. Today, multiple microservices can provide a complete application experience to the customer.

The key difference is the lack of an Enterprise Service Bus, which was very vendor-specific and had a lot of built-in logic. Now, microservices logic is built into the endpoint, and different services exchange payload over a dumb HTTP pipe. SOAP and other heavyweight protocols are replaced by lightweight JSON over HTTP or REST. There is no centralized governance and persistence, and each microservice has its own data store. In addition, Continuous Integration/Delivery (CI/CD) are key. Polyglot programming and persistence, a crack DevOps team, containers (immutable VMs would do too) and PaaS also differentiate microservices from SOA.
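To make the contrast concrete, here is a minimal sketch of such a lightweight endpoint using JAX-RS: logic lives at the endpoint, and the payload is plain JSON over a dumb HTTP pipe. The OrderResource class, its path and its hard-coded payload are purely illustrative and not part of any specific product.

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.core.MediaType;
import javax.ws.rs.core.Response;

// The service exposes its logic at the endpoint and returns plain JSON over HTTP.
@Path("orders")
public class OrderResource {

    @GET
    @Path("{id}")
    @Produces(MediaType.APPLICATION_JSON)
    public Response getOrder(@PathParam("id") String id) {
        // In a real service the payload would come from the service's own data store.
        String payload = "{\"id\":\"" + id + "\",\"status\":\"SHIPPED\"}";
        return Response.ok(payload).build();
    }
}
```

A consumer needs nothing more than an HTTP client that can parse JSON; there is no ESB, no SOAP envelope and no shared middleware in the path.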

What are the advantages of microservices?
The “micro” in microservices does not refer to size; it refers to scope, which often also means less code. Skinny archives lead to faster deployment times, enabling CI/CD. Each service can scale independently on the X-axis (horizontal scaling through cloning) and the Z-axis (sharding or geolocation-based partitioning). Fault isolation improves: A misbehaving service, such as one with a memory leak or unclosed database connections, affects only that service. Microservices eliminate any long-term commitment to a technology stack, giving you freedom of choice. In a nutshell, they are easier to develop, understand and maintain.

Saying they eliminate any commitment to a given technology stack is a bold statement.
To be even bolder, open-source technology is driving the microservices revolution. The problem with SOA was the vendor lock-in with proprietary middleware, too often focused on ESB, SOAP, and centralized governance/persistence. Now, true SOA is finally possible because we have an entire open-source stack for every element of the vision.
Well-defined interfaces, typically using REST, along with CI/CD, DevOps, and PaaS or containers, are critical elements of microservices.

But what about what Martin Fowler calls the “microservices premium”?
As Fowler puts it, “Don’t even consider microservices unless you have a system that’s too complex to manage as a monolith.” If you can’t build a well-structured monolith, what makes you think microservices are the answer? If your monolith is a big ball of mud, your microservice will be a bag of dirt. But if you can attenuate the complexity by decreasing the coupling in your system with well-defined microservices, you will increase your team’s productivity—eventually.

Microservices are definitely not a free lunch. Each service is fully independent, but there are operational requirements, which are called NoOps or Outer Architecture.

Name some key NoOps components.
DevOps, PaaS, containers or immutable VMs, service replication/registry/discovery, and proactive monitoring and alerts are some of the components required for a microservice architecture to succeed. This could be a significant investment without an immediate return, so it doesn’t make sense for every team and project.

That’s why it’s critical to take a monolith-first approach. This ensures that you’ve built the application following good design principles. Over time, you’ll feel the need to scale the application, and refactoring a well-structured monolith into microservices is much easier.

How do you define a “good” monolith application?
Take a Java EE monolithic application, typically defined as a WAR or an EAR archive. The app and its functions are in one package. For example, an online shopping cart may have User, Catalog and Order functionalities. All Web pages are in the root of the application, all corresponding Java classes are in the WEB-INF/classes directory, and all resources are in the WEB-INF/classes/META-INF directory.

Good software architecture principles mean that even a trivial shopping cart app should have the following (see the sketch after this list):
• Separation of concerns
• High cohesion and low coupling using well-defined APIs
• No redundancy
• Separate interfaces, APIs and implementations, following the Law of Demeter
• Domain-driven design to keep related objects together
• Nothing extra
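As a rough illustration of the “well-defined APIs” point above, the sketch below shows how the Catalog functionality of the hypothetical shopping cart might hide its implementation behind a small interface. All class and method names are made up for the example; the point is that callers depend on the interface, which keeps coupling low and makes the component easy to carve out later.

```java
// The Catalog functionality is exposed only through a narrow, well-defined API;
// the rest of the monolith depends on the interface, not the implementation.
public interface CatalogService {
    Product findProduct(String productId);
}

// Implementation details (persistence, caching) stay hidden behind the interface.
class JpaCatalogService implements CatalogService {

    @Override
    public Product findProduct(String productId) {
        // Look up the product in the catalog's own persistence layer.
        return new Product(productId, "placeholder");
    }
}

class Product {
    final String id;
    final String name;

    Product(String id, String name) {
        this.id = id;
        this.name = name;
    }
}
```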

Assuming you can refactor this monolith into microservices, how do you manage a swarm of tiny apps?
Aside from the human questions, which I’ll get to, there are a bunch of things to be aware of. Once you have the tools in place, you are well on the road toward a highly scalable architecture.

ZooKeeper, etcd, Consul and Snoop are options for service registry and discovery.

Applications can be containerized using Docker, and orchestrators such as Google’s Kubernetes or Docker Swarm can replicate those containers at scale. Docker Machine provisions Docker hosts, and Docker Compose defines multi-container applications. Jenkins is the most commonly used option for creating your CI environment; consider Shippable if you are using Docker and Kubernetes. For dependency resolution, Nexus is highly recommended. For failover and resiliency, libraries like Hystrix and Ribbon from Netflix OSS help you implement the circuit-breaker pattern in your applications. And for service monitoring, alerts and events, there are ELK, New Relic and other similar tools.

But remember, many of the Docker solutions for managing containers are still in beta and not yet recommended for production.
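To illustrate the circuit-breaker pattern mentioned above, here is a minimal sketch using Netflix OSS Hystrix. The CatalogCommand class and the catalog-service URL are hypothetical names for the example.

```java
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.MediaType;

import com.netflix.hystrix.HystrixCommand;
import com.netflix.hystrix.HystrixCommandGroupKey;

// Wraps a call to a remote Catalog service; if calls fail or time out repeatedly,
// Hystrix opens the circuit and serves the fallback instead of the remote call.
public class CatalogCommand extends HystrixCommand<String> {

    private final String productId;

    public CatalogCommand(String productId) {
        super(HystrixCommandGroupKey.Factory.asKey("CatalogService"));
        this.productId = productId;
    }

    @Override
    protected String run() {
        // The remote call that the circuit breaker protects.
        return ClientBuilder.newClient()
                .target("http://catalog-service:8080/catalog/" + productId)
                .request(MediaType.APPLICATION_JSON)
                .get(String.class);
    }

    @Override
    protected String getFallback() {
        // Returned when the circuit is open or the remote call fails.
        return "{\"id\":\"" + productId + "\",\"status\":\"unavailable\"}";
    }
}
```

Executing new CatalogCommand("42").execute() runs the remote call through the breaker and degrades gracefully when the Catalog service is unhealthy, so one misbehaving service does not cascade failures across the system.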

Have you been surprised by the flood of interest in Docker and containerization?
In software, there are always waves of new technology every few years. Jenkins was a tipping point, commoditizing CI/CD. Amazon commoditized the cloud. Lots of people have been using containers for a while in the form of immutable VMs. Now Docker is commoditizing Linux containers, which is quite an old technology. But rather than talk about containers, I think it’s more useful to focus on microservices.

How do you decompose a monolith into microservices, then?
Identify the business boundaries in your application and start decomposing each into its own microservice, following the Single Responsibility Principle: Do one thing only, and do it well.

Once you define basic microservices, you may compose them using aggregator, proxy, chained, branch and other design patterns.
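As one example, a minimal sketch of the aggregator pattern might look like the following: a hypothetical DashboardResource fans out to separate Catalog and Order services and merges their JSON payloads into a single response. All service names and URLs are illustrative.

```java
import javax.ws.rs.GET;
import javax.ws.rs.Path;
import javax.ws.rs.PathParam;
import javax.ws.rs.Produces;
import javax.ws.rs.client.Client;
import javax.ws.rs.client.ClientBuilder;
import javax.ws.rs.core.MediaType;

// Aggregator: calls two independent microservices and composes their results
// into one payload for the caller.
@Path("dashboard")
public class DashboardResource {

    private final Client client = ClientBuilder.newClient();

    @GET
    @Path("{userId}")
    @Produces(MediaType.APPLICATION_JSON)
    public String view(@PathParam("userId") String userId) {
        String catalog = client.target("http://catalog-service:8080/catalog/featured")
                .request(MediaType.APPLICATION_JSON)
                .get(String.class);
        String orders = client.target("http://order-service:8080/orders/user/" + userId)
                .request(MediaType.APPLICATION_JSON)
                .get(String.class);
        // Merge the two payloads into a single response.
        return "{\"catalog\":" + catalog + ",\"orders\":" + orders + "}";
    }
}
```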

Why should microservices teams care about Conway’s Law?
Melvin Conway’s observation in 1968 was that organizations design systems “whose structure is a copy of the organization’s communication structure.” One of the biggest challenges for microservices is that it requires re-aligning your teams.

Amazon popularized the “two-pizza team”: eight to 10 people in each microservice team, organized around functionality, not architecture. The point of Conway’s Law is just to make you aware that how you communicate will likely be replicated in your apps, so if it’s dysfunctional, your apps will be too. That’s the advantage of having a DevOps practice in place before you do microservices: You have the communication figured out.

If you don’t have a working DevOps practice, don’t start with a microservice approach, because the ROI is not visible at first. Build the monolith, then refactor it into microservices. Well-designed software is one part of the solution, and the team is the other part.

Speaking of data, how do you integrate data with stateless microservices?
That’s a good question, and one that I’m actively researching. In microservices, there is no shared data; each has its own data store. This is a big difference from SOA. The question is, is it really a separate data store, or just a schema? Having multiple copies of an enterprise database could become a licensing nightmare. Microservices enable polyglot persistence, so you can pick the right data store based on your service, as opposed to jamming data into your “corporate data store.” However, data stores may need to be aligned for consistency.

That’s where generic ETL tools or Red Hat JBoss Data Virtualization could really help. Event sourcing is a well-known design pattern that helps to align data stores to cope with retroactive changes.
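A bare-bones sketch of the event-sourcing idea, with hypothetical types: rather than overwriting current state, the service appends immutable events to a log and derives state by replaying them, which also makes retroactive corrections possible.

```java
import java.util.ArrayList;
import java.util.List;

// An immutable fact about an order, e.g. "CREATED", "PAID", "SHIPPED".
class OrderEvent {
    final String orderId;
    final String type;

    OrderEvent(String orderId, String type) {
        this.orderId = orderId;
        this.type = type;
    }
}

class OrderEventStore {
    private final List<OrderEvent> log = new ArrayList<>();

    void append(OrderEvent event) {
        log.add(event); // append-only; events are never updated in place
    }

    String currentStatus(String orderId) {
        String status = "UNKNOWN";
        for (OrderEvent e : log) { // replaying the log yields the current state
            if (e.orderId.equals(orderId)) {
                status = e.type;
            }
        }
        return status;
    }
}
```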

In talking to customers and developers, data normalization is indeed one of the biggest pain points for microservices. But if you’ve designed your monolith well, this can be ameliorated.

Among the enterprises you see launching microservices, what are the keys to success?
A crack CI/CD infrastructure is a must. This also starts you down the path to a DevOps culture. Some teams have taken a two-phased approach of refactoring a monolith into a well-designed monolith, and then refactoring it into multiple microservices. Typically, the teams start by carving out a single reusable piece into a microservice, then phasing in the rest of the application. The microservices journey could take anywhere from six months to two years—your mileage may vary.

How is Red Hat positioned to deliver on the microservices stack?
Microservices are heavily driven by open source. That puts a company like Red Hat, which creates, maintains and services open-source solutions, in a unique position to deliver on the promise. Some of the key pieces we offer are:
• A lightweight operating system focused on running containers effectively, such as Red Hat Enterprise Linux Atomic Host.
• Built-in container orchestration. Red Hat is the only vendor that provides a seamlessly integrated experience of running containers in a PaaS (OpenShift by Red Hat) and an OS with native Kubernetes support.
• Influencing the direction of containers. Red Hat is the second-biggest contributor to Docker (after Docker) and to Kubernetes (after Google). We not only create top-class open-source software, but also partner where necessary.
• A rich middleware stack (xPaaS) with a full range of technology from Java EE, including integration, business process automation, business rules and data integration. This is complemented by community projects such as WildFly Swarm and Vert.x that make it easy to deploy isolated applications.
• Standard hosting infrastructure, such as OpenStack.
• An extensive partner ecosystem, eager to learn and implement the successful architectures.

This article is sponsored by Red Hat.