Red Hat’s upcoming messaging product release, Red Hat JBoss A-MQ 7, redefines what enterprise developers should look for in a high-performance messaging platform. It includes an updated broker built on a modern, asynchronous architecture for improved performance and scalability, and a new router component that can be networked to create a messaging backbone spanning data centers, public clouds, and on-premises deployments.
“Supporting standards-based protocols means that we can support a broad set of client libraries with A-MQ,” said David Ingham, director of Software Engineering at Red Hat. “Our open source implementation combined with open standard protocols provides the ultimate assurance that any application built with our product has even more options over time.”
Build a Network of Messaging Routers
One of A-MQ 7’s most innovative enhancements is the Interconnect message router. Like a broker, A-MQ Interconnect is an intermediary between clients; unlike a broker, it routes messages directly between producers and consumers instead of holding them in a queue.
“You can have one consumer or many consumers listening on a particular address. If you have many consumers, you can change the properties of the address to be anycast or multicast, to deliver messages to one or all of the consumers respectively,” said Ingham.
The router’s load-sharing capability allows multiple services to connect to the router, listen on the same anycast address, and share the request volume; the router automatically delivers each request to the least-loaded service. Routers can also be networked, so routers running in different data centers form a messaging backbone that spans physical network boundaries.
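These delivery modes are configured per address. A sketch assuming the qdrouterd.conf syntax of the upstream Apache Qpid Dispatch Router, on which Interconnect is based (the address prefixes here are illustrative, not part of the product defaults):

```
# Multicast: every attached consumer receives its own copy of each message
address {
    prefix: prices
    distribution: multicast
}

# Balanced (an anycast mode): each message goes to the consumer with the
# smallest backlog of unsettled deliveries, i.e. the least-loaded service
address {
    prefix: service.requests
    distribution: balanced
}
```

The `closest` distribution is a further anycast option that prefers the consumer fewest network hops away.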
“It operates a lot like IP where you have redundant paths through a network,” said Ingham. “For any given message, the network will determine the optimum path through the network for the fastest delivery, but if the optimum path becomes unavailable, then the traffic will be routed via an alternate path. You get resiliency and fault tolerance by having redundant paths in the network.”
If an application service is deployed in multiple locations, Interconnect routes requests to the local instance. If the local instance becomes unavailable or backed up, requests are routed to other instances across the network. This technique enables cloud bursting, where spikes in load are transparently pushed to services running in a public cloud.
It’s possible to create an Interconnect backbone spanning AWS and Microsoft Azure with instances of application services running on each. If everything is running normally, the load might be balanced evenly between the service instances in AWS and Azure. If one public cloud becomes unavailable, the load would be transparently routed to the other.
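A backbone like this is built by giving each router a connector to its peers. A sketch in the same qdrouterd.conf syntax, showing the AWS-side router connecting to its Azure peer (the hostname is illustrative):

```
connector {
    role: inter-router           # a router-to-router link, not a client connection
    host: router.azure.example.com
    port: amqps                  # AMQP over TLS, port 5671
}
```

Giving a router connectors to more than one peer creates the redundant paths that provide resiliency and fault tolerance.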
One Component, Many Uses
A-MQ Interconnect has several different uses, some of which are already in production. In addition to using the A-MQ Interconnect router for high-performance direct messaging, an alternative payment solution provider is exploring its use as a messaging backbone for service interactions across data centers. In Europe, a financial services company is planning to use A-MQ Interconnect to enable secure connectivity in its DMZ. Interconnect can also be combined with the broker to provide transparent queue sharding across multiple broker instances to accommodate high load.
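The sharding arrangement can be sketched in the same configuration syntax: the router attaches to several broker instances and auto-links the same queue address to each, so messages arriving on that address are spread across the shards. Assumed qdrouterd.conf syntax with illustrative names:

```
# Attach two broker instances (hostnames are illustrative)
connector {
    name: broker1
    role: route-container
    host: broker1.example.com
    port: amqp
}
connector {
    name: broker2
    role: route-container
    host: broker2.example.com
    port: amqp
}

# Route messages arriving on "orders" out to a queue on each broker;
# the router distributes deliveries across the attached shards
autoLink {
    address: orders
    connection: broker1
    direction: out
}
autoLink {
    address: orders
    connection: broker2
    direction: out
}
```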
Red Hat provides common tooling for monitoring and managing the brokers and routers.
Messaging as a Service Coming Soon
Red Hat plans to unveil “Messaging as a Service” in the second half of 2017. It will combine the core capabilities of the A-MQ 7 broker and router with other components to create an elastically scalable messaging service that runs on the Red Hat OpenShift container platform. Developers will be able to set up messaging for their applications without acquiring or configuring brokers or routers. Messaging as a Service will be able to run in data centers, in public clouds, or on OpenShift Online.
Learn more at www.redhat.com.