If you haven’t heard of polyglot persistence before, simply put, it means using different database technologies to handle specific needs. The term was derived from something else you may have heard: polyglot programming, which expresses the idea that applications should be written in a mix of languages to take advantage of the fact that different languages are suitable for tackling different problems. Polyglot persistence applies this same concept to databases, where you have an application that talks to different databases using each for a specific purpose—whether that is a NoSQL key-value store, document database or a traditional relational database.

Polyglot persistence means freedom for developers, who should not be forced to use a single corporate-approved database for every data-management task. Using more than one database type makes sense today, and the era of cloud computing is making it easier than ever.

(Related: How databases are changing the food industry)

The notion of having one or a very select number of corporate-approved databases is outdated, and that mindset needs to change. It’s inefficient and it’s probably costing your enterprise more than it should. There are great options available today that include both SQL and NoSQL data stores, including those that are optimized for both operational and analytic workloads, as well as both open-source databases and commercial database products.

There is some tradition of this concept in analytics, where data warehouses have long been used and where Hadoop is now making inroads, but that is a fairly narrow use case. Polyglot persistence is used much more broadly by the cool kids at Silicon Valley startups and big Internet companies, who are always looking for better tools to do their jobs. It has yet to catch on in enterprise computing, however, and I believe the time has come for that to change.

To get started, let's look at why using different databases makes sense by examining some use-case scenarios for specific databases and the benefits each provides.

  • For querying time series data, like a stock market ticker or real-time events from an Internet of Things (IoT) stream, a purpose-built time series database like InfluxDB can be many times faster than the typical relational database.
  • For peer-to-peer messaging, a distributed key-value store can be extremely fast and use dramatically fewer resources. That is why Apple chose the NoSQL database Riak as the backing store for its iMessage app.
  • To store social networking data, a graph database like Neo4j allows the program to traverse the networks of information much more easily and efficiently.
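To make the pattern concrete, here is a minimal sketch of what polyglot persistence looks like from the application's point of view: one facade routing each kind of data to the store shaped for it. This is purely illustrative; the class and method names are invented, and in-memory dictionaries stand in for real backends such as InfluxDB, Riak, or Neo4j.

```python
from collections import defaultdict

class PolyglotStore:
    """Toy facade: each workload is routed to the backend suited for it.
    Production code would wrap real clients (time series, KV, graph)."""

    def __init__(self):
        self.time_series = defaultdict(list)   # metric -> [(timestamp, value)]
        self.kv = {}                           # message id -> payload
        self.graph = defaultdict(set)          # user -> set of connections

    # Time-series workload: append-only writes, queried by time range.
    def record(self, metric, timestamp, value):
        self.time_series[metric].append((timestamp, value))

    def query_range(self, metric, start, end):
        return [(t, v) for t, v in self.time_series[metric] if start <= t <= end]

    # Key-value workload: fast point reads/writes, e.g. messaging payloads.
    def put_message(self, msg_id, payload):
        self.kv[msg_id] = payload

    def get_message(self, msg_id):
        return self.kv.get(msg_id)

    # Graph workload: traverse relationships, e.g. friends of friends.
    def add_friendship(self, a, b):
        self.graph[a].add(b)
        self.graph[b].add(a)

    def friends_of_friends(self, user):
        direct = self.graph[user]
        return {fof for f in direct for fof in self.graph[f]} - direct - {user}

store = PolyglotStore()
store.record("AAPL", 1, 101.0)
store.record("AAPL", 2, 102.5)
store.put_message("m1", "hello")
store.add_friendship("alice", "bob")
store.add_friendship("bob", "carol")
print(store.query_range("AAPL", 1, 2))
print(store.get_message("m1"))
print(store.friends_of_friends("alice"))
```

The point of the facade is that the application code asks for "ticks in a range," "this message," or "friends of friends," and each question lands on a store whose data model makes that question cheap.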

Beyond the benefits of choosing the right database technology for a given application’s data-management challenge, there may be economic reasons to choose a particular database platform. For example, in many cases, a freely available open-source database may be perfectly suited to the task at hand. This means your organization can avoid spending money on a commercial database when one isn’t required. Of course, at the time the application moves into production, you may choose to move to a commercial solution—but again, having that choice creates greater flexibility.

There is another side to this argument, of course. The most common objection is that the soft cost of the people needed to operate multiple databases can be quite substantial. Let's face it: DBAs don't come cheap—nor should they, given their expertise. Each database technology also comes with its own set of tooling and often requires specialized expertise to administer. Staffing a team of DBAs for a dozen different database technologies would be prohibitively expensive.

About Ken Rugg

Ken Rugg is a founder, CEO and board member of Tesora. Ken has spent most of his career around databases in technical, strategic and business generating roles. Tesora is the leading contributor to the OpenStack Trove project.