SAN FRANCISCO — In-memory databases aren’t new. At OpenWorld, Oracle made them newer.
For years, development teams have been able to use in-memory databases, such as SQLite, McObject’s eXtremeDB, and many others, to boost performance far beyond what can be done with disk-based rows, tables and indices.
When well designed and well implemented, an in-memory database can be several orders of magnitude faster than a disk-based data store. By eliminating the overhead of caching, buffering, and multi-step requests, an in-memory database can perform reads and writes in a single direct operation.
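The difference is easy to see with SQLite, mentioned above, which can run entirely in RAM. A minimal sketch using Python's standard sqlite3 module (table and row counts are arbitrary; timings will vary by machine):

```python
import sqlite3
import time

# ":memory:" keeps every page in RAM, so reads and writes
# skip the disk I/O path entirely.
mem = sqlite3.connect(":memory:")
mem.execute("CREATE TABLE events (id INTEGER PRIMARY KEY, payload TEXT)")

start = time.perf_counter()
mem.executemany(
    "INSERT INTO events (payload) VALUES (?)",
    [("event-%d" % i,) for i in range(100_000)],
)
mem.commit()
elapsed = time.perf_counter() - start

(count,) = mem.execute("SELECT COUNT(*) FROM events").fetchone()
print("inserted %d rows in %.3fs" % (count, elapsed))
```

The same script pointed at a file on disk would pay for buffering and fsync on every commit; in-memory, each statement operates directly on pages already resident in RAM.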
Low-overhead persistence mechanisms, such as transaction logs or memory snapshots, can ensure that if there’s a hardware or software failure, the data is as secure as data written to magnetic storage.
Once upon a time, in-memory databases were relatively small: megabytes, and then gigabytes. Because memory was expensive, and many systems had a limit on the amount of installable RAM, in-memory databases were reserved for only the most important applications. It was often easier to stick with standard disk-based databases and use main memory to store the index.
That changed. Recently, in-memory databases have scaled to terabyte size, and with the drop in RAM prices, they have become somewhat affordable. Sure, RAM is considerably more expensive than disk. But it’s not as bad as it used to be. And given that in-memory databases can do writes just about as fast as they can do reads, the technology is compelling.
If a technology is large-scale and compelling, it’s attractive to Oracle, which unveiled an in-memory option for Oracle DB12c. According to the company, “The unique approach of Oracle Database In-Memory leverages a new in-memory column store format to speed up analytic, data-warehousing, and reporting workloads, while also accelerating transaction processing (OLTP) workloads.”
Oracle further says:
Real-Time Analytics Performance: The Oracle Database In-Memory option dramatically accelerates the performance of analytic queries by storing data in a highly optimized columnar in-memory format. Analytic operations run in real time and return completely current and consistent data.
Acceleration for All Workloads: A unique “dual-format” approach ensures outstanding performance and complete data consistency for all workloads. Oracle Database In-Memory automatically maintains data in both the existing Oracle row format for OLTP operations, and a new, purely in-memory column format optimized for analytical processing. Both formats are simultaneously active and transactionally consistent. Unlike other in-memory approaches that represent data exclusively in column format, thus delivering poor OLTP performance, Oracle Database In-Memory eliminates the need for expensive overhead to maintain analytic indexes, and therefore greatly accelerates OLTP operations.
Applications Just Work Faster: Oracle Database In-Memory enables applications to automatically and transparently take advantage of in-memory processing. By simply enabling Oracle Database In-Memory, existing applications are accelerated without requiring any changes. New applications that were previously impractical due to performance limitations can be developed with existing tools in use today. All of Oracle’s industry-leading availability, security and management features continue to work unchanged.
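The row-versus-column distinction at the heart of Oracle's pitch can be illustrated in a few lines of plain Python. This sketch shows the layout difference only; it is not Oracle's implementation:

```python
from array import array

N = 100_000

# Row format: each record is one unit, good for OLTP-style access
# ("fetch record 42"), but an analytic aggregate must walk every
# record and pick out a single field.
rows = [{"id": i, "region": i % 4, "amount": float(i)} for i in range(N)]
row_total = sum(r["amount"] for r in rows)

# Column format: one contiguous array per field. The same aggregate
# scans a single tightly packed column, which is cache-friendly and
# amenable to vectorized execution.
amounts = array("d", (float(i) for i in range(N)))
col_total = sum(amounts)

assert row_total == col_total
```

Keeping both layouts live and transactionally consistent at once, as the announcement describes, is the hard part; the layouts themselves are this simple.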
In-memory databases are a game-changer for the standard enterprise RDBMS. While Oracle isn’t the first to market, if the technology works as promised, Oracle’s new feature will make in-memory databases mainstream.
Alan Zeichick, founding editor of SD Times, is principal analyst of Camden Associates.