Transactional memory may be the most understandable of the potential “silver bullets” for the highly parallel manycore world we’ve entered. Everyone is familiar with the concept of database transactions, at least at the basic level of “if there’s a conflict, roll back to the boundary.” Many SD Times readers will be aware of how much more complex things can get with nested transactions, retries and guarantees, but nonetheless, the transaction concept is about as straightforward an idea as you have in the area of concurrency. “Transactional memory” applies the idea to the memory conflicts that arise in parallel programming.

The reason this has “silver bullet” potential is that, by far, the single biggest problem in parallel programming is shared, mutable state. Shared mutable state is to concurrency as raw pointers are to memory management: a capability that, yes, has performance benefits and that, yes, can be tamed if the context is controlled very tightly.

As with manual memory management, the problem is that context isn’t very tightly controlled in the real world, and libraries and hastily patched code are sometimes not airtight. And just as pointer bugs can be notoriously difficult to track down, appearing and disappearing with little predictability, so too with concurrency bugs. (If anything, concurrency bugs can be even more fickle and intermittent.)

It’s worth emphasizing that the problem with shared mutable state requires both conditions: a value that never changes can be shared across threads in perfect safety once it’s been assigned, and everyone is familiar with variables that, within a single thread, change their values over time without trouble. (Functional programmers will argue that even within a single thread, one should prefer immutable values to variables, but that’s a discussion for another time.)

However, with shared mutable state, if you do not have higher-level constructs to enforce rules, you can never rely on a variable having the value you expect. Perhaps another thread came along and modified it, even if you just assigned the variable in the previous line of code. This is the same problem whether you’re talking about a screen coordinate in memory or a bank balance in a database.
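The hazard is easy to see in a short sketch. This is a minimal Python illustration (not from the original column): the unsafe version performs a read-modify-write on a shared counter in two steps, so a second thread can slip in between the read and the write and updates are silently lost; the locked version makes the same sequence atomic.

```python
import threading

counter = 0
lock = threading.Lock()

def unsafe_increment(n):
    # Classic lost update: another thread can run between the read
    # and the write, so some increments silently disappear.
    global counter
    for _ in range(n):
        tmp = counter       # read the shared value
        counter = tmp + 1   # write back a possibly stale value

def safe_increment(n):
    # The lock turns the read-modify-write into one atomic step.
    global counter
    for _ in range(n):
        with lock:
            counter += 1

threads = [threading.Thread(target=safe_increment, args=(100_000,))
           for _ in range(2)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

With the lock, the final count is exactly the sum of the increments; swap in `unsafe_increment` and the total is no longer guaranteed, and the failures come and go unpredictably from run to run.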

The answer, in databases, is to add a higher-level construct, the transaction that delimits a logical work unit: “Begin here, end here, and guarantee that the variables I use in between are consistent.” Those with gray hairs will remember databases where such commands would lock the entire row or table; indeed, the performance and deadlock problems that arose are how many of us acquired some of those gray hairs. Today, it’s much more common for databases to use “optimistic locking.” With optimistic locking, the shared values that are to be read or written are tracked over the course of the transaction. If they have not been modified by anyone else, the transaction succeeds; if they have, the transaction rolls back and can be retried.
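The same read-track-commit-or-retry cycle can be sketched in a few lines. This is a toy Python model (the names `VersionedCell` and `atomically` are my own, not any database’s API): each cell carries a version number, a commit succeeds only if the version is unchanged since the read, and on conflict the caller simply retries.

```python
import threading

class VersionedCell:
    """A single shared value with optimistic, version-checked commits."""
    def __init__(self, value):
        self._lock = threading.Lock()
        self._value = value
        self._version = 0

    def read(self):
        # Snapshot the value together with its version.
        with self._lock:
            return self._value, self._version

    def try_commit(self, new_value, expected_version):
        # Succeed only if nobody committed since our read; else the
        # caller must retry with a fresh snapshot.
        with self._lock:
            if self._version != expected_version:
                return False
            self._value = new_value
            self._version += 1
            return True

def atomically(cell, fn):
    # Optimistic retry loop: read, compute, attempt to commit.
    while True:
        value, version = cell.read()
        if cell.try_commit(fn(value), version):
            return

balance = VersionedCell(0)

def deposit_many(n):
    for _ in range(n):
        atomically(balance, lambda v: v + 1)

workers = [threading.Thread(target=deposit_many, args=(1_000,))
           for _ in range(4)]
for w in workers:
    w.start()
for w in workers:
    w.join()
```

No thread ever blocks another for the duration of its whole “transaction”; conflicts cost only a retry, which is exactly the trade optimistic schemes make when contention is low.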

About Larry O Brien