Regarding Andrew Binstock’s column (“The Intractability of Parallel Programming”), the actor model bears distinct similarities to dataflow architectures and dataflow analysis theory dating from the 1970s. (See C.A.R. Hoare’s “Communicating Sequential Processes” paper and Tom DeMarco’s book on dataflow-based design.)
In particular, the independent “receive/compute/send” model, the lack of need for explicit low-level synchronization, and automatic concurrency (what the article’s author appears to be lusting after) have been provided by most dataflow architectures since the early 1980s. For a dataflow-driven application framework that is automatically synchronized (from the programmer’s perspective), with fan-out and fan-in, see the POSIX/UNIX process I/O model (not the more recent POSIX threads model).
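As a minimal sketch of that point (my illustration, not anything from the column): two processes connected by a pipe form a two-stage dataflow in which the kernel itself blocks the writer when the pipe is full and the reader when it is empty, so the stages stay synchronized with no locks or condition variables in the program.

    /* Two-stage dataflow over a POSIX pipe: producer and consumer
     * are synchronized by the kernel, not by explicit primitives. */
    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>
    #include <sys/wait.h>

    int main(void)
    {
        int fd[2];
        if (pipe(fd) == -1) { perror("pipe"); exit(EXIT_FAILURE); }

        pid_t pid = fork();
        if (pid == -1) { perror("fork"); exit(EXIT_FAILURE); }

        if (pid == 0) {                 /* child: "receive/compute" stage */
            close(fd[1]);               /* not writing */
            char buf[64];
            ssize_t n;
            while ((n = read(fd[0], buf, sizeof buf)) > 0)  /* blocks until data arrives */
                fwrite(buf, 1, (size_t)n, stdout);
            close(fd[0]);
            return 0;
        }

        /* parent: "compute/send" stage */
        close(fd[0]);                   /* not reading */
        for (int i = 0; i < 5; i++) {
            char msg[32];
            int len = snprintf(msg, sizeof msg, "packet %d\n", i);
            write(fd[1], msg, (size_t)len);   /* blocks if the pipe is full */
        }
        close(fd[1]);                   /* EOF tells the consumer to stop */
        wait(NULL);
        return 0;
    }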
I know there are times when sharing an address space (i.e., threads), in whole or in part, is more time-efficient than piping data packets between processes (each with its own address space). But in many, many cases the implied “copy” can be partially (as in the UNIX kernel) or fully elided by the runtime system. Granted, inter-address-space context switches cost more than intra-address-space context switches, and that cost might become intolerable on 200-core machines sharing a globally coherent memory bus, but for the present it is not an issue for most applications.
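To illustrate what eliding the copy can look like in practice (a Linux-specific assumption on my part; the letter speaks only of “the runtime system”), splice(2) moves data from a file descriptor into a pipe and out again by passing references to kernel buffer pages, so the bytes never pass through user space:

    /* Sketch: copy stdin to stdout without a user-space copy, assuming
     * stdin is redirected from a regular file, e.g. ./a.out < in > out */
    #define _GNU_SOURCE
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void)
    {
        int p[2];
        if (pipe(p) == -1) { perror("pipe"); return 1; }

        for (;;) {
            /* file -> pipe: kernel pages are referenced, not copied out */
            ssize_t in = splice(STDIN_FILENO, NULL, p[1], NULL,
                                65536, SPLICE_F_MOVE);
            if (in <= 0) break;
            /* pipe -> output: drain exactly what was spliced in */
            while (in > 0) {
                ssize_t out = splice(p[0], NULL, STDOUT_FILENO, NULL,
                                     (size_t)in, SPLICE_F_MOVE);
                if (out <= 0) { perror("splice"); return 1; }
                in -= out;
            }
        }
        return 0;
    }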
For that reason and many others, globally coherent memory-bus hardware architectures are likely to face competition from explicit message-passing hardware architectures in the future.
In summary: Multi-threading is not always the best way to get concurrency. Multi-process with auto-synchronized dataflow is often easier and more reliable (and recovers more gracefully from partial failures), and it is considerably easier to distribute across a network. A huge number of developers have (sometimes unwittingly!) been using such a concurrency model for decades already.
Mike Spooner
United Kingdom