The shipping versions of Microsoft’s flagship languages, C# and Visual Basic, represent the philosophy of “static typing where possible, dynamic typing when necessary.” The phrase comes from the title of a brief but influential 2004 paper by Erik Meijer, which argued that the mainstream was clamoring for the behaviors of functional programming, a discipline that has done a wonderful job of hiding its benefits in a fog of intimidating jargon (contra- and co-variance, higher-order functions, parametric polymorphism, etc.).

The most visible success of the philosophy is Language Integrated Query (LINQ), which can be pitched in an elevator as “It’s like putting SQL in the language” but which is also an effective training ground for a number of important functional programming concepts. The “language integrated” part of LINQ is only possible because the C# and Visual Basic languages were modified to make functions easy to manipulate.
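As a minimal sketch of that elevator pitch: the query syntax below looks like SQL embedded in C#, but the compiler translates each clause into a call to a higher-order library method such as `Where` or `Select` (the variable names here are illustrative).

```csharp
using System;
using System.Linq;

class LinqSketch
{
    static void Main()
    {
        var numbers = new[] { 1, 2, 3, 4, 5, 6 };

        // "Like putting SQL in the language" -- but each clause is
        // really a function (a lambda) passed to a library method.
        var evensSquared = from n in numbers
                           where n % 2 == 0
                           select n * n;

        Console.WriteLine(string.Join(", ", evensSquared)); // 4, 16, 36
    }
}
```

The `where` clause becomes a call to `Enumerable.Where` with a lambda, and the `select` clause a call to `Enumerable.Select` — which is exactly the training-ground effect described above.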

One of these concepts is the “closure,” which, simply put, is a function that “captures” variables in its enclosing scope; closures have become commonplace among users of LINQ. A “continuation” is a closure that a function calls when it is done with its calculation. This is often contrasted with the traditional idea of a functional “return,” but it may help to think of a return as a limited form of continuation, one whose behavior is “start executing at the place where the function was called.” What if such a “return continuation” were only one of several options for what happens at the end of a function?
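The two ideas can be sketched in a few lines of C# (the method and variable names are invented for illustration): instead of returning its result, `Add` hands the result to whatever continuation the caller supplies.

```csharp
using System;

class ContinuationSketch
{
    // Instead of returning a value, Add passes its result to a
    // continuation: a function that says "what happens next."
    static void Add(int a, int b, Action<int> continuation)
    {
        continuation(a + b);
    }

    static void Main()
    {
        int callCount = 0; // lives in the enclosing scope

        // The lambda is a closure: it "captures" callCount from
        // the scope around it and can read and write it.
        Action<int> printResult = sum =>
        {
            callCount++;
            Console.WriteLine("sum = " + sum + " (call #" + callCount + ")");
        };

        // An ordinary return is just one special continuation:
        // "resume at the call site." Here we substitute our own.
        Add(2, 3, printResult);   // sum = 5 (call #1)
        Add(10, 20, printResult); // sum = 30 (call #2)
    }
}
```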

Rather than continue throwing out jargon, think of the asynchronous programming APIs that have been introduced in more recent .NET libraries, such as the asynchronous I/O libraries. In those libraries, you use a callback or a closure to specify how your logic “continues” after some potentially long-running process (network or file I/O). Voila: continuations.
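That callback style can be sketched with the classic `BeginRead`/`EndRead` pattern from `System.IO.Stream`; a `MemoryStream` stands in here for a real network or file stream, and the buffer setup is illustrative.

```csharp
using System;
using System.IO;
using System.Text;
using System.Threading;

class CallbackSketch
{
    static void Main()
    {
        var data = Encoding.ASCII.GetBytes("hello, continuations");
        var stream = new MemoryStream(data);
        var buffer = new byte[data.Length];
        var done = new ManualResetEvent(false);

        // BeginRead starts the read and takes a callback -- our
        // "continuation" -- that runs when the read completes.
        stream.BeginRead(buffer, 0, buffer.Length, asyncResult =>
        {
            int bytesRead = stream.EndRead(asyncResult);
            Console.WriteLine(Encoding.ASCII.GetString(buffer, 0, bytesRead));
            done.Set();
        }, null);

        done.WaitOne(); // keep Main alive until the continuation fires
    }
}
```

Note how the “rest of the program” has migrated into the lambda, which is exactly the structural awkwardness the next paragraph complains about.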

If you’ve worked with those libraries, you know that the benefits of asynchrony come with a certain amount of complexity: Creating the continuation can be a little confusing, and your code blocks can be hard to parse (“Wait, is that bracket defining a loop or a function boundary?”). Enter C# 5.0.

The new keywords async and await are entirely about continuations. A method marked with async is rewritten by the compiler in “Continuation Passing Style” (a style so important It Must Be Capitalized!), which, as we just discussed, centers on the idea that when a function ends, it might perform a “return continuation,” or it might call another type of continuation: that is, another function, which is in fact a closure, which means that it has captured the variables it needs.

The rewriting happens at every await keyword: It basically means, “If the thing that I’m awaiting is not yet ready, turn the rest of this async function into a continuation and call it when the awaited thing is ready.”
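As a minimal sketch of that split point (the method names are invented, and `async Task Main` assumes a modern C# compiler), everything after the `await` below is conceptually the continuation:

```csharp
using System;
using System.Threading.Tasks;

class AwaitSketch
{
    // "async" marks the method for compiler rewriting; by itself
    // it does not move anything onto another thread.
    static async Task<int> SumAfterDelayAsync(int a, int b)
    {
        // At this await the compiler splits the method: the code
        // below becomes, in effect, a continuation invoked when
        // the awaited task completes.
        await Task.Delay(100);
        return a + b;
    }

    static async Task Main()
    {
        int sum = await SumAfterDelayAsync(2, 3);
        Console.WriteLine(sum); // 5
    }
}
```

If the awaited task has already completed, execution simply continues synchronously — the continuation machinery only kicks in when the “awaited thing” is not yet ready, just as the paragraph above describes.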

The keywords are, admittedly, somewhat counter-intuitive. The async keyword doesn’t mean, “This method does asynchronous stuff.” The await keyword does not mean “block” (nor does it mean “accept an IOU as a proxy for a future value”). What the keywords mean is that the compiler is going to produce something with a very different structure from the one implied by the intuitive indents and outdents of your flow-of-control statements. The single logical function you write will be rewritten as a series of functions, the boundaries of which are potential starting and stopping points for asynchronous calculations.
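To make the rewrite concrete, here is a rough, hand-written approximation of what the compiler does. This is a sketch under simplifying assumptions: the real output is a state machine rather than a `ContinueWith` chain, and the method names are invented.

```csharp
using System;
using System.Threading.Tasks;

class RewriteSketch
{
    // What you write: one logical method with an await in the middle.
    static async Task<string> FetchLabelAsync()
    {
        int n = 21;                  // "before" the await
        await Task.Delay(50);
        return "answer: " + (n * 2); // "after" the await
    }

    // Roughly what the compiler produces (heavily simplified): the
    // code after the await becomes a separate continuation, and the
    // local variable n is captured so the closure can still see it.
    static Task<string> FetchLabelCps()
    {
        int n = 21;                  // captured by the closure below
        return Task.Delay(50).ContinueWith(_ => "answer: " + (n * 2));
    }

    static async Task Main()
    {
        Console.WriteLine(await FetchLabelAsync()); // answer: 42
        Console.WriteLine(await FetchLabelCps());   // answer: 42
    }
}
```

The local `n` no longer lives on a simple stack frame — it has been hoisted into a closure so it survives across the asynchronous boundary, which is why the generated variables and assignments “don’t map one-to-one to the tokens in your code.”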

It’s confusing on first reading (maybe on second and third readings, too). But the brilliant thing is that it’s not nearly as confusing as it would have been a decade ago. Anders Hejlsberg, the lead architect of C#, is the J.K. Rowling of language design (I’d go further and say he’s the Patrick O’Brian of language design if the great Aubrey-Maturin novels were more widely known). It’s not just that he’s had a plan from the beginning (the delegate keyword was a bone of contention in the Sun-Microsoft Java lawsuit of the 1990s), it’s that he’s introduced the complexity gradually.

Ask your average non-Microsoft programmer, and they’ll tell you that C# is “like Java” or even a “ripoff” of that language. Whatever amount of truth that had in 2001, it’s flat-out wrong in 2011. C# has moved through “static when possible, dynamic when necessary,” to “object-oriented when possible, functional when necessary.”

It’s taken C# a decade of iterative changes, each helpful in its own right, to get to this point. But think about what you would have said a decade ago when told that “the compiler’s going to rewrite your function so that it’s a whole bunch of smaller functions, and it’s going to create a whole bunch of variables and assignments that don’t really map one-to-one to the tokens in your code.” Would that have sounded like a mainstream C-derived language to you? Would people have accepted it, or freaked out about performance and how important it was that the compiler “do what it’s told”?

Visual Basic, which famously alienated its users with its move to object-orientation and .NET, is an unfortunate object lesson in what can happen when language designers move too fast for their audiences.

Will the next version of Microsoft’s mainstream languages (I believe that VB will have the same capabilities, but I am not sure of the details) lead to some confusion? Yes. Will there be dangerous aspects? Yes. (I don’t even want to think what an exception stack trace is going to look like.) Is it a silver bullet that makes manycore programming trivial? Absolutely not.

But does it show that Microsoft can still produce software that is both excellently designed and pragmatically useful? Absolutely yes.

Larry O’Brien is a technology consultant, analyst and writer. Read his blog at