How can programmers benefit from “the year of Neural Nets”? Statistical machine-learning techniques have been surging in popularity in academic settings for years, but 2015 was a watershed in terms of industry awareness and deployment. Not long ago, the term “Deep Neural Networks” seemed about as dubious an explanation as “Applied Phlebotinum,” but now Google’s open-source release of TensorFlow has prompted celebration and the rewriting of business plans. (Although the consensus seems to be that Theano may hold multiple advantages.)

It’s important to understand that neural networks are not on the threshold of replacing traditional programming techniques, much less of achieving general intelligence.

Neural networks do not do any symbolic processing and are neither “logical” nor “creative.” They are “universal approximators,” meaning they can approximate a given function to arbitrary accuracy. Inputs are swept forward through massive numbers of amplifier weights and transfer functions, and the output is the (surprisingly effective) mapping of this gestalt into meaningful results. The “intelligence” in artificial neural networks stems from the manner in which the amplifier weights are trained and from the specific topology used to map inputs to outputs (for instance, sliding windows that pass repeatedly over small sections of an image).
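
To make that feed-forward sweep concrete, here is a minimal sketch in Python (my own illustration, with arbitrary placeholder weights rather than trained ones): a small input vector is multiplied through two layers of weights, each followed by a sigmoid transfer function.

import numpy as np

def sigmoid(x):
    # A common transfer (activation) function.
    return 1.0 / (1.0 + np.exp(-x))

def feed_forward(inputs, hidden_weights, output_weights):
    # Each layer is a weighted sum of its inputs pushed through the transfer function.
    hidden = sigmoid(inputs @ hidden_weights)
    return sigmoid(hidden @ output_weights)

# Three inputs, four hidden units, one output; random stand-ins for trained weights.
rng = np.random.default_rng(0)
w_hidden = rng.normal(size=(3, 4))
w_output = rng.normal(size=(4, 1))
print(feed_forward(np.array([0.2, -1.0, 0.5]), w_hidden, w_output))

Training would adjust w_hidden and w_output so that the mapping approximates the target function; the runtime pass itself stays strictly feed-forward.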

While recent advances in how networks are configured and trained have led to fantastic results, there are limits to what can be done if, at runtime, the process is strictly feed-forward. We are all walking proof that sophisticated symbolic processing can arise on a substrate of deeply interconnected neurons whose behavior has rough analogies to the weights and transfer functions of artificial neural nets, but it is certain that complicated feedback modes are required for anything remotely in the realm of “higher-level thinking.” Understanding such modes is almost entirely speculative and far beyond the state of the art in artificial neural networks.

All of which is to say that artificial neural networks and other statistical machine-learning technologies are not going to start competing for software development jobs anytime soon. Advances in symbolic processing seem to be languishing, except, perhaps, for IBM’s Watson project, which seems to be establishing a coherent niche for integrating large input corpora to produce correct answers.

Data does not influence the way we develop software. There is virtually no development “best practice” based on statistically meaningful data (the exception being “shorter user-feedback cycles are better,” which I feel has been amply supported). And yet two of the most popular websites in the world are potential treasure troves: Stack Overflow (57th in the world, according to Alexa.com) and GitHub (85th). And it’s hard to imagine a task better suited to instrumentation and analytics than the computer-based task of writing code (always bearing in mind that this is just a subset of the job of software development).

It’s easy to write scripts that process Stack Overflow data by tag or text. While this can be immediately helpful for identifying popularity trends, it barely scratches the surface, given that Stack Overflow is literally a database of cognitive stumbling blocks. Every question on Stack Overflow is a testament to something not being clear (to return to the topic of my last column).
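
As a rough sketch of the sort of tag-counting script I have in mind (assuming the layout of the public Stack Exchange data dump, where each question row in Posts.xml carries a Tags attribute such as “<python><pandas>”):

import re
import xml.etree.ElementTree as ET
from collections import Counter

TAG_PATTERN = re.compile(r"<([^>]+)>")

def count_question_tags(posts_xml_path):
    counts = Counter()
    # iterparse streams the file, so the multi-gigabyte dump never has to fit in memory.
    for _, elem in ET.iterparse(posts_xml_path, events=("end",)):
        if elem.tag == "row" and elem.get("PostTypeId") == "1":  # "1" marks a question
            counts.update(TAG_PATTERN.findall(elem.get("Tags", "")))
        elem.clear()  # discard each element once counted
    return counts

if __name__ == "__main__":
    for tag, n in count_question_tags("Posts.xml").most_common(20):
        print(tag, n, sep="\t")

Grouping the same counts by creation date, or running the question text itself through analysis, is where the more interesting investigation of those cognitive stumbling blocks would begin.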