Machine learning is, regrettably, not one of the day-to-day chores assigned to most programmers. However, with data volumes exploding and high-profile successes such as IBM’s Jeopardy-beating Watson and the recommendation engines of Amazon and Netflix, the odds are increasing that ML’s opportunity will knock on your door one day.

From the 1960s to the 1980s, the emphasis of artificial intelligence was on “top-down” approaches, in which knowledge from domain experts was somehow transcribed into a fixed set of rules and the relations among them. Often, this knowledge took the form of many small “if-then” rules, and the “magic sauce” of expert systems was that they could draw conclusions by automatically chaining together the execution of those rules whose “if” conditions were satisfied. The technology for inferencing worked well enough, but it turned out that very large rulebases were hard to debug and maintain, while modest rulebases didn’t produce many compelling applications (for instance, my expert system for identifying seabirds failed to make me a billionaire).
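
To make that chaining concrete, here is a minimal forward-chaining loop in Python. The bird-identification rules and facts are hypothetical stand-ins; real expert-system shells layered conflict resolution, certainty factors, and explanation facilities on top of a loop like this.

```python
# Each rule is (set of antecedents, conclusion). Hypothetical seabird rules.
RULES = [
    ({"webbed feet", "hooked bill"}, "is a seabird"),
    ({"is a seabird", "black cap", "forked tail"}, "is a tern"),
    ({"is a seabird", "tube nose"}, "is a petrel"),
]

def forward_chain(facts):
    """Fire every rule whose 'if' side is satisfied until nothing new is learned."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for antecedents, conclusion in RULES:
            if antecedents <= facts and conclusion not in facts:
                facts.add(conclusion)
                changed = True
    return facts

print(sorted(forward_chain({"webbed feet", "hooked bill", "black cap", "forked tail"})))
# ['black cap', 'forked tail', 'hooked bill', 'is a seabird', 'is a tern', 'webbed feet']
```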

The late 1980s saw a shift toward algorithms influenced by biological processes, and the rebirth of artificial neural networks (which were actually developed in the early 1960s), genetic algorithms, and such things as ant-colony and flocking algorithms. There was a flurry of interest in fuzzy logic, which was particularly well suited to control systems because it provided continuous response curves.
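
The appeal for control systems is easiest to see in a toy example. In the sketch below (the membership ranges and fan speeds are invented for illustration), two fuzzy rules fire partially and are blended, so the output ramps smoothly with temperature instead of jumping at a threshold.

```python
def tri(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], rising to 1 at the peak b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def fan_speed(temp_c):
    """Blend two hypothetical rules: 'if warm then medium speed', 'if hot then full speed'."""
    warm = tri(temp_c, 15, 25, 35)            # degree to which the temperature is "warm"
    hot = tri(temp_c, 25, 40, 55)             # degree to which it is "hot"
    rules = [(warm, 40.0), (hot, 100.0)]      # (firing strength, output speed in %)
    total = sum(strength for strength, _ in rules)
    if total == 0:
        return 0.0
    # Weighted-average defuzzification: a continuous response curve, not a step.
    return sum(strength * speed for strength, speed in rules) / total

for t in (20, 26, 30, 34, 38):
    print(t, round(fan_speed(t), 1))          # speed rises smoothly as t increases
```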

The 1990s saw increasingly sophisticated algorithms in all these areas and began the march toward today’s world of machine learning, with its emphasis on statistical techniques applied to large datasets. Perhaps the most significant development was the invention of support vector machines, which provide a robust way to determine maximum-margin hyperplanes that separate classes in high-dimensional feature spaces.
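
If that sentence reads as word salad, a small example may help. Here’s a minimal sketch using scikit-learn (assuming NumPy and scikit-learn are installed; the two-cluster data is made up), with a linear kernel so the hyperplane is explicit:

```python
import numpy as np
from sklearn.svm import SVC

# Toy two-class data: two Gaussian clusters in 2-D. Real problems routinely
# involve hundreds or thousands of dimensions, where you can't plot your way out.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)

clf = SVC(kernel="linear", C=1.0)   # linear kernel: a flat hyperplane w.x + b = 0
clf.fit(X, y)

print("hyperplane normal w:", clf.coef_[0])           # orientation of the separating plane
print("intercept b:", clf.intercept_[0])
print("support vectors:", len(clf.support_vectors_))  # only these points pin down the margin
print("prediction for (2, 2):", clf.predict([[2.0, 2.0]])[0])
```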

As that last sentence demonstrated, it doesn’t take long before the techniques of AI and ML flirt with becoming unintelligible. A sentence can often be a confusing mashup of mathematics, jargon from AI’s five-decade history, and flawed metaphor (artificial neural networks aren’t a great deal like real-world neurons, and genetic algorithms don’t have much in common with meiosis and DNA recombination). But while there is a temptation to use a technique as a black box, I strongly believe that sustained success requires gaining an intuition into the underlying technique. That intuition doesn’t need to be an academic-level understanding of the mathematics, but it does need to be at a level where you can make reasonable guesses about what type and volume of data you need, what kind of preprocessing you might need and why, and what problems are likely to crop up during processing.

Neural nets and genetic algorithms were hot topics when I was editing “AI Expert,” but 20 years later, the most common StackOverflow.com questions about these techniques treat them as black boxes and often reveal misguided use cases. Genetic algorithms, in particular, seem to be woefully misunderstood. If you’re thinking about solving a problem with a GA, please buy a copy of Goldberg’s “Genetic Algorithms in Search, Optimization, and Machine Learning” or Mitchell’s “An Introduction to Genetic Algorithms,” and spend two days reading before you begin coding. I guarantee you’ll get results faster than if you start without understanding the model.
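
For what it’s worth, the core model those books explain is small enough to sketch: a population of candidate solutions, a fitness function, and repeated selection, crossover, and mutation. The toy “count the 1-bits” objective below is just a stand-in for a real fitness function:

```python
import random

random.seed(1)
GENOME_LEN, POP_SIZE, GENERATIONS, MUTATION_RATE = 32, 60, 80, 0.01

def fitness(genome):
    """Toy objective: number of 1-bits. A real problem supplies its own scoring."""
    return sum(genome)

def tournament(pop, k=3):
    """Tournament selection: pick the fittest of k randomly chosen individuals."""
    return max(random.sample(pop, k), key=fitness)

def crossover(a, b):
    """Single-point crossover: splice a prefix of one parent onto a suffix of the other."""
    point = random.randrange(1, GENOME_LEN)
    return a[:point] + b[point:]

def mutate(genome):
    """Flip each bit with a small probability."""
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit for bit in genome]

pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]
for _ in range(GENERATIONS):
    pop = [mutate(crossover(tournament(pop), tournament(pop))) for _ in range(POP_SIZE)]

best = max(pop, key=fitness)
print("best fitness:", fitness(best), "out of", GENOME_LEN)
```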
