
By Tyler Schnoebelen, June 16, 2016

An AI Springtime

Earlier this week, I talked about the major themes in how the press has been covering artificial intelligence since 2015. This post puts the recent months in context, looking at whether we are in an AI springtime (tl;dr: we are) and when the AI winter(s) were. A big theme is going to be hype-and-disappointment, so we’ll close on “are we in hyper-hype right now?”

To begin, when did spring start? The NOW corpus we looked at last week makes this really easy to see—the lines in the image below show how frequent the phrase artificial intelligence is, per million words, in six-month chunks of time.


Figure 1: The phrase “artificial intelligence” in the NOW corpus—there’s a lot of press these days

The big jump seems to have happened in the first part of 2014, leading to a banner year in 2015. And the first half of this year scores even higher in mention rates.

Figure 2: The phrase “artificial intelligence” in Google Ngrams

To go further back in time, we can check out Google Ngrams, which visualizes the data inside Google Books. In that corpus, you’ll see a peak in 1988 for artificial intelligence, with a steep drop-off after that (the corpus only reports up to 2008). We’ll come back to what was happening in the late 1980s at the end of the post. Let’s keep time traveling.

Early days and early problems

The first use of the term artificial intelligence is usually attributed to John McCarthy in the mid-1950s. You can find a couple of early examples of it used in 1953 here and here, but it was a conference at Dartmouth that historians of AI usually point to as the start.

The summer at Dartmouth in 1956 had collaborative goals around topics that are still relevant today—for example, natural language processing, abstracting from sensory data, and having machines improve themselves. But there wasn’t actually that much collaboration. Researchers arrived and left at different times and mostly stuck to the projects they came with.

In 1966, the US National Academy of Sciences published a report on machine translation (the ALPAC report). What the group really seemed to want was practical English-Russian machine translation. A quick summary of the report is something like: It’s the ’60s! 76% of scientific articles are in English. Instead of trying to get machines to translate Russian science, just spend 200 hours learning Russian.

The 1966 report put the kibosh on a lot of funding for machine translation until a resurgence in the 1980s led by Japan. Today, you’ve no doubt encountered Google Translate and Bing Translator. And you’ve probably found them some combination of fun, useful, and lacking. They use statistical methods based on a whole lot of data. That’s an approach common in AI today, but as you can imagine, the scale of data and processing power back then was a fraction of what it is now. As we’ll see in a bit, while statistical/probabilistic methods came into other topics of computational linguistics in the late 1980s, statistical machine translation didn’t show up in many academic conferences and papers until around 2000.

The other famously negative report on AI happened in 1973. The UK’s Science Research Council commissioned Sir James Lighthill to assess the field of AI from the outside. Lighthill was an expert in fluid dynamics and, as it happens, an adventure swimmer—in the Tyrrhenian Sea north of Sicily, he swam around the island of Stromboli while its volcano erupted 14 times. He would later die swimming around an island in the English Channel.

Lighthill divided the field of AI into three parts: work on automation, work on central nervous systems/biology, and a “bridge” group between them that was really about robots. Lighthill’s report runs just over 10,000 words; 21 of them are some version of disappointing. There’s also a depressingly, a doubt, and five discouragings. And that’s just the d’s. Most damning was his assessment that the field was incoherent and that robotics hadn’t produced enough meaningful results.
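
That kind of tally is easy to replicate. Here’s a minimal sketch in Python, assuming you’ve saved the report’s text locally as lighthill.txt (a hypothetical filename); the stems searched for are the ones mentioned above:

```python
import re

# Hypothetical filename: assumes the report's text is saved locally.
with open("lighthill.txt", encoding="utf-8") as f:
    words = re.findall(r"[a-z']+", f.read().lower())

print("total words:", len(words))

# Count inflected forms by stem: disappointed, disappointing, disappointment...
for stem in ("disappoint", "discourag", "depress", "doubt"):
    print(stem, sum(1 for w in words if w.startswith(stem)))
```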

The report features replies from others, including Stuart Sutherland. Sutherland, a commanding figure in British psychology, wasn’t known to suffer fools gladly. He wrote in defense of basic AI research (and robots), laying out what it had already achieved and why it was worthwhile.

Sutherland wrote his rebuttal during a time in his life that he would document a few years later in his book Breakdown: A Personal Crisis and Medical Dilemma. (If you have an interest in psychotherapy, you can check out a back-and-forth about that here.) And while Lighthill uses the word human nine times to Sutherland’s one, what Sutherland is concerned with is specific humans: workers in AI who might make progress in automation or in biology but need a bridging category for pushing concepts along.

Sutherland was specific about what Britain needed to do: send more British researchers to the US and bring US researchers to Britain. There was also the matter of hardware. You really needed a DEC System 10 (also called a PDP-10 in the report). You know, this kind of thing:

Figure 3: A DEC System 10 (hardware in lieu of today’s cloud computing)

You couldn’t send information up into the cloud and have it processed there as you can today.

As I mentioned earlier, one of Lighthill’s leitmotifs was disappointment, which he said held true even from the standpoint of researchers themselves. He reports on the problem of wild and inflated predictions about what AI could do. For Lighthill, the problem was essentially one of combinatorial explosion: a self-organizing system can deal with the complexity of a tabletop of blocks or a game of checkers, but give it the world and there are too many possible ways to group things for software to figure out and for hardware to process.

Chess is well known as a game where the possibilities explode: there may be about 10^40 possible chess games of 40 moves or fewer. At the point that Lighthill was writing, computers were strong amateur players at chess. It wasn’t until 1997 that Deep Blue beat chess Grandmaster Garry Kasparov. The big news this year was Google’s AlphaGo winning at the game of Go, which has been considered an even more formidable challenge: for example, chess has 20 possible opening moves, while Go has 361. AlphaGo’s wins have been a huge part of the press on AI this year.
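
To get a feel for how fast those numbers run away, here’s a back-of-the-envelope sketch in Python. The branching factors and game lengths are common rough estimates (roughly 35 legal moves over about 80 plies for chess, 250 moves over about 150 plies for Go), not exact figures:

```python
# Rough game-tree size: branching_factor ** depth (both are loose estimates).
estimates = {
    "chess (~35 moves/position, ~80 plies)": 35 ** 80,
    "Go (~250 moves/position, ~150 plies)": 250 ** 150,
}
for game, size in estimates.items():
    # len(str(n)) - 1 gives the order of magnitude without float overflow
    print(f"{game}: ~10^{len(str(size)) - 1} possible games")

# Even the opening diverges sharply: 20 first moves in chess vs. 361 in Go.
print("positions after one move each: chess", 20 * 20, "vs. Go", 361 * 361)
```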

Let’s circle back to that peak in the late 1980s and the slump after it. Mentions in the popular press don’t point to any single precipitating event. But the books being published included Marvin Minsky’s The Society of Mind (about intelligence as agents) and the second edition of Eugene Charniak and colleagues’ Artificial Intelligence Programming.

In computational linguistics journals and conferences, meanwhile, papers on probabilistic models of language started to emerge in force in 1988. In 1987, you also see at least one substantial piece of press coverage on neural nets from the New York Times.

But in the mainstream press, you also get reports like Andrew Pollack’s from March 4, 1988, titled “Setbacks for Artificial Intelligence.” Like Lighthill before him, Pollack identifies one of the problems as hype: specifically, around understanding natural language, recognizing objects, and reasoning about whether to give someone a loan. Pollack also identifies the problem as one of hardware:

Corporate customers did not want to spend $50,000 to $100,000 for a special machine used by one person. They wanted artificial intelligence programs to run on their existing computers, such as I.B.M. mainframes and Digital Equipment minicomputers, to be shared by many users. Preferably, they wanted to develop artificial intelligence programs without requiring their own programmers to learn Lisp.

John Markoff—who is still writing for the New York Times on artificial intelligence—also wrote about fading optimism a few months later, in May of 1988. He talks more specifically about the problem with expert systems back then: you had to have computer scientists translate the knowledge of human specialists into programs. And it tended to work, as Lighthill had found in 1973, only on very narrow problems like diagnosing malfunctioning electronic equipment. He gets a great quote from one AI executive that cuts to the heart of the problem: “We don’t make artificially intelligent machines in much the same way that the Boeing Company doesn’t make artificial birds.”

About hyper-hype

Could we be in a hype cycle now? Well, the uptick in mentions of artificial intelligence certainly shows that the term has jumped in popularity: occurrences of the phrase now stand at 36.19 per million words in the NOW corpus. Gartner recently removed big data from its hype cycle report of emerging technologies. In the NOW corpus, big data peaked in the second half of 2014, but even at that maximum it was only at 12.81 mentions per million words. The term artificial intelligence has been around a lot longer and permeates the imagination of more than just technologists.
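
For the record, the normalization behind those figures is simple: a raw count divided by the corpus size, scaled to a million words. A minimal sketch, with hypothetical raw counts and corpus size chosen only to reproduce the rates quoted above:

```python
def per_million(count: int, corpus_size: int) -> float:
    """Normalized frequency: occurrences per million words of corpus text."""
    return count / corpus_size * 1_000_000

# Hypothetical counts against a hypothetical billion-word corpus,
# chosen only to reproduce the rates quoted above.
print(per_million(36_190, 1_000_000_000))  # 36.19: "artificial intelligence"
print(per_million(12_810, 1_000_000_000))  # 12.81: "big data" at its 2014 peak
```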

There are still people who are interested in building Commander Data’s positronic brain. But even they tend to think about the work in front of them as a series of more modest steps. So what you see are efforts in self-driving cars and customer support routing—these are narrower in scope than creating artificial humans. And they benefit from hardware that really is appreciably different from the past—there’s more data, there’s more capacity to crunch it, and while we probably don’t want to say that AI itself has been democratized, resources are much more widely available.

When Lighthill used the following phrase, he felt it was cliché: “one swallow does not make a summer.” The phrase actually peaked in the 1920s, so it may be as new to you as it was to me. Lighthill was talking about a single impressive dissertation that even he was quite taken with. But his point was that one great paper isn’t enough.

What seems to predict a winter is hype that is out of line with reality, where promises and predictions keep coming but with very few results. Never mind that the promises and predictions rarely come from researchers themselves; they circulate and disappoint all the same. The current hype around AI tends to be much narrower in its focus, and if you’re paying attention, there are a fair number of swallows. If grander and grander claims start being made, that’s when the air will start to feel like autumn. And perhaps if some of the swallows turn out to be drones.