
Superintelligent AI Is Still a Myth

Neither the old classical approaches nor the new data scientific angle can make any headway on good ol’ common sense

Squirrels, it turns out, are superintelligent. Memorizing the locations of sometimes hundreds of buried nuts, squirrels have superhuman recall. They could easily vanquish Nobel Prize winners, if the Nobelists found themselves searching for such buried treasures, head-to-head with rodents.

This is just one example that Kevin Kelly, co-founder of Wired Magazine, uses to explode the myth of a looming superintelligent artificial intelligence (AI). In his bombshell 2017 article (written at a time when AI was on an unmistakable, meteoric upswing), “The AI Cargo Cult: The Myth of a Superhuman AI,” Kelly argued that the concept of machine intelligence and especially “superintelligence” was poorly defined.

Superintelligence assumes a linear apples-to-apples comparison between lesser and greater intelligent systems—organic or mechanical. But in the entire history of scientific and other treatments of the notoriously amorphous concept of “intelligence,” such linear thinking has been rejected. Our notion of intelligence extrapolates from the human case, but even granting this admittedly suspect logic (and what other logic is available?), we don’t extrapolate consistently or linearly. Hence, the superintelligent squirrel.

Kelly, who in earlier writing called the modern world tantamount to the “intelligenization” of everything (his word), now asked us to consider a core conceit among AI enthusiasts a bit more carefully: AI will be smarter than humans in what way? What do we all mean?

The problem escapes AI scientists—and, too often, the rest of us—largely because, as Kelly implies, it is not taken seriously. It is thought to admit of easy answers. Ray Kurzweil, perhaps the face of futurism in AI and, since 2012, Director of Engineering at Google, has dismissed Kelly’s and others’ concerns as missing the power of exponential acceleration of computing. In what way, you say? In all ways, argues Kurzweil, in his 2005 The Singularity Is Near or in his earlier expositions of AI futurism like The Age of Spiritual Machines (1999).

It’s a formula. To dispel worries that superintelligence is a funky concept for technophiles, simply declare that future AI will be superhumanly intelligent by surpassing humans on all measures—greater IQ, social intelligence, emotional and spiritual depth. Name it: computers will just be more of that. Simple.

It’s simple in the same sense that a deus ex machina resolved issues in the plot of an ancient Greek play. A god or goddess was lowered from heaven via stage machinery (the machina) and straightened out the hopeless messes that mortals had got themselves into. “But why should that be true?” is a proper and oft-heard response among critics. And the critics have a favorite objection, one ironically made stronger in recent years as data-driven AI has exploded in popularity: common sense. Here, we humans can relax. Even if Mom so frequently muttered that we needed more of this mysterious sort of intelligence, it turns out that for AI purposes we are swimming in it. Machines, on the other hand, lack common sense.

The problem of common sense in AI is just the problem of programming machines to understand the rudiments of human thought, communication, and activity. A quick litmus test: System A has common sense if it can get the gist of the front page of a newspaper, even if it can’t answer questions about the periodic table or (for that matter) remember two hundred and thirty hiding places for an assortment of berries and nuts across three acres of woods.

Take a famous example from AI pioneer Terry Winograd: “The city councilmen refused the demonstrators a permit because they [feared/advocated] violence.” There’s a verb-choice quiz embedded in the sentence, and the task for System A is to select the right one. If System A has common sense, the answer is obvious enough. Strangely, not only squirrels with superhuman memories but advanced AI systems running on IBM Blue Gene supercomputers (which might play fabulous chess) hit brick walls with such questions. The quiz, as originally put by Winograd, so flummoxes modern AI that another AI pioneer, the University of Toronto’s Hector Levesque, and his colleague Ernest Davis devised a test for AI based on the “Winograd Schema,” as it came to be called. The focus is on the pronouns in a sentence, for example, “they.” Thus the updated question reads:

The city councilmen refused the demonstrators a permit because they feared violence. Who feared violence?

Readers find it easy to supply the right noun or noun phrase, “the city councilmen.” It’s obvious—it’s just common sense—who else would fear violence?

But now change the verb to “advocated” and the common sense stays, but the answer changes (“the demonstrators”). Winograd Schema quizzes are small potatoes to almost any native speaker of English past the age of, what, five? Ten? But they repeatedly flummox the very AI systems purported to be charging inexorably toward superintelligence. It seems like there’s a small problem with the logic here if such systems fail on easy language questions—and they do.
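To make the setup concrete, here is a minimal sketch in Python of a Winograd schema pair as data (an illustrative structure of my own devising, not the official challenge format): one sentence frame, two candidate referents, and a single special word whose swap flips the correct answer.

```python
# A minimal, illustrative representation of a Winograd schema pair.
# (A sketch for exposition, not the official challenge format.)
from dataclasses import dataclass

@dataclass
class WinogradSchema:
    frame: str          # sentence frame with a slot for the special word
    candidates: tuple   # the two possible referents of the pronoun
    answers: dict       # special word -> correct referent

schema = WinogradSchema(
    frame=("The city councilmen refused the demonstrators a permit "
           "because they {verb} violence."),
    candidates=("the city councilmen", "the demonstrators"),
    answers={"feared": "the city councilmen",
             "advocated": "the demonstrators"},
)

for verb, referent in schema.answers.items():
    print(schema.frame.format(verb=verb))
    print("  'they' refers to:", referent)
```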

The official Winograd Schema Challenge, organized by Levesque and friends, was retired in 2016 for the embarrassing reason that even the well-funded, bleeding-edge Google Brain team performed poorly on a test set of a few hundred questions like Winograd’s original one just mentioned. The team achieved roughly 60% accuracy, which might sound like a reasonable showing until one realizes that simply picking the first available answer yields about 50% accuracy. The test asked for a little common sense, that’s all. Google Brain and other competitors discovered quickly that their systems had none.
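Why is 50% the natural floor? Each schema comes as a pair of variants whose correct answers split evenly between the two candidates, so any fixed guessing rule wins on exactly one variant per pair. A tiny sketch, using illustrative pairs rather than the actual challenge data:

```python
# Why a "just pick the first candidate" baseline lands near 50%:
# each schema pair has two variants with opposite answers, so a fixed
# guess is right exactly once per pair. (Illustrative pairs, not real data.)

pairs = [
    # (first candidate, answer for variant 1, answer for variant 2)
    ("the city councilmen", "the city councilmen", "the demonstrators"),
    ("the trophy",          "the trophy",          "the suitcase"),
]

questions = [(first, gold) for first, a1, a2 in pairs for gold in (a1, a2)]
correct = sum(first == gold for first, gold in questions)
print(f"{correct}/{len(questions)} correct = {correct / len(questions):.0%}")  # 2/4 = 50%
```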

Perceptive AI scientists and part-time critics like Gary Marcus noted the failures of modern data-science approaches (and of every classic approach to date) early on. In a blistering article for the New Yorker back in 2013, titled “Why Can’t My Computer Understand Me?,” Marcus sensed that Levesque had it exactly right. Computers don’t have any common sense, and the problem doesn’t go away even when we throw big data, huge processing power, and fancy machine learning algorithms at it.

Marcus asked a great Winograd-Levesque-inspired question: “Can an alligator run the 100 meter hurdles?” To answer it, one need only bring to mind the stubby little legs of the great reptile and juxtapose them with the height of the hurdles on any track-and-field course. Very young kids will get it: alligators can’t jump! But the question poses seemingly intractable problems for AI systems, precisely because they lack any ordinary knowledge of things like alligators and hurdles on a track, and because “alligator” and “100 meter hurdles” are unlikely terms to pop up in the same sentence or page on the Web. In other words, you can’t use Google to get the answer. You have to think, if only for a moment. You have to have a little common sense.
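Here is a hedged sketch of why a purely frequency-driven system stalls on the alligator question: if its only resource is how often terms co-occur in text, a pairing like “alligator” and “100 meter hurdles” offers essentially nothing to count. The numbers below are invented placeholders, not real search statistics.

```python
# Hypothetical co-occurrence counts a web-statistics system might lean on.
# The figures are invented for illustration; the point is the zero.
hypothetical_cooccurrence = {
    ("alligator", "swamp"): 1_200_000,
    ("alligator", "teeth"): 950_000,
    ("alligator", "100 meter hurdles"): 0,   # the terms almost never co-occur
}

def frequency_verdict(term_a, term_b, min_evidence=1_000):
    """Answer only if the text statistics give us something to go on."""
    count = hypothetical_cooccurrence.get((term_a, term_b), 0)
    return "associated" if count >= min_evidence else "no signal -- cannot answer"

print(frequency_verdict("alligator", "swamp"))              # associated
print(frequency_verdict("alligator", "100 meter hurdles"))  # no signal -- cannot answer
```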

The Winograd Schema problem is actually a slice of a larger problem known to researchers in Natural Language Processing (NLP), an important subfield of AI concerned with building computer systems that can process and understand ordinary natural language like English or French. Among linguists, the problem is typically known as anaphora resolution (where “anaphora” means “reaching back,” as when we reach back to grab the noun phrase that precedes the plural pronoun “they” in the “city council” example above). Among computational types, it is known as the co-reference resolution problem: two expressions co-refer when they pick out the same real-world object, so that “demonstrators” and “they” both refer to the same group of demonstrators, angrily demonstrating somewhere. The two camps are describing the same problem, and both highlight the weird disambiguation issues that naturally arise in communication.

Humans hardly notice the grand puzzle of what a pronoun like “they” attaches to, as NLP researchers put it, but computational systems have failed and continue to fail miserably at such problems with ordinary language. That is in large part because the basic resolution strategy is noetic—we have knowledge of the attributes of angry mobs, and the cover-your-anatomy worries of city officials confronted with demonstrations. This is all knowledge—ordinary knowledge—we all tend to have, and to have easy access to for thinking and communication. It’s all common sense.
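For flavor, consider a deliberately knowledge-free resolver, the sort of surface heuristic a system without common sense falls back on: attach “they” to the most recently mentioned candidate. It gets one verb variant right and the other wrong, because nothing on the surface of the sentence changes; only the background knowledge the verb calls up does. (A toy sketch, not how production coreference systems are built.)

```python
# A toy, knowledge-free heuristic: resolve "they" to the candidate noun
# phrase mentioned most recently before the pronoun. (Illustration only,
# not how real coreference systems work.)

def resolve_by_recency(sentence, candidates, pronoun="they"):
    before = sentence[:sentence.index(pronoun)]
    # choose the candidate whose last mention sits closest to the pronoun
    return max(candidates, key=lambda c: before.rfind(c))

candidates = ["the city councilmen", "the demonstrators"]
for verb, gold in [("feared", "the city councilmen"),
                   ("advocated", "the demonstrators")]:
    sentence = ("the city councilmen refused the demonstrators a permit "
                f"because they {verb} violence.")
    guess = resolve_by_recency(sentence, candidates)
    verdict = "correct" if guess == gold else "WRONG"
    print(f"{verb:>9}: 'they' -> {guess} ({verdict})")
```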

The common sense problem in AI is old, pretty much as old as the field itself. Turing himself alluded to it when he posed the problem of having a conversation—the Turing Test—as the proper aim of a matured and successful AI (though the term “AI” is an anachronism here; it was coined later). Old, or classic, AI—roughly, all the work on AI done before the Web—tried to tame common sense by adding concepts and rules to reason about those concepts, an approach known as “Knowledge Representation and Reasoning.” This may seem silly today, but it’s plausibly the more intuitive strategy for tackling common sense. One builds a large knowledge base with descriptions of alligators and races and legs and measurements and so on. The classic system receives a query about this thing (an alligator) running this race (the 100 meter hurdles), looks up the relevant concepts, analyzes the action required “to run,” which turns up worries about stubby legs, and, lo and behold, the old system shakes its head: I’m sorry, Dave. No.
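To give a flavor of that classic strategy, here is a toy knowledge-base sketch in the spirit of Knowledge Representation and Reasoning; the facts and the rule are my own illustrative stand-ins, far cruder than real systems such as Cyc.

```python
# A toy Knowledge Representation and Reasoning sketch (illustrative facts
# and rule only; real KR&R systems are vastly larger and more expressive).

knowledge_base = {
    ("alligator", "leg_length_cm"): 15,             # stubby legs (rough figure)
    ("alligator", "can_jump_high"): False,
    ("100 meter hurdles", "hurdle_height_cm"): 84,  # approximate standard height
}

def can_run_event(animal, event):
    """Crude rule: to run a hurdles event you must be able to clear hurdles."""
    hurdle_height = knowledge_base.get((event, "hurdle_height_cm"))
    if hurdle_height is None:
        return True                                  # nothing to clear
    return knowledge_base.get((animal, "can_jump_high"), False)

print(can_run_event("alligator", "100 meter hurdles"))  # False: "I'm sorry, Dave. No."
```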

Researchers had learned by the 1970s and certainly by the failure of the Japanese Fifth Generation project in the 1980s—a huge Knowledge Representation and Reasoning project funded to the tune of $400 million—that common sense was too tough a nut to crack with explicit data structures representing things in the world and rules to reason about them.

Enter the Web. The Web quickly deposited tera- and then exabytes of textual data representing people talking about stuff in the world, and it soon occurred to AI and NLP scientists that all those Web pages might work better than manual knowledge-base efforts. But, as with the older attempts (and to some extent even more hopelessly), the problem of common sense turned out to be irreducible to massive data crunching. Part of the challenge, as I’ve mentioned, is that questions mentioning everyday objects that rarely occur together in text hamstring statistical approaches, which go looking for mega-frequencies where there aren’t any.

But part of the problem is ironic; it resuscitates the need for more robust treatments of knowledge and reasoning, treatments that modern efforts have ignored and pooh-poohed. We need some new approach to the most basic problems in understanding language and the world, because neither the old classical approaches nor the new data-scientific angle can make any headway on good ol’ common sense.

Back to Kelly. If we’re making such pitiable progress on the rudiments of intelligence as we know it, in what sense are we on the way to a super-version of intelligence? Kelly suggests that we’re building technology that’s good at numbers and data-intensive work but lousy at the human-intensive stuff. In other words, we might end up with the computational equivalent of a squirrel, though in this case we already have fantastically reliable and indefatigable calculators.

But however AI continues to change and progress, solving a variety of problems once thought unsolvable, it does seem steadfastly stuck on common sense. And, as many pundits and critics are beginning to realize, a world full of fast computers with zero common sense is dangerous. Self-driving cars evoke fears of robotic idiocy gone awry to great human detriment, and there are newer fears about AI systems stupidly manipulating prices on Amazon or the stock market, or (even worse) about systems entrusted with predicting recidivism rates among criminals or targeting neighborhoods for police patrols. All of these would seem to require a dose of ordinary common-sense intelligence to realize when something is wrong.

Entrusting decisions to computers without common sense is itself an example of being too light on common sense. AI has come full circle, one might say. Superintelligence is not the problem. It’s boring, ordinary intelligence. And that is a problem for AI indeed.


Further reading from our Analysis desk:

Can AI help Hollywood predict the next big hit? AI analysis sifts the past finely. But how well does the past predict the future?

The Golden Age of the Web?— A Dissent What happened to the collaborative culture, decentralized markets, and wisdom of crowds that bestsellers prophesied fifteen years ago?

We built the power big social media have over us Click by click, and the machines learned the patterns. Now we aren’t sure who is in charge

Futurism doesn’t learn from past experience. Technological success stories cannot be extrapolated into an indefinite future

