Yes, actually, as Harvard cognitive psychologist Elizabeth S. Spelke has noted:
According to her, while infants are no match for AI at many benchmark tasks, there are things they can do that remain beyond AI's reach. Despite being terrible at labeling images, hopeless at mining text, and awful at video games, after just a few months they begin to understand how the physical world works and to grasp the foundations of language, such as grammar. A couple of years later, they can extract knowledge, recognize objects, reason, extrapolate motion, develop mathematical skills, understand cause and effect, and acquire abstract concepts from their surroundings. This is what fascinates Spelke and other experts pondering how babies learn. Understanding it could help us design better AI. – Preetipadma, "How intelligent is artificial intelligence?" at Analytics Insight
But teaching AI to do that is easier said than done because it is bound up with the hard problem of consciousness.
Prominent AI engineer François Chollet pointed out in a recent open-access research paper that adaptability is what characterizes human intelligence: we generalize from the known to the unknown without prior programming, and we rarely get stuck in endless feedback loops. He writes,
… in practice, the contemporary AI community still gravitates towards benchmarking intelligence by comparing the skill exhibited by AIs and humans at specific tasks, such as board games and video games. We argue that solely measuring skill at any given task falls short of measuring intelligence, because skill is heavily modulated by prior knowledge and experience: unlimited priors or unlimited training data allow experimenters to “buy” arbitrary levels of skills for a system, in a way that masks the system’s own generalization power.
That is, the programmer leverages vast computing power to play a winning game of chess by searching through far more candidate moves than any human could. But, as Oxford mathematician John Lennox explained recently here, "Deep Blue became the world champion at chess, but it cannot even play checkers, let alone drive a car or make a scientific discovery."
In the paper, Chollet examines various definitions of intelligence and, drawing on Algorithmic Information Theory, proposes a new formal definition that describes intelligence as "skill-acquisition efficiency," "highlighting the concepts of scope, generalization difficulty, priors, and experience, as critical pieces to be accounted for in characterizing intelligent systems":
Using this definition, we propose a set of guidelines for what a general AI benchmark should look like. Finally, we present a new benchmark closely following these guidelines, the Abstraction and Reasoning Corpus (ARC), built upon an explicit set of priors designed to be as close as possible to innate human priors. We argue that ARC can be used to measure a human-like form of general fluid intelligence and that it enables fair general intelligence comparisons between AI systems and humans.
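To make the idea concrete: an ARC task gives a solver a handful of input/output grid pairs and asks it to infer the underlying transformation, then apply it to a new input. The sketch below is an invented toy task in the spirit of ARC (the grids and the "reverse each row" rule are illustrative assumptions, not drawn from the real corpus), showing how few demonstrations the solver gets to generalize from.

```python
# A minimal, illustrative sketch of an ARC-style task (not from the real
# corpus). Grids are small 2-D arrays of integers 0-9, each integer
# denoting a color; the solver sees only the "train" pairs.
task = {
    "train": [
        {"input": [[0, 1], [1, 0]], "output": [[1, 0], [0, 1]]},
        {"input": [[2, 0], [0, 2]], "output": [[0, 2], [2, 0]]},
    ],
    "test": [
        {"input": [[3, 0], [0, 3]]},
    ],
}

def hypothesis(grid):
    """Candidate rule inferred from the train pairs: reverse each row."""
    return [list(reversed(row)) for row in grid]

# Check the candidate rule against every training demonstration.
consistent = all(
    hypothesis(pair["input"]) == pair["output"] for pair in task["train"]
)
print(consistent)                            # True
print(hypothesis(task["test"][0]["input"]))  # [[0, 3], [3, 0]]
```

The point of the benchmark is that nothing in the two demonstrations can be memorized in advance: the solver must form and verify a hypothesis from a few examples, which is exactly the "generalization power" that per-task skill benchmarks fail to isolate.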
It will be helpful to compare his carefully thought-out claims with the sort of thing one reads in pop science media, exemplified by our Top Ten AI hypes of 2018 and 2019 (e.g., "AI will replace scientists!" and "AI Can Write Novels and Screenplays Better than the Pros!").
For one thing, commentators will need to meet a higher standard when making claims and predictions about AI intelligence.
Other articles you might enjoy on the limitations of AI:
Thinking machines: Has the Lovelace test been passed? Surprising results do not equate to creativity. Is there such a thing as machine creativity?
The flawed logic behind thinking computers There is another way to prove a negative besides exhaustively enumerating the possibilities. (Eric Holloway)