Cute handmade reborn baby doll

Is GPT-3 the “Reborn Doll” of Artificial Intelligence?

Unlike the reborn doll collectors, GPT-3 engineers truly believe that scaling up the model size will suddenly cause GPT-3 to think and talk like a real human

There is a worldwide community that collects “reborn dolls.” These dolls look almost like real babies. Look again, closely, at the featured photo above…

The dolls help some collectors cope with the loss of a child. For others, they fulfill a sense of self-image. And still others see them simply as a quirky hobby.

Regardless of how closely the dolls mimic the appearance of real babies, they will forever remain copies because their appearance is not generated by biological processes. For the collectors, this is a feature, not a bug. They enjoy the appearance of a baby without the real-life difficulties of raising a real person. As one collector comments, her doll “doesn’t turn into a teenager who wants an iPhone 11.”

What do reborn dolls have to do with GPT-3? In both cases, people are mimicking a real thing: babies in one case, language learning in the other. The difference is that the reborn doll collectors don’t believe that making the dolls look more and more lifelike will suddenly cross a threshold where a doll becomes a real baby. Not so with the GPT-3 engineers. They believe that the difference between their model and real language learning is just a matter of scaling up the model size, and that at some size the model will become indistinguishable from a human language learner:

The success of GPT-3 has been put down to one thing: it was bigger than any AI of its type, meaning, roughly speaking, that it boasted many more artificial neurons. No one had expected that this shift in scale would make such a difference. But as AIs grow ever larger, they are not only proving themselves the match of humans at all manner of tasks, they are also demonstrating the ability to take on challenges they have never seen.

As a result, some in the field are beginning to think the inexorable drive to greater scales will lead to AIs with abilities comparable with those of humans. Samuel Bowman at New York University is among them. “Scaling up current methods significantly, especially after a decade or two of compute improvements, seems likely to make human-level language behaviour easy to attain,” he says.

Mordechai Rorvig, “Supersized AIs: Are truly intelligent machines just a matter of scale?” at New Scientist (October 6, 2021)

Is this a reasonable assumption?

Let’s return to the reborn doll. As we noted, the collectors are under no misconception that they are getting closer and closer to producing a real baby, because they know that real babies are generated by biological processes that are fundamentally unlike the processes the crafters use to make the dolls.

What makes these biological processes fundamentally unlike the crafters’ processes? The most significant difference is that the biological processes are intrinsic to the human baby, while the crafters’ processes are extrinsic to their product. If the crafter stops crafting, the doll stops developing.

If we look at the difference between GPT-3 and human language learning, we see the same fundamental difference.

Human language learning is based on an intrinsic process. The very “model” that is learning the language, human intelligence, is itself responsible for developing that model. Not so with GPT-3. Creating the GPT-3 model depends on at least three different tasks:

  1. picking the correct neural network model (of which there are many)
  2. picking the right training method (of which there are many)
  3. picking the right training data (of which there are many possible sets)

Who makes all these selections? Not the GPT-3 model. It is the human engineers who, like the crafters of the reborn dolls, are external to the model and who make all the decisions that result in the final GPT-3 model.
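To make that concrete, here is a minimal, hypothetical sketch in Python of the kind of configuration that gets written down before training ever begins. The class and field names are illustrative, not OpenAI’s actual code; only the architecture and parameter count follow OpenAI’s published description of GPT-3.

```python
# Hypothetical sketch: the three selections that precede training a
# GPT-3-style model. Every value is chosen by human engineers; none of
# them is chosen by the model itself.

from dataclasses import dataclass


@dataclass
class TrainingSetup:
    architecture: str     # 1. which neural network model (of many)
    training_method: str  # 2. which training method (of many)
    training_data: str    # 3. which training data (of many possible sets)
    num_parameters: int   # the "scale" knob is also an external decision


# Illustrative values; the architecture and parameter count follow
# OpenAI's published description of GPT-3 (a 175-billion-parameter,
# decoder-only transformer).
gpt3_like_setup = TrainingSetup(
    architecture="decoder-only transformer",
    training_method="next-token prediction with a hand-tuned optimizer",
    training_data="web text filtered and weighted by the engineers",
    num_parameters=175_000_000_000,
)

print(gpt3_like_setup)
```

The point of the sketch is simply that every one of these values is supplied from outside the model, exactly as the reborn doll’s features are supplied by the crafter.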

If we doubt that the reborn dolls will ever become real babies, why should we expect a different outcome with the GPT-3 language model?

The only evidence presented in the New Scientist article for why we should expect a larger GPT-3 model to eventually resemble human language learning is that researchers saw an improvement when they built a bigger model. If you are willing to overlook the large amount of nonsense GPT-3 generates, some of its sentences seem humanlike if you squint just right. A bigger model simply gives the researchers more to work with.

But if the reborn doll crafters get better materials to craft the dolls, those materials do not move the dolls any closer to being real babies. It’s the same with GPT-3. What seems most likely, based on the comparison with the reborn dolls, is that the engineers have only improved the model’s ability to mimic certain aspects of human language; they are still no closer to creating a genuine language-learning algorithm.


You may also enjoy:

Did GPT-3 really write that Guardian essay without human help? Fortunately, there’s a way we can tell whether the editors did the program’s thinking for it. Graphing the cross entropy, as I have done, shows huge changes where the editors organized the machine output into coherent thoughts. (Eric Holloway)

and

New AI can create and detect fake news. But how good is it at either task? We tested some copy.


Eric Holloway

Senior Fellow, Walter Bradley Center for Natural & Artificial Intelligence
Eric Holloway is a Senior Fellow with the Walter Bradley Center for Natural & Artificial Intelligence, and holds a PhD in Electrical & Computer Engineering from Baylor University. A Captain in the United States Air Force, he served in the US and Afghanistan. He is the co-editor of Naturalism and Its Alternatives in Scientific Methodologies.
