The AI Bogeyman cackles a frightening refrain: “AI will take your job. AI will take your job. AI will take your job.” And now John Seabrook, a staff writer at the New Yorker, has heard the cackle and wonders if his job is next.
Seabrook ponders the possibility in his essay “The Next Word,” in which he addresses “predictive text.” In this context, “predictive text” is the feature of an AI (Seabrook considered the OpenAI GPT-2, or Generative Pretrained Transformer, system) whereby the machine “predicts” what text “should” come next, given what it has seen so far. Think of it as completing someone’s sentences, only pushed to the point of completing their thoughts, paragraphs, and essays.
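To see the statistical core of the idea without any of GPT-2’s scale, here is a toy sketch (my own illustration, not OpenAI’s model): a bigram predictor that proposes whichever word most often followed the current word in its training text. The training sentences are invented for the example.

```python
from collections import Counter, defaultdict

# Toy "predictive text": count, for each word, which word follows it
# most often in the training text, and predict that word next.
training_text = (
    "i am proud of you . i am pleased to see you . "
    "i am proud of this work ."
)

# Tally how often each word follows each other word.
follows = defaultdict(Counter)
words = training_text.split()
for current, nxt in zip(words, words[1:]):
    follows[current][nxt] += 1

def predict_next(word):
    """Return the most frequently observed next word, or None if unseen."""
    counts = follows.get(word)
    return counts.most_common(1)[0][0] if counts else None

# "proud" follows "am" twice in the training text, "pleased" only once,
# so the predictor suggests "proud" -- statistics, nothing more.
print(predict_next("am"))  # -> proud
```

GPT-2 is vastly more sophisticated, predicting from long stretches of context rather than a single preceding word, but the principle is the same: the output is whatever the statistics of the training data make most likely, not what the machine “thinks.”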
Using AI to generate content, rather than to recognize it, began soon after systems could identify pictures of cats. In a sense, researchers take a trained AI and turn it upside-down, so that input becomes output and output becomes input. Such a system could, for example, take the phrase “cat in a tree” and generate a matching photo.
A fun, if disconcerting, example is the site ThisPersonDoesNotExist.com. It uses “generative” AI to produce images of people who do not exist. Other examples include the AI systems recently reported in the media that produce “paintings,” a field started by Alexander Mordvintsev at Google in 2015 with the release of Deep Dream.
The OpenAI system Seabrook used is also not the first AI text generator. Seabrook dabbled earlier with AI-created text when he used Google’s “Smart Compose” feature to quickly complete an email to his son:
“Finally, I crossed my Rubicon. The sentence itself was a pedestrian affair. Typing an e-mail to my son, I began “I am p—” and was about to write “pleased” when predictive text suggested “proud of you.” I am proud of you. Wow, I don’t say that enough. And clearly Smart Compose thinks that’s what most fathers in my state say to their sons in e-mails. I hit Tab. No biggie. And yet, sitting there at the keyboard, I could feel the uncanny valley prickling my neck. It wasn’t that Smart Compose had guessed correctly where my thoughts were headed—in fact, it hadn’t. The creepy thing was that the machine was more thoughtful than I was.”

John Seabrook, “The Next Word” at The New Yorker
Seabrook correctly recognized that the machine had guessed wrong, but he misses the mark in saying the “machine was more thoughtful than I was.” The machine’s text was wrong not because the machine was more thoughtful but because it was utterly thoughtless: statistics, and nothing more, yielded the response.
Seabrook includes in his essay examples that GPT-2 produced when fed what Seabrook had written up to that point. Here’s what GPT-2 generated, starting at roughly the article’s halfway point:
“A long time ago, the whole world could have said that it lived in a golden age of machines that created wealth and kept peace. But then the world was bound to pass from the golden age to the gilded age, to the world of machine superpowers and capitalism, to the one of savage inequality and corporatism. The more machines rely on language, the more power they have to distort the discourse, and the more that ordinary people are at risk of being put in a dehumanized social category.”

GPT-2-generated text, quoted in John Seabrook, “The Next Word” at The New Yorker
The thoughts do not improve. And, needless to say, that is not what Seabrook wrote next.
Seabrook mistakenly reasons that, because AI has achieved advances in other areas (such as winning at complex video games), its writing “skill” will inexorably advance:
“What if some much later iteration of GPT-2, far more powerful than this model, could be hybridized with a procedural system, so that it would be able to write causally and distinguish truth from fiction and at the same time draw from its well of deep learning? One can imagine a kind of Joycean superauthor, capable of any style, turning out spine-tingling suspense novels, massively researched biographies, and nuanced analyses of the Israeli-Palestinian conflict. Humans would stop writing, or at least publishing, because all the readers would be captivated by the machines. What then?”

John Seabrook, “The Next Word” at The New Yorker
We’ll just have to guess what happens then. GPT-2, even as its quality improves, does what all generative AI systems do: It regurgitates. It “creates” by spewing a mangled selection of its training data. It does not produce new thoughts. It does not have new ideas. And it certainly does not come up with new insights.
Writers create metaphors and similes. They build story arcs and narratives. They analyze and tie disparate observations together in new ways. Good writers do not spew and, like good jazz players, they do not regurgitate.
Seabrook’s job is secure unless his or The New Yorker’s standards deteriorate to the point that they no longer value thought and insight. But, who knows, that might happen.
More from Brendan Dixon on AI and the arts, especially jazz:
Fan tries programming AI jazz, gets lots and lots of AI… Jazz is spontaneous, but spontaneous noise is not jazz
AI can’t do jazz because spontaneity is at jazz’s core. AI “artists,” in all the forms presently available, merely replay their programming.
Could AI authentically create anything?
AI creates kitsch, not art