Mind Matters Natural and Artificial Intelligence News and Analysis

English Prof: You’ll Get Used To Machine Writing — and Like It!

Yohei Igarashi argues that seamless machine writing is an outcome of the fact that most of what humans actually write is highly predictable

English professor Yohei Igarashi, author of The Connected Condition: Romanticism and the Dream of Communication (2019), contends that writing can mostly be automated because most of it is predictable:

Instances of automated journalism (sports news and financial reports, for example) are on the rise, while explanations of the benefits from insurance companies and marketing copy likewise rely on machine-writing technology. We can imagine a near future where machines play an even larger part in highly conventional kinds of writing, but also a more creative role in imaginative genres (novels, poems, plays), even computer code itself.

Yohei Igarashi, “The cliché writes back” at Aeon (September 9, 2021)

Currently, he says, humans' ability to guess whether a given text is machine-written is only a little better than chance.

How does machine writing work? The best-known model, GPT-3, was trained to write by analyzing roughly 500 billion words, learning statistical patterns of both word meaning and word order (syntax).

One outcome of such a process is predictive text. Our e-mail or cell phones can save us time by suggesting words or phrases because they appear so often in ordinary language that they are likely to be correct. For example, “Your suit will be ready at 5:00 pm” is more likely to be followed by “Thank you” (a choice offered) than “Keep it. I’ve decided to give up wearing suits” (a choice one must type in).
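The mechanism behind such suggestions can be sketched with a toy bigram model: count how often each word follows each other word, then offer the most frequent successors. This is a minimal illustration only — the corpus and function names are invented, and GPT-3 itself uses a neural network rather than raw counts — but the principle of ranking next words by observed frequency is the same.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the billions of words a real model sees.
corpus = (
    "your suit will be ready at five . thank you . "
    "your order will be ready at noon . thank you . "
    "your suit will be cleaned by friday . thank you ."
).split()

# Count how often each word follows each preceding word (a bigram model).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def suggest(word, k=2):
    """Return up to k of the most likely next words after `word`."""
    return [w for w, _ in following[word].most_common(k)]

print(suggest("will"))  # only "be" ever follows "will" in this corpus
print(suggest("be"))    # "ready" is the most common word after "be"
```

Because "thank you" appears after every transaction in the corpus, a model like this would rank it far above an unprecedented reply such as "Keep it" — which is exactly why the common phrase gets offered as a one-tap choice.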

There are limitations on machine writing. Trained on the internet rather than on edited books, the machines absorb a great deal of rant and drivel that no publisher would likely touch. But more than that:

… such models have no actual knowledge of the world, which is why a language model, in possession of a great deal of information about word sequence likelihoods, can write illogical sentences. It might suggest that if you’re a man needing to appear in court, but your suit is dirty, you should go in your bathing suit instead. Or, while it knows that cheese in French is fromage, it doesn’t know that cheeses don’t typically melt in the fridge. This sort of knowledge, on which language models are tested, is called ‘common-sense physics’. This sounds oxymoronic, but is appropriate when you consider that the inner workings of these deep learning-based models, though basic, also seem utterly mysterious.

Yohei Igarashi, “The cliché writes back” at Aeon (September 9, 2021)

Igarashi goes on to argue that the basic concept is not even new. George Orwell (1903–1950) complained about automatic language in a famous essay, “Politics and the English Language” (1946), long before computers were widely used. Orwell meant, of course, propaganda and cliché.

He asks, “What would Orwell think of computer-generated texts, which gum together words precisely because they’ve been so ordered by others and repeated over and again?”

Actually, Orwell addressed that very topic in 1984. Julia, one of the central characters, has a job superintending the machine that churns out novels: “…she worked, as he had guessed, on the novel-writing machines in the Fiction Department. She enjoyed her work, which consisted chiefly in running and servicing a powerful but tricky electric motor… ” We can be fairly sure that nothing in these novels would give anyone new ideas.

More daringly, citing scholar Walter J. Ong (1912–2003), Igarashi contends that through most of history, most human language has been formulaic and cliché-ridden. Before writing and printing technology, he argues, “knowledge was stored and circulated through such oral formulas. Imagine if we had to navigate the world without the benefit of writing, using only informational mnemonics such as ‘Thirty days hath September,/April, June, and November…’”

The advent of printing, and hence widespread literacy, enabled our forebears to store information offsite, so to speak, and become more individually creative. Ong saw the Romantic movement of the 19th century, which prized unique self-expression, as a natural outcome. In fact, Igarashi thinks, our modern desire for unique self-expression is a legacy of the Romantics. Uniqueness would have been much less prized in earlier times.

Igarashi is hopeful about the future of unique creativity in an age of machine writing, concluding,

These older technologies of writing – from handwriting to print – freed up the human mind from the burden of information storage so that we could be more creative. Likewise, today’s text technologies, which can generate serviceable writing, need not kill off the idea of human originality so much as reinvigorate it – a new Romanticism. One that can appropriate, manipulate, play with, make fun of, even reject whatever machine writing ends up being. And if human authors seem to have the last word, a bigger and better language model will inevitably come along and consume all that new writing. Then writers will innovate again, and on and on.

Yohei Igarashi, “The cliché writes back” at Aeon (September 9, 2021)

His fundamental point is that seamless machine writing is an outcome of the fact that most of what we actually write is highly predictable. Thus, our expressed thoughts may be easier to automate than we think. Put another way, if many people have said it before, the web crawler will find it.

Those who want innovation and personal creativity must be innovators and creators themselves. Uniqueness is the only thing that the machine can’t predict.


You may also wish to read: Can a computer write your paper for you someday soon? GPT-3 recently came up with a paragraph that — a pop psychologist agreed — sounded just like him. The trouble was, for every cobbled-together paragraph that made sense, there was one that didn’t. And GPT-3 can’t help picking up bad ideas.

