
Should AI-Written News Stories Have Bylines? Whose?

Like it or not, AI is here to stay. So, how do we make the best use of it in writing?

Consider OpenAI’s GPT-2 text generation AI. OpenAI claims GPT-2 can create “coherent paragraphs of text” (though what we’ve seen stretches the meaning of “coherent”).
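To see what that kind of generation looks like in practice, here is a minimal sketch assuming the open-source Hugging Face `transformers` library and the publicly released GPT-2 weights (the prompt is invented for illustration; this is not OpenAI's own demo setup):

```python
# A minimal sketch: sample a GPT-2 continuation with the Hugging Face
# `transformers` library. The prompt below is invented for illustration.
from transformers import pipeline, set_seed

set_seed(42)  # make the sampled output repeatable
generator = pipeline("text-generation", model="gpt2")

result = generator(
    "The city council voted last night to",  # a news-style prompt
    max_length=60,           # cap on prompt + continuation, in tokens
    num_return_sequences=1,  # one sample is enough for a demo
)
print(result[0]["generated_text"])
```

Run it a few times without the fixed seed and you will see the point: the output is fluent sentence by sentence, but whether it adds up to a "coherent paragraph" is very much in the eye of the beholder.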

It also raises a question: If a writer uses AI to “write” an article, or if an article is written entirely by an AI system, what should be the byline?

When I read a piece created by another human, I am engaging with another mind. When I read a piece “authored” by an AI, however, I’m engaging with an algorithm. The human may ponder, evaluate, weigh, and rewrite. (I sure do.) An AI spews: Given this input, under these conditions, this is the output. Period.

AI can be useful in writing. Consider an AI working from a mass of field reports and “writing” an overview of very complex events. Or a health writer who must summarize a huge batch of academic papers to determine what coherent message can be gleaned from them about a controversial issue in cancer treatment or vaccination. These uses of machine analysis benefit us by augmenting what we can do — like every other tool we use. There will always be problems with dishonest uses such as AI essays, deepfakes, or other artifacts that are meant to deceive. But honest practice also raises some issues we need to think about:

At the moment, as Prof Rebecca Crootof points out in an illuminating analysis on the Lawfare blog, there is no agreement about AI researchers’ publication obligations. And of all the proliferating “ethical” AI guidelines, only a few entities explicitly acknowledge that there may be times when limited release is appropriate. At the moment, the law has little to say about any of this – so we’re currently at the same stage as we were when governments first started thinking about regulating medicinal drugs.

John Naughton, “AI is making literary leaps—now we need the rules to catch up” at The Guardian

We’ve suggested before that the government agencies whose job is ensuring public safety, such as the NHTSA, should do just that. Rather than ceding safety assessments to self-interested technology companies, for example, they should develop tests and guidelines to determine whether the AI in our midst is safe—at least as safe as the humans it augments or replaces.

We should insist that media do likewise: pieces “authored” by an AI should say so clearly in the byline. It’s even possible to embed invisible watermarks in AI-generated or augmented video that securely denote “authorship” (a toy sketch of the idea follows below). Mainstream journalism today is struggling with loss of audience because its critical former function of simply conveying facts has been rendered obsolete by the internet. If such media start using AI without taking simple steps like this, they will drown in their own irrelevance.
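To make the watermarking idea concrete, here is a toy sketch in Python, assuming the Pillow imaging library; the tag text and function names are invented for illustration. It hides a short provenance string in an image's least significant bits. Real-world schemes are far more robust, work on video, and are typically cryptographically signed:

```python
# A toy sketch of invisible watermarking using the Pillow imaging library:
# hide a short provenance tag in the least significant bit of each pixel's
# red channel. Everything here is illustrative only.
from PIL import Image

TAG = "authored-by:AI"  # hypothetical provenance tag

def embed_tag(src_path: str, dst_path: str, tag: str = TAG) -> None:
    """Write each bit of `tag` into the red channel's lowest bit."""
    img = Image.open(src_path).convert("RGB")
    pixels = img.load()
    bits = "".join(f"{byte:08b}" for byte in tag.encode("utf-8"))
    width, height = img.size
    if len(bits) > width * height:
        raise ValueError("image too small to hold the tag")
    for i, bit in enumerate(bits):
        x, y = i % width, i // width
        r, g, b = pixels[x, y]
        pixels[x, y] = ((r & ~1) | int(bit), g, b)
    img.save(dst_path, "PNG")  # a lossless format preserves the hidden bits

def extract_tag(path: str, n_bytes: int = len(TAG)) -> str:
    """Read `n_bytes` of tag back out of the lowest red bits."""
    img = Image.open(path).convert("RGB")
    pixels = img.load()
    width, _ = img.size
    bits = "".join(
        str(pixels[i % width, i // width][0] & 1) for i in range(n_bytes * 8)
    )
    data = bytes(int(bits[j:j + 8], 2) for j in range(0, len(bits), 8))
    return data.decode("utf-8")
```

A newsroom could stamp AI-generated images with something like embed_tag before publication and verify provenance later with extract_tag; production systems sign the provenance data so it cannot be forged, which a bare least-significant-bit trick like this cannot guarantee.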

Though, as much as I hope for that, I remain skeptical. Tech companies seem bent on pushing ethical boundaries, and many media outlets (feeling their own encroaching irrelevance) get tech “google”-eyes and too often fail to ask the hard questions about what they are automating, how, and why.

So, what do we do? I suggest a couple of things.

First, we need more skepticism. In finance, there’s an adage: “If an investment looks too good to be true, it probably is.” Lunches are not free. We must treat videos, images, and articles that all too easily confirm, or disconfirm, our beliefs as suspect. They might be true, but they could just as easily be false. We should treat them like gossip. That is, we should not share them unless we know, beyond reasonable doubt, that they are well verified.

Second, we should support — by subscribing and sharing — those media outlets that have clear bylines and dig deep into their sources (vs. re-working what they’ve scraped together off the web). As long as I know the source, I can make up my mind about the content.

AI is not going to go gently into any kind of good night. Our best response is, well, to be human. Let’s use our minds and demand the same from others.


More from Brendan Dixon on AI and the arts:

Can predictive text replace writers? A New Yorker staff writer ponders his future and the machine’s

Fan tries programming AI jazz, gets lots and lots of AI… Jazz is spontaneous, but spontaneous noise is not jazz

AI can’t do jazz because spontaneity is at jazz’s core. AI “artists”—in all the forms presently available — merely replay their programming.

Could AI authentically create anything?

AI creates kitsch, not art

and

The underwhelming creativity of AI


Brendan Dixon

Fellow, Walter Bradley Center for Natural & Artificial Intelligence
Brendan Dixon is a Software Architect with experience designing, creating, and managing projects of all sizes. His first foray into Artificial Intelligence was in the 1980s, when he built an Expert System to assist in the diagnosis of software problems at IBM. Since then, he’s worked as both a Principal Engineer and a Development Manager for industry leaders, such as Microsoft and Amazon, and for numerous start-ups. While he spent most of that time working on other types of software, he’s remained engaged with and interested in Artificial Intelligence.
