Mind Matters Natural and Artificial Intelligence News and Analysis

Do Bots Spreading False News Really Threaten Democracy?

Researchers found that humans spread more false news than bots

Social scientists have been complaining recently that bots—algorithms on social media that are designed to behave like actual social media users—are frustrating their research:

“Bots are designed to behave online like people,” says Jon-Patrick Allem, a social scientist at the University of Southern California in Los Angeles. “If a researcher is interested in describing public attitudes, you have to be sure that the data you’re collecting on social media is actually from people.”

Heidi Ledford, “Social scientists battle bots to glean insights from online chatter” at Nature

Bots are currently blamed, we are told, for attempts to sway the 2016 US election and for promoting e-cigarettes and cannabis online as health products. Many researchers don’t even try to filter bots out:

“You might be artificially giving the bots a voice by treating them as if they are really part of the discussion, when they are actually just amplifying something that may not be voiced by the community,” she says. In her case, she notes, failing to weed out bots could lead her to conclude that people are generating more or different anti-vaccination chatter than they actually are.

Heidi Ledford, “Social scientists battle bots to glean insights from online chatter” at Nature

A 2015 article emphasized how easy bots are to construct:

All you need is an IFTTT.com account, along with an RSS feed and maybe $10 USD for 1,000 fake friends – all of them bots… Or better yet, download your very own bot software to wreak havoc on social networks in the comfort of your own home, within mere minutes. Much of this is even freeware (check out GitHub, for example). If you need a bot that can actually do conversations with you or others and pretend to be a human, you might check the Gonzales tutorial for code.

Lutz Finger, “Do Evil – The Business Of Social Media Bots” at Forbes

Finger’s analysis suggests that bots succeed in influencing opinion because social media is such a limited sphere compared to real life that we can have a hard time determining whether there is a real entity behind a message. But one thing they clearly do, according to a 2018 study of the 2017 Catalan referendum, is increase the inflammatory content of social media:

We provide evidence that social bots target mainly human influencers but generate semantic content depending on the polarized stance of their targets. During the 2017 Catalan referendum, used as a case study, social bots generated and promoted violent content aimed at Independentists, ultimately exacerbating social conflict online.

Massimo Stella, Emilio Ferrara, and Manlio De Domenico, “Bots increase exposure to negative and inflammatory content in online social systems” at PNAS (open access)

Some commentators believe that social media bots damage democracy as a result:

A modest network of coordinating bot accounts on Twitter can massively expand the size and scope of attention a tweet receives, influence the course of a thread, and either mitigate or multiply the impact of a media event. An April 2018 study by the Pew Research Center estimates that between 9 percent and 15 percent of all Twitter accounts are automated. What’s more, 66 percent of all tweeted links to popular sites were disseminated by bot accounts, though a staggering 89 percent of links to news-aggregation sites were bot sourced.

Andrew Tarantola, “Social media bots are damaging our democracy” at Engadget

But wait. A group of researchers found that people spread false news faster than bots do:

We generally think that bots distort the types of information that reaches the public, but—in this study at least—they don’t seem to be skewing the headlines toward false news, he notes. They propagated true and false news roughly equally. …

He and his colleagues collected 12 years of data from Twitter, starting from the social media platform’s inception in 2006 … They found that whereas the truth rarely reached more than 1000 Twitter users, the most pernicious false news stories—like the Mayweather tale—routinely reached well over 10,000 people. False news propagated faster and wider for all forms of news—but the problem was particularly evident for political news, the team reports today in Science.

At first the researchers thought that bots might be responsible, so they used sophisticated bot-detection technology to remove social media shares generated by bots. But the results didn’t change: False news still spread at roughly the same rate and to the same number of people. By default, that meant that human beings were responsible for the virality of false news.

Katie Langin, “Fake news spreads faster than true news on Twitter—thanks to people, not bots” at Science
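To see what “removing shares generated by bots” means in practice, here is a toy sketch in Python. This is not the researchers’ actual method (studies like this typically rely on trained classifiers such as Botometer); the thresholds, field names, and accounts below are invented purely for illustration of the filtering idea.

```python
# Toy illustration of heuristic bot filtering. NOT the Science study's
# actual method; real work uses trained classifiers (e.g. Botometer).
# All thresholds and example accounts here are hypothetical.

from dataclasses import dataclass

@dataclass
class Account:
    handle: str
    tweets_per_day: float   # average posting rate
    account_age_days: int
    followers: int
    following: int

def bot_score(a: Account) -> float:
    """Crude 0-1 score: higher means more bot-like."""
    score = 0.0
    if a.tweets_per_day > 50:            # superhuman posting rate
        score += 0.4
    if a.account_age_days < 30:          # very new account
        score += 0.3
    if a.following > 0 and a.followers / a.following < 0.01:
        score += 0.3                     # follows many, followed by few
    return score

def filter_likely_humans(accounts, threshold=0.5):
    """Keep only accounts scoring below the bot threshold."""
    return [a for a in accounts if bot_score(a) < threshold]

accounts = [
    Account("news_fan", 8, 2000, 340, 400),
    Account("spam4u", 120, 10, 3, 5000),
]
humans = filter_likely_humans(accounts)
print([a.handle for a in humans])  # → ['news_fan']
```

The study’s key finding is what happens after a filter like this is applied: when the bot-flagged accounts were removed, false news still spread faster and farther, which is how the researchers concluded that humans drive the virality.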

Not only that, but the false news was not spreading chiefly from accounts with huge numbers of followers; the accounts purveying it tended to have fewer followers but more novel messages. The researchers determined that novelty plays a role in the spread of false news. (The paper is open access.)

That makes sense. For example, “The dental association recommends brushing after every meal” would be less likely to go viral than “New research shows that toothpaste causes cancer.”

The fact that humans outdo bots in spreading false news creates a huge practical problem for would-be reformers. If they want to stamp out false news, banning bots from social media would be less effective than banning people. Maybe public shaming of frequent false news purveyors would work better.


Further reading on social media:

Does democracy demand a war on Twitterbots?

Social media censorship?: Governments weigh the options. The United States may be going in the opposite direction from other Western countries.

Facebook Moderators Are Not Who We Think. Companies offer terrible working conditions partly because they think AI will just take over soon. And if that doesn’t—and perhaps can’t—happen, what’s the backup plan? Lawsuits?

Facebook’s secret censorship rules expose a key problem: Most moderators are not skilled and have only a few seconds to decide on a post

and

No, Twitter is not the New Awful. It’s the Old Awful, back for more. It’s the Town Without Pity we all tried to get away from


Denyse O'Leary

Denyse O'Leary is a freelance journalist based in Victoria, Canada. Specializing in faith and science issues, she is co-author, with neuroscientist Mario Beauregard, of The Spiritual Brain: A Neuroscientist's Case for the Existence of the Soul; and with neurosurgeon Michael Egnor of the forthcoming The Human Soul: What Neuroscience Shows Us about the Brain, the Mind, and the Difference Between the Two (Worthy, 2025). She received her degree in honors English language and literature.
