
Why Do Some People Try To Poison Big Tech’s Data Well?

Some social media users confuse Big Tech about their interests so as to preserve privacy and rein in relentless marketing campaigns

Here’s an article on a theme you probably didn’t expect to read about in a top-tier tech magazine: how to poison the data Big Tech collects about you. It’s certainly evidence of the growing discontent with Monopoly Power and Big Surveillance:

Now researchers at Northwestern University are suggesting new ways to redress this power imbalance by treating our collective data as a bargaining chip. Tech giants may have fancy algorithms at their disposal, but they are meaningless without enough of the right data to train on.

Karen Hao, “How to poison the data that Big Tech uses to surveil you” at Technology Review (March 5, 2021)

Researchers Nicholas Vincent and Hanlin Li presented a paper at the recent Association for Computing Machinery’s Fairness, Accountability, and Transparency conference, offering three ways to frustrate Big Tech’s endless surveillance: data strikes, data poisoning, and conscious data contribution.

Data strikes: Withholding or deleting data either via privacy tools or privacy laws.

Data poisoning: Making data useless: “For instance, someone who dislikes pop music might use an online music platform to play a playlist of pop music when they step away from their device with the intention of ‘tricking’ a recommender system into using their data to recommend pop music to similar pop-hating users.” (Vincent and Li)

Conscious data contribution: “In CDC, instead of deleting, withholding, or poisoning data, people give their data to an organization that they support to increase market competition as a source of leverage.” (Vincent and Li) Here’s an example of conscious data contribution:

There may have already been a few examples of this. In January, millions of users deleted their WhatsApp accounts and moved to competitors like Signal and Telegram after Facebook announced that it would begin sharing WhatsApp data with the rest of the company. The exodus caused Facebook to delay its policy changes.

Karen Hao, “How to poison the data that Big Tech uses to surveil you” at Technology Review (March 5, 2021)

Of the three strategies, the one most likely to stand out is data poisoning. Hao mentions AdNauseam, a free browser extension which, in its developers’ words, “quietly clicks on every blocked ad, registering a visit on ad networks’ databases. As the data gathered shows an omnivorous click-stream, user tracking, targeting and surveillance become futile.” A paper sets out the strategy.
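
To make the idea concrete, here is a toy simulation in Python (not AdNauseam’s actual code; the category names and click counts are invented) of how an “omnivorous” click-stream drowns out the genuine interests an ad network might otherwise infer:

```python
# Toy simulation: how indiscriminate clicks dilute an interest profile
# inferred from a click-stream. Categories and counts are made up.
import random
from collections import Counter

CATEGORIES = ["politics", "sports", "cooking", "travel", "finance", "gaming"]

def inferred_profile(clicks):
    """Share of clicks per category -- a stand-in for an ad network's interest model."""
    counts = Counter(clicks)
    total = sum(counts.values())
    return {cat: round(counts[cat] / total, 2) for cat in CATEGORIES if cat in counts}

# A user who genuinely clicks only on cooking and travel ads.
real_clicks = random.choices(["cooking", "travel"], k=50)

# "Omnivorous" noise: an extension silently registers a click on every
# blocked ad, regardless of category, swamping the genuine signal.
noise_clicks = random.choices(CATEGORIES, k=500)

print("Profile from real clicks only:  ", inferred_profile(real_clicks))
print("Profile with noise clicks added:", inferred_profile(real_clicks + noise_clicks))
```

In this toy setup, a few hundred indiscriminate clicks flatten the inferred profile toward uniform, and the genuine cooking-and-travel signal is no longer distinguishable.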

On March 23, 2016, data poisoners wrecked Microsoft’s chatbot Tay, we are told:

In less than 24 hours online, Microsoft’s chatbot had turned into a monster, spouting racist and sexist messages. But AI experts say that there were many ways this problem could have been avoided…

Bruce Wilcox, director of natural language strategy at Kore, said that Microsoft “made a classic mistake that has been seen before, and shouldn’t have been seen again when they built Tay.”

“You don’t just allow unfiltered information from random users on the internet to be regurgitated back,” Wilcox said. What it means, he said, is that “all of the trollers on the internet will be saying, ‘look, here’s an opportunity, let’s play with this’–and that’s exactly what happened.”

Hope Reese, “How the Microsoft Tay chatbot debacle could have been prevented with better AI” at TechRepublic

The solution offered at TechRepublic is “better filters.” But perhaps a pause for recognition is in order: Microsoft was itself engaged in manipulation, creating a bot meant to sound like a human being. The trolls merely added their own spin to the deception.

Microsoft can, of course, lock out the trolls so that, hereafter, the deception is Microsoft’s alone. Meanwhile, the firm was apologizing all over town for what happened. From ZDNet, priceless: “Microsoft predicted 2016 would be the year of the bot, but apparently it didn’t foresee that the internet would inevitably attempt to hijack it. As one user quipped: ‘Stop deleting the genocidal Tay tweets @Microsoft, let it serve as a reminder of the dangers of AI.’”

However, most internet users are not trying to fix anyone; they just want shelter from the global adstorm, complete with a giant vacuum sucking up their data, much as Tay sucked up and regurgitated material offered through social media.

Perhaps the critical problem going forward is that we internet users do not yet see that we are really providing the labor that produces the wealth of the Big Tech companies:

There’s a prevailing assumption that companies like Google and Facebook are solely responsible for developing the A.I. systems that power their respective search engine or social media service, he said. People should realize, however, that these A.I. technologies derive their capabilities from the labor of users whose online behaviors help improve the software each day.

Jonathan Vanian and Jeremy Kahn, “Your data is a weapon that can help change corporate behavior” at Fortune (February 23, 2021)

We take the data grab for granted, and maybe we shouldn’t. Once we realize the implications, we can start calling more of the shots.

Note: The term “data poisoning” also has a technical meaning in the data collection industry: “an adversarial attack that tries to manipulate the training dataset in order to control the prediction behavior of a trained model such that the model will label malicious examples into a desired class (e.g., labeling spam e-mails as safe).”
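
In that technical sense, the attacker typically controls part of the training set and mislabels it. A minimal sketch of such a label-flipping attack, assuming scikit-learn and an invented two-feature “spam filter” dataset, might look like this:

```python
# Minimal sketch of label-flipping data poisoning in the technical sense above:
# an attacker mislabels part of the training set so the trained model
# misclassifies chosen inputs (e.g., spam marked "safe").
# The data and feature scheme are invented for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_messages(n, spam):
    """Two invented features per message, e.g. (link count, ALL-CAPS word count)."""
    center = (4.0, 5.0) if spam else (0.5, 0.5)
    return rng.normal(center, 1.0, size=(n, 2))

# Clean training data, correctly labeled (1 = spam, 0 = safe).
X_clean = np.vstack([make_messages(200, True), make_messages(200, False)])
y_clean = np.hstack([np.ones(200, int), np.zeros(200, int)])

# The attacker injects spam-like messages deliberately labeled "safe".
X_poison = make_messages(300, spam=True)
y_poison = np.zeros(300, int)

# Held-out test set, correctly labeled.
X_test = np.vstack([make_messages(100, True), make_messages(100, False)])
y_test = np.hstack([np.ones(100, int), np.zeros(100, int)])

for name, X, y in [
    ("clean", X_clean, y_clean),
    ("poisoned", np.vstack([X_clean, X_poison]), np.hstack([y_clean, y_poison])),
]:
    model = LogisticRegression().fit(X, y)
    spam_recall = model.score(X_test[y_test == 1], y_test[y_test == 1])
    print(f"{name} training set: spam caught {spam_recall:.0%} of the time")
```

In this toy setup, the classifier trained on the clean data catches nearly all test spam, while the one trained on the poisoned data lets much of it through as “safe,” which is exactly the behavior the attacker wanted.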


You may also wish to read:

Does government watch us on social media? Yes… so does business. They may all be getting to know you way better than you feel comfortable with. (Denyse Simon)

and

Arizona fights back against Big Tech app store monopoly. North Dakota’s anti-monopoly legislation was defeated but Arizona’s passed. State legislatures are beginning to look critically at the way Google’s and Apple’s app stores enforce a very profitable monopoly.

