The high-profile worry about superintelligent AI taking over and getting rid of us humans distracts our attention from a real-world fact: artificial intelligence (AI) maximizes the opportunities of corporate and government surveillance while crashing its costs. Both have grown massively in recent years, with predictable results. The surveillants don’t by any means want to get rid of us. They want to take over and run our lives, ostensibly for our own good but certainly for theirs. We’ll have plenty more to say on this later, so for now, a few examples (in no way an exhaustive list):
Is Google, the world’s second largest company after Amazon, reading your mail? From the Wall Street Journal, “A year ago, Google’s Gmail said it stopped its own practice of scanning users’ inboxes to personalize ads. But it still allows outside app developers to scan inboxes, according to a Wall Street Journal report.” Would that include Amazon? Yes. And the others? Well, the app need only be “legitimate,” and technically viable. That’s a floor-level bar for a big company.
To judge from a recent Congressional hearing, Facebook seems too drunk with the power of “artificial intelligence” to notice some of the company’s missteps. From Sarah Jeong at The Verge:
“Over the course of an accumulated 10 hours spread out over two days of hearings, Mark Zuckerberg dodged question after question by citing the power of artificial intelligence. Moderating hate speech? AI will fix it. Terrorist content and recruitment? AI again. Fake accounts? AI. Russian misinformation? AI. Racially discriminatory ads? AI. Security? AI. It’s not even entirely clear what Zuckerberg means by ‘AI’ here. He repeatedly brought up how Facebook’s detection systems automatically take down 99 percent of ‘terrorist content’ before any kind of flagging. In 2017, Facebook announced that it was ‘experimenting’ with AI to detect language that ‘might be advocating for terrorism’ — presumably a deep learning technique. It’s not clear that deep learning is actually part of Facebook’s automated system.” More.
Maybe you expected that. It’s the Internet, after all. But did you know that Walmart now scans patients’ prescription history for possible opioid abuse? As science writer Josh Bloom puts it,
It would seem that Walmart wants to know if you are taking Valium, which kills (on its own) approximately zero people per year, or Ritalin, but will cheerfully sell enormous quantities of alcohol, which is responsible for 88,000 deaths per year.
This is really awful. First, if you use an MME (morphine milligram equivalent) calculator, it quickly becomes obvious that Walmart is not talking about addicts who are taking huge doses of opioids. But that doesn’t stop the company from treating people that way. And the dose doesn’t have to be much to draw scrutiny. More.
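To see why modest prescriptions land well below the danger zone, here is a minimal sketch of the arithmetic an MME calculator performs. The conversion factors are the published CDC values; the function name and example prescription are my own illustration, not Walmart’s actual system.

```python
# Sketch of MME (morphine milligram equivalent) arithmetic.
# Conversion factors per CDC guidance: daily MME = mg/day x factor.
CONVERSION_FACTORS = {
    "morphine": 1.0,
    "oxycodone": 1.5,
    "hydrocodone": 1.0,
    "codeine": 0.15,
    "hydromorphone": 4.0,
}

def daily_mme(drug: str, mg_per_dose: float, doses_per_day: int) -> float:
    """Total daily dose expressed in morphine milligram equivalents."""
    return mg_per_dose * doses_per_day * CONVERSION_FACTORS[drug]

# A typical post-surgical prescription: 5 mg oxycodone, four times a day.
print(daily_mme("oxycodone", 5, 4))  # 30.0 MME/day
```

At 30 MME per day, such a patient is far under the 90 MME/day level that CDC guidance flags as high-dose, which is the point Bloom is making: the people being scrutinized are mostly not taking huge doses.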
And the personalities behind these surveillance efforts are not advanced artificial entities but the usual suspects, armed with the usual good intentions. Incidentally, addiction counselors take it as a given that the addict must want to stop using; spying and sanctions, whether by family or software programs, simply erode trust and promote deception.
Ground Zero for AI surveillance is, of course, China. Columnist Jonah Goldberg wrote recently, “By 2020, the government will fully implement a ‘social credit score’ system that will use artificial intelligence and facial recognition technology to monitor, reward and punish virtually every kind of activity based upon ideological criteria — chiefly, loyalty to the state.”
From the New York Times, we learn that China is using AI to police what people and companies with a connection to China are saying about it all over the world.
But this is a country where, as we learn at The Verge, cars will now be fitted with mandatory chips, and no one believes that that is merely to study and remedy urban traffic jams:
James Andrew Lewis, a senior vice president at the Center for Strategic and International Studies, thinks it’s likely that the RFID system will become another one of these tools that the government uses to monitor citizens.
“The Chinese government has gone all out to create a real surveillance state. [There’s] social credit, and facial recognition, and internet and telecom monitoring,” he tells The Verge. “It’s part of this larger effort to create total information awareness in China for the government.” More.
The Chinese government keyboard has two keys: Control and Delete. But in a world in which few things are certain, we can be sure of one thing: citizens will find a way around that.
It can’t happen here, right? Okay, maybe that stuff can’t happen in North America, or not just now. But recently I tried to buy a summer skirt, casually viewing a couple of online shopping sites in Canada. For weeks afterward, almost every unrelated site I visited—originating wherever—featured ads for summer skirts from Canada… One thing the algorithm didn’t know was that I had already bought a (currently) untraceable skirt at the Value Village some days earlier.
Update: As of November 2, 2018, it has been snowing and I am still seeing ads for summer skirts from all over the world…
In this case, the only consequence was useless digital noise directed at me, noise that would have been impossible in the past due to higher labor and materials costs. And yes, it is currently harmless. But that’s only because whether I buy a new skirt or keep wearing out the old one doesn’t matter much. The same omnipresent technology is available for any use its owners deem worthwhile. Based on current cultural trends, many in positions of influence will surely see AI as a way of addressing incorrect, divisive, or non-expert-approved opinion. Under the right conditions, the technologies will multiply like weeds. Stay tuned.
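The mechanism behind those persistent skirt ads is simple enough to sketch. This toy model (all names hypothetical; real ad-tech stacks are vastly more elaborate) shows why the ads kept coming months after the purchase: the tracker logs browsing events keyed to a cookie, but it has no signal at all about an offline, cash-register sale.

```python
from datetime import date

# Toy model of ad retargeting: product views are logged against a
# browser cookie, and ad slots on unrelated sites query that log.
viewed = {}  # cookie_id -> list of (product, date_viewed)

def track_view(cookie_id, product, day):
    """Record that this browser looked at a product on a given day."""
    viewed.setdefault(cookie_id, []).append((product, day))

def pick_ad(cookie_id, today, window_days=180):
    """Re-serve an ad for anything viewed within the window.
    Note what is missing: any notion of whether the shopper
    already bought the item somewhere the tracker cannot see."""
    for product, day in viewed.get(cookie_id, []):
        if (today - day).days <= window_days:
            return f"ad for {product}"
    return "generic ad"

track_view("cookie-123", "summer skirt", date(2018, 7, 1))
# A snowy November day, an untraceable skirt already bought in person --
# and the algorithm, blind to both, serves the same ad anyway:
print(pick_ad("cookie-123", date(2018, 11, 2)))  # ad for summer skirt
```

The design choice worth noticing is that the system optimizes for recall within a time window, not for knowing when to stop; silence from the shopper is read as an unfinished sale, not a finished one.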