Opposition is growing in the Western world to routine government use of facial recognition (FR) technologies. But it takes different forms in different places. According to a draft paper, the European Commission (of the European Union), for example, wants a temporary ban of maybe five years to “give researchers and policymakers time to study the technology and figure out how best to regulate it.” (Technology Review)
Technology Review supports the ban:
Is a temporary ban a good idea? Yes, especially given the breakneck pace at which the technology is being deployed in Europe, by everyone from police forces to supermarkets. Recently, both France and Sweden stopped schools from installing facial recognition on their grounds. And taking the time to assess the impacts of the technology is safer than undoing what has already been done.

– TechPolicy, “The EU might ban facial recognition in public for five years” at Technology Review
Last year, San Francisco banned facial recognition technology by police and city agencies. Some hope it’s only a temporary ban:
Daniel Castro, vice president of the industry-backed Information Technology and Innovation Foundation, also says the San Francisco ordinance is a poor model for other U.S. cities.
“They’re saying, let’s basically ban the technology across the board, and that’s what seems extreme, because there are many uses of the technology that are perfectly appropriate,” Castro told NPR. “We want to use the technology to find missing elderly adults. We want to use it to fight sex trafficking. We want to use it to quickly identify a suspect in case of a terrorist attack. These are very reasonable uses of the technology, and so to ban it wholesale is a very extreme reaction to a technology that many people are just now beginning to understand.”

– Shannon Van Sant, “San Francisco Approves Ban On Government’s Use Of Facial Recognition Technology” at NPR
Facial recognition technology is doubtless useful in the situations Castro identifies but it tends to misidentify members of minority groups: “Facial recognition software is particularly bad at recognizing African Americans and other ethnic minorities, women, and young people, often misidentifying or failing to identify them, disparately impacting certain groups.” (Electronic Frontier Foundation)
Minority groups were poorly represented, however unintentionally, in the masses of data gathered for machine learning over the years, but they have now begun to push back against bearing the consequences of such industry errors.
A broad national campaign is also forming against facial recognition on university campuses, even though FR has not made much headway there in the past:
The national campaign targets students of diverse political persuasions. For instance, it suggests forming coalitions with college groups ranging from the far-left Young Democratic Socialists of America to the libertarian/conservative Young Americans for Liberty. “There is distrust in the system and its sensibility to act in the best interests of society across the spectrum,” [Erika] Darragh says. “So [opposition to] automating law enforcement via facial recognition is something that a lot of people can come together on.”

– Sean Captain, “‘This is a racial justice issue’: Students organize to stop facial recognition on campus” at Fast Company
Some in the tech industry have also expressed qualms about helping to create the total surveillance society, a recent example being Microsoft:
Company president Brad Smith has revealed that the tech giant recently turned down a request from law enforcement to equip officers’ cars and body cameras with face recognition tech. The California department apparently wanted to run a scan every time an officer pulls anyone over.
Smith said Microsoft rejected the contract due to human rights concerns — it believes the technology’s use for that particular purpose could lead to a disproportionately large number of women and minorities being held for questioning.

– Mariella Moon, “Microsoft didn’t want to sell its facial recognition tech to California police” at Engadget (April 7, 2019)
However, Microsoft has a different approach to doing business in China:
Last week, the Financial Times reported that Microsoft Research Asia worked with a university associated with the Chinese military on facial recognition tech that is being used to monitor the nation’s population of Uighur Muslims. Up to 500,000 members of the group, primarily in western China, were monitored over the course of a month, according to a New York Times report…
Just as people often mention The Terminator in reference to autonomous weaponry worst-case scenarios, [Microsoft President Brad] Smith repeatedly invokes 1984 in reference to surveillance state fears. But it’s tough to reconcile how Microsoft is at once in favor of protecting human rights in California while being complicit in violations in China. Likewise, it’s hard to square how Microsoft insists that facial recognition systems be fair but opposes a moratorium that makes fairness an obligation before deployment.

– Khari Johnson, “Microsoft’s confusing facial recognition policy, from China to California” at VentureBeat (April 18, 2019)
Microsoft cannot be ignorant of the fact that facial recognition aids the government of China in identifying members of religious and racial minorities that are subject to persecution. Or that dissent from such policies is not permitted there, as it is in the United States.
In Hong Kong, where Beijing seeks to impose its authority well before 2047, the end of the agreed transition period, pro-democracy protesters, accustomed to a level of freedom more like that of the United States, use umbrellas, face masks, and laser pointers to thwart facial recognition technology.
Perhaps some tech moguls gamble that they can install a disruptive technology—without much discussion of its purpose, implications, or foreseeable results—and not be held morally accountable. The historical record for getting away scot-free in such cases is mixed.
See also: How to fool face recognition. Changing a couple of pixels here and there can stump a computer.