In February, Canada released the findings of a year-long investigation into the US-based facial recognition app, Clearview AI. The investigation declared Clearview’s actions illegal within Canada and ordered the company to cease operations within the country and to remove all Canadian citizens from its database.
“What Clearview does is mass surveillance, and it is illegal,” said Canadian Privacy Commissioner Daniel Therrien.
So what is Clearview AI? And why has it raised the ire of our neighbors to the North?
Clearview AI, Inc.
Clearview AI is a facial recognition company marketed primarily to law enforcement agencies. It boasts a database of over 3 billion facial images “scraped” from public sources online such as news articles and social media sites.
In other words, if your image can be pulled up via a Google search, your face is more than likely stored in Clearview’s database already. According to New York Times reporter Kashmir Hill, this “goes far beyond anything ever constructed by the United States government or Silicon Valley giants.”
The app was created by Hoan Ton-That, a New York-based app developer in his early thirties. In 2016, Ton-That joined forces with Richard Schwartz, a former aide to Rudy Giuliani, on a facial recognition technology venture. Ton-That began developing the app without a target customer base in mind; eventually he landed on law enforcement.
“We believe law enforcement should have the very best cutting-edge technology available to help investigate and solve crimes,” says the website.
The general public knew little about the app until January 2020 when The New York Times published an exposé. Soon afterward, Canada’s Office of the Privacy Commissioner (in partnership with other offices) launched an investigation into Clearview and its activity within Canada.
According to the 2020 article, over 600 law enforcement agencies had used the app in the previous year. In a more recent article published this year, The New York Times reported that Clearview AI is “now used by over 2,400 U.S. law enforcement agencies.”
The agencies have hailed the app as a huge success. At both the federal and state levels, it has been used to identify more criminals and solve more cases. Take, for example, this story from the 2020 exposé:
In February (2019), the Indiana State Police started experimenting with Clearview. They solved a case within 20 minutes of using the app. Two men had gotten into a fight in a park, and it ended when one shot the other in the stomach. A bystander recorded the crime on a phone, so the police had a still of the gunman’s face to run through Clearview’s app.
They immediately got a match: The man appeared in a video that someone had posted on social media, and his name was included in a caption on the video. “He did not have a driver’s license and hadn’t been arrested as an adult, so he wasn’t in government databases,” said Chuck Cohen, an Indiana State Police captain at the time.
The man was arrested and charged; Mr. Cohen said he probably wouldn’t have been identified without the ability to search social media for his face.
Kashmir Hill, “The Secretive Company That Might End Privacy as We Know It” (New York Times, 2020)
Clearview’s billions of images are “sourced from public-only web sources, including news media, mugshot websites, public social media, and other open sources.”
Those public social media platforms include Facebook, Twitter, Instagram, YouTube — even the popular mobile payment service Venmo. All of these sites have express prohibitions against this practice of “scraping.” Twitter goes a step further than the others and bans the use of its data for facial recognition.
“A lot of people are doing it,” says Ton-That of his company’s scraping practices. “Facebook knows.”
Google, Facebook, Twitter, YouTube, and LinkedIn have all issued cease-and-desist letters to Clearview, stating that the practice of “scraping” violates their terms of service. Clearview contends that it has a First Amendment right to scrape.
Concerns and Dangers
Clearview’s system is untested. No one knows whether it can adequately secure the data it collects and creates. And even if it can, the question remains whether the company itself can be trusted.
Clearview has shrouded itself in secrecy, avoiding debate about its boundary-pushing technology. When I began looking into the company in November, its website was a bare page showing a nonexistent Manhattan address as its place of business. The company’s one employee listed on LinkedIn, a sales manager named “John Good,” turned out to be Mr. Ton-That, using a fake name. For a month, people affiliated with the company would not return my emails or phone calls.
While the company was dodging me, it was also monitoring me. At my request, a number of police officers had run my photo through the Clearview app. They soon received phone calls from company representatives asking if they were talking to the media — a sign that Clearview has the ability and, in this case, the appetite to monitor whom law enforcement is searching for.
… Because the police upload photos of people they’re trying to identify, Clearview possesses a growing database of individuals who have attracted attention from law enforcement. The company also has the ability to manipulate the results that the police see. After the company realized I was asking officers to run my photo through the app, my face was flagged by Clearview’s systems and for a while showed no matches. When asked about this, Mr. Ton-That laughed and called it a “software bug.”
Kashmir Hill, “The Secretive Company That Might End Privacy as We Know It” (New York Times, 2020)
“It’s creepy what they’re doing,” says Stanford Law School privacy professor Al Gidari, “but there will be many more of these companies. There is no monopoly on math. Absent a very strong federal privacy law, we’re all screwed.”
In addition to a database that makes searching for faces as easy as searching for information on Google, Clearview has also developed software that pairs its technology with augmented-reality glasses. Glasses capable of real-time facial recognition are already being used by Chinese law enforcement within the context of an authoritarian regime and a chilling social credit system.
Woodrow Hartzog, a professor of law and computer science at Boston’s Northeastern University, says that the “dams are breaking” in efforts to prevent industries from embracing this kind of technology. “I don’t see a future where we harness the benefits of facial recognition technology without the crippling abuse of the surveillance that comes with it. The only way to stop it is to ban it.”
Canada Bans Clearview
In Canada, 48 Clearview AI accounts had been created for law enforcement agencies, including the Royal Canadian Mounted Police. In the official Report of Findings released in February after a year-long joint investigation, Canada concluded that the company has violated Canada’s privacy laws by acquiring and using people’s personal data without consent.
“What Clearview does is mass surveillance, and it is illegal,” said Canadian Privacy Commissioner Daniel Therrien. He has accused Clearview of subjecting society to a perpetual “police lineup.”
Clearview maintains that it has broken no laws or policies. The company argues that since all information was collected from public websites, it’s no different from Google — a large search engine of already-public information.
According to Clearview lawyer Doug Mitchell, “Clearview AI only collects public information from the Internet which is explicitly permitted. Clearview AI is a search engine that collects public data just as much larger companies do, including Google, which is permitted to operate in Canada.”
The report recommends that Clearview obey three commands: (1) cease offering its services to Canadian clients; (2) cease collecting and using images of Canadian citizens; and (3) delete all images of any Canadian citizens from its database.
Clearview voluntarily withdrew its services from Canada last summer, during the investigation, but the company has no plans to undertake the tedious (if not impossible) task of determining who among its billions of faces is Canadian for the purpose of removal.
According to The New York Times, Ton-That is “eager” to challenge Canada’s conclusions in court.
“This is a simple issue of public information and who has access to it and why,” said Ton-That. “We don’t want a world where it’s just Google and a few other tech companies accessing public information.”
Challenges to Clearview in the States
In the United States, a bill with bipartisan support was introduced in the Senate last week. Titled “The Fourth Amendment Is Not for Sale Act,” the bill seeks to close “the legal loophole that allows data brokers to sell Americans’ personal information to law enforcement and intelligence agencies without any court oversight.”
The bill would require law enforcement agencies to “obtain a court order before accessing people’s personal information through third-party brokers.” If the bill passes, Clearview AI would fall within its reach.
Senators Ron Wyden (D-OR), Rand Paul (R-KY), Patrick Leahy (D-VT), and Mike Lee (R-UT) are among the bill’s sponsors.
In addition, over 70 advocacy groups recently asked the Department of Homeland Security to stop using Clearview AI’s facial recognition software. Among those groups are the ACLU, Fight for the Future, OpenMedia, and Electronic Frontier Foundation.
“The undersigned organizations have serious concerns about the federal government’s use of facial recognition technology provided by private company Clearview AI,” reads the letter. “We request that the Department immediately stop using Clearview AI…”
Last year the ACLU sued Clearview AI under Illinois’ Biometric Information Privacy Act (BIPA), which requires companies that collect biometric information to first obtain permission. That lawsuit appears to be ongoing.
According to The New York Times, Australia and the UK are following Canada’s example and launching their own investigations into Clearview AI.
Ton-That has no plans to release his technology to the public. “There’s always going to be a community of bad people who will misuse it,” he says. He maintains confidence that he is using this technology in the best possible way, and, despite arguments to the contrary, that his company is protected under the First Amendment’s free speech clause.