US police used Clearview AI almost one million times

The founder of Clearview AI, a facial recognition company, told the BBC that US law enforcement has run approximately one million searches using its software. CEO Hoan Ton-That also said Clearview has amassed 30 billion images scraped without users’ consent from platforms such as Facebook.

The company has been fined multiple times in Europe and Australia for privacy violations. Critics have voiced concerns that Clearview’s use by law enforcement creates a perpetual police line-up for the public.

“Whenever they have a photo of a suspect, they will compare it to your face,” says Matthew Guaragilia of the Electronic Frontier Foundation. “It’s far too invasive.”

The figure of one million searches has not been confirmed by police, but Clearview says it is the approximate number of searches US law enforcement has run using its software.

Miami Police, in a rare acknowledgment, confirmed to the BBC that it uses Clearview’s software to investigate all types of crimes.

Clearview’s facial recognition system allows law enforcement agencies to upload a photo of a face and search for matches in a database of billions of images it has accumulated. The software then offers links to where similar images can be found online.
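Clearview has not published how its matching works, but reverse-image search of this kind is typically built on nearest-neighbour search: each face photo is reduced to a numeric “embedding”, and a probe image is compared against the database by similarity score. As a rough illustration only (the embeddings, dimensions, and scores below are invented, not Clearview’s), a cosine-similarity lookup might look like:

```python
import math

def normalise(vec):
    """Scale a vector to unit length."""
    n = math.sqrt(sum(x * x for x in vec))
    return [x / n for x in vec]

def cosine(a, b):
    """Cosine similarity between two embeddings (1.0 = identical direction)."""
    return sum(x * y for x, y in zip(normalise(a), normalise(b)))

def search(database, probe, top_k=3):
    """Rank database entries by similarity to the probe embedding."""
    scored = sorted(
        ((cosine(vec, probe), idx) for idx, vec in enumerate(database)),
        reverse=True,
    )
    return scored[:top_k]

# Toy "database" of 4-dimensional face embeddings (entirely made up).
db = [
    [0.9, 0.1, 0.0, 0.2],
    [0.1, 0.8, 0.3, 0.0],
    [0.0, 0.2, 0.9, 0.1],
    [0.3, 0.0, 0.1, 0.9],
]
# Probe photo: a noisy copy of face 2's embedding.
probe = [0.05, 0.25, 0.85, 0.12]
best_score, best_idx = search(db, probe)[0]
print(best_idx)  # → 2
```

A production system would index billions of embeddings with an approximate nearest-neighbour structure rather than a linear scan, but the ranking principle is the same.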

Clearview is widely regarded as one of the most powerful and accurate facial recognition companies in the world.

Clearview AI is barred from selling its services to most US companies as a result of a privacy case brought by the American Civil Liberties Union (ACLU) under Illinois law.

However, there is an exception for law enforcement agencies, and Mr Ton-That says hundreds of police departments across the United States use his software.

While some US cities such as Portland, San Francisco, and Seattle have banned the software, police do not generally disclose whether they use facial recognition technology. Law enforcement agencies have typically justified their use of facial recognition for serious or violent crimes.

Miami Police, in a rare on-the-record interview about Clearview AI’s effectiveness, said it uses the software to investigate every type of crime, from shoplifting to murder.

Assistant Chief of Police Armando Aguilar disclosed that his team used the system about 450 times annually, and that it had helped solve several murders.

Nonetheless, opponents of facial recognition technology maintain that there are almost no regulations governing its use by law enforcement.

Mr Aguilar says Miami Police treat a facial recognition match like a tip. “We don’t make an arrest because an algorithm tells us to,” he says. “We either put that name in a photographic line-up or we go about solving the case through traditional means.”

Facial recognition technology has been linked to several cases of mistaken identity by the police, although the true extent is difficult to determine due to the lack of transparency around its use.

While Mr Ton-That claims there have been no cases of mistaken identity involving Clearview’s technology, he acknowledges that police have made wrongful arrests as a result of “poor policing”.

Clearview cites research showing near-100% accuracy, but those figures are often based on mugshots; in practice, accuracy depends on the quality of the image fed into the system.

Civil rights advocates are calling for police forces to publicly disclose their use of Clearview and for independent experts to scrutinize the algorithm.

Criminal defense lawyer Kaitlin Jackson, based in New York, campaigns against the police’s use of facial recognition technology.

“I think the truth is that the idea that this is incredibly accurate is wishful thinking,” she says. “There is no way to know that when you’re using images in the wild like screengrabs from CCTV.”

However, Mr Ton-That told the BBC he does not want to testify to the algorithm’s accuracy in court.

“We don’t really want to be in court testifying about the accuracy of the algorithm… because the investigators, they’re using other methods to also verify it,” he says.

According to Mr Ton-That, he has recently provided Clearview’s system to defence lawyers in specific cases, as he believes both prosecutors and defenders should have equal access to the technology.

In one case, Clearview was used to locate a crucial witness, leading to charges being dropped last year against Andrew Conlyn of Fort Myers, Florida.

In March 2017, Mr Conlyn was a passenger in a friend’s car that crashed into palm trees at high speed, killing the driver. A passer-by rescued Mr Conlyn from the wreckage but left without providing a statement.

Although Mr Conlyn said he had been the passenger, police suspected he had been driving and charged him with vehicular homicide. His lawyers had an image of the passer-by from police body-camera footage.

Just before his trial, Mr Ton-That permitted Clearview to be used in the case.

“This AI popped him up in like, three to five seconds,” Mr Conlyn’s defence lawyer, Christopher O’Brien, told the BBC. “It was phenomenal.”

The witness, Vince Ramirez, gave a statement saying he had pulled Mr Conlyn from the passenger seat. Shortly afterwards, the charges were dropped.

But even in cases where Clearview has been shown to work, some believe the technology comes at too high a price.

“Clearview is a private company that is making face prints of people based on their photos online without their consent,” says Mr Guaragilia.

“It’s a huge problem for civil liberties and civil rights, and it absolutely needs to be banned.”