Police Say a Simple Warning Will Prevent Face Recognition Wrongful Arrests. That's Just Not True.

Even when police heed warnings to take additional investigative steps, they exacerbate the unreliability of face recognition results.

Nathan Freed Wessler, Deputy Director, ACLU Speech, Privacy, and Technology Project

Face recognition technology in the hands of police is dangerous. Police departments across the country frequently use the technology to try to identify images of unknown suspects by comparing them to large photo databases, but it often fails to generate a correct match. And numerous studies have shown that face recognition technology misidentifies Black people and other people of color at higher rates than white people. To date, there have been at least seven documented wrongful arrests in the United States due to police reliance on incorrect face recognition results, and those are just the cases that have come to light. In nearly every one of those instances, the person wrongfully arrested was Black.

Supporters of police use of face recognition technology often portray these failures as unfortunate mistakes that are unlikely to recur. Yet they keep coming. Last year, six Detroit police officers showed up at the doorstep of a woman who was eight months pregnant and wrongfully arrested her in front of her children for a carjacking that she could not plausibly have committed. A month later, the prosecutor dismissed the case against her.

Police departments should be doing everything in their power to avoid wrongful arrests, which can turn people’s lives upside down and result in loss of work, inability to care for children, and other harmful consequences. So, what’s behind these repeated failures? As the ACLU explained in a recent submission to the federal government, there are multiple ways in which police use of face recognition technology goes wrong. Perhaps most glaring is that the most widely adopted police policy designed to avoid false arrests in this context simply does not work. Records from the wrongful arrest cases demonstrate why.

It has become standard practice among police departments and companies making this technology to warn officers that a result from a face recognition search does not constitute a positive identification of a suspect, and that additional investigation is necessary to develop the probable cause needed to obtain an arrest warrant. For example, the International Association of Chiefs of Police cautions that a face recognition search result is “a strong clue, and nothing more, which must then be corroborated against other facts and investigative findings before a person can be determined to be the subject whose identity is being sought.” The Detroit Police Department’s face recognition technology policy, adopted in September 2019, similarly states that a face recognition search result is only “an investigative lead and IS NOT TO BE CONSIDERED A POSITIVE IDENTIFICATION OF ANY SUBJECT. Any possible connection or involvement of any subject to the investigation must be determined through further investigation and investigative resources.”

Police departments across the country, from Los Angeles County to the Indiana State Police to the U.S. Department of Homeland Security, provide similar warnings. However ubiquitous, these warnings have failed to prevent harm.

We’ve seen police treat the face recognition result as a positive identification, ignoring or not understanding the warnings that face recognition technology is simply not reliable enough to provide a positive identification.

In Louisiana, for example, police relied solely on an incorrect face recognition search result from Clearview AI as purported probable cause for an arrest warrant. The officers did this even though the law enforcement agency signed a contract with the face recognition company acknowledging officers “must conduct further research in order to verify identities or other data generated by the [Clearview] system.” That overreliance led to Randal Quran Reid, a Georgia resident who had never even been to Louisiana, being wrongfully arrested for a crime he couldn’t have committed and held for nearly a week in jail.

In an Indiana investigation, police similarly obtained an arrest warrant based only upon an assertion that the detective “viewed the footage and utilized the Clearview AI software to positively identify the female suspect.” No additional confirmatory investigation was conducted.

But even when police do conduct additional investigative steps, those steps often compound the unreliability of face recognition searches. This is a particular problem when police move directly from a face recognition result to a witness identification procedure, such as a photographic lineup.

Face recognition technology is designed to generate a list of faces that are similar to the suspect’s image, but those faces often are not an actual match. When police think they have a match, they frequently ask a witness who saw the suspect to view a photo lineup consisting of the image derived from the face recognition search plus five “filler” photos of other people. Photo lineups have long been known to carry a high risk of misidentification. The addition of face recognition-generated images only makes it worse. Because the face recognition-generated image is likely to appear more similar to the suspect than the filler photos, there is a heightened chance that a witness will mistakenly choose that image out of the lineup, even though it is not a true match.

This problem has contributed to known cases of wrongful arrests, including the arrests of Porcha Woodruff, Michael Oliver, and Robert Williams by Detroit police (the ACLU represents Mr. Williams in a wrongful arrest lawsuit). In these cases, police obtained an arrest warrant based solely on the combination of a false match from face recognition technology and a false identification from a witness viewing a photo lineup that was constructed around the face recognition lead and five filler photos. Each of the witnesses chose the face recognition-derived false match, instead of deciding that the suspect did not, in fact, appear in the lineup.

A lawsuit filed earlier this year in Texas alleges that a similar series of failures led to the wrongful arrest of Harvey Eugene Murphy Jr. by Houston police. And in New Jersey, police wrongfully arrested Nijeer Parks in 2019 after face recognition technology incorrectly flagged him as a likely match to a shoplifting suspect. An officer who had seen the suspect (before he fled) viewed the face recognition result, and said he thought it matched his memory of the suspect’s face.

After the Detroit Police Department’s third wrongful arrest from face recognition technology became public last year, Detroit’s chief of police acknowledged the problem of erroneous face recognition results tainting subsequent witness identifications. He explained that by moving straight from face recognition result to lineup, “it is possible to taint the photo lineup by presenting a person who looks most like the suspect” but is not in fact the suspect. The Department’s policy, merely telling police that they should conduct “further investigation,” had not stopped police from engaging in this bad practice.

Because police have repeatedly proved unable or unwilling to follow face recognition searches with adequate independent investigation, police access to the technology must be strictly curtailed — and the best way to do this is through strong bans. More than 20 jurisdictions across the country, from Boston to Pittsburgh to San Francisco, have done just that, barring police from using this dangerous technology.

Boilerplate warnings have proven ineffective. We don’t know whether these warnings fail because of human cognitive bias toward trusting computer outputs, poor police training, incentives to close cases quickly, implicit racism, lack of consequences, the fallibility of witness identifications, or other factors. But if the experience of known wrongful arrests teaches us anything, it is that such warnings are woefully inadequate to protect against abuse.