How police are using face recognition to catch criminals; experts say that's bad for free speech

Imagine this scenario:

Police enter a football stadium and scan the crowd, searching for a man they believe is a terrorist.

To find him, they upload a photo into a video surveillance network equipped with face recognition technology. The network of cameras scans the stadium until an alert pops up: the algorithm has found a match in the crowd.

To everyone else, he looks like a devoted family man without a criminal record, but thanks to the algorithm, police know better. They rush into the crowd and take him into custody.

There’s only one problem: the man they arrested isn’t the terrorist they were looking for at all. He just resembles the suspect — a similar skin tone, a similar face shape. The software misidentified him, and their intended target is still on the loose.

Sound like science fiction?

It's not. In fact, face recognition technology is increasingly used by the FBI and by state and local law enforcement across the country, and mistakes have already led to the arrest of innocent people.

Sixteen states let the FBI use face recognition technology to compare the faces of suspected criminals to their driver’s license and ID photos, according to the Government Accountability Office.

At least one in four state or local police departments can run face recognition searches through its own or another agency's system. At least 26 states allow law enforcement to run or request searches against their databases of driver's license and ID photos, according to a 2016 report by the Center on Privacy and Technology at Georgetown Law.

In Utah, the FBI can search 5.2 million driver's license photos and mug shots.

Roughly half of American adults, some 117 million people, have photos that can be searched this way in law enforcement face recognition networks, according to the report.

Law enforcement officials say using face recognition makes it easier and more efficient to catch criminals and keep the country safe from terrorism.

But critics question the effectiveness of law enforcement using software known for accuracy problems, especially in distinguishing the faces of women, young people and people of color.

A 2018 MIT study found that when the person in the photo is a white man, face recognition software is right 99 percent of the time. But error rates climb as skin tone darkens, reaching nearly 35 percent for images of darker-skinned women.

In July, the American Civil Liberties Union tested “Rekognition,” Amazon’s facial recognition system. The ACLU scanned the faces of all 535 members of Congress against 25,000 public mug shots. The software incorrectly matched 28 members of Congress, identifying them as other people who have been arrested for a crime.

The false matches were disproportionately people of color, including six members of the Congressional Black Caucus. On the basis of the test, the ACLU urged Congress to join them in calling for a moratorium on law enforcement use of face surveillance.
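
To see why these error rates matter at scale, it helps to run the numbers. Below is a back-of-the-envelope calculation in Python; the error rates are the ones reported above, while the crowd size is an invented assumption used only for illustration.

```python
# Back-of-the-envelope math on the error rates reported above.
# The crowd size is a hypothetical assumption; the rates come from
# the 2018 MIT study and the ACLU Rekognition test cited above.

error_white_men = 0.01      # roughly 1 percent, per the MIT study
error_darker_women = 0.35   # nearly 35 percent, per the MIT study

crowd = 70_000              # hypothetical stadium crowd

print(f"expected errors among {crowd:,} white men: "
      f"{crowd * error_white_men:,.0f}")
print(f"expected errors among {crowd:,} darker-skinned women: "
      f"{crowd * error_darker_women:,.0f}")

# The ACLU test, using only the figures quoted above:
false_matches, scanned = 28, 535
print(f"ACLU false-match rate: {false_matches / scanned:.1%}")  # about 5.2%
```

Even a 1 percent error rate produces hundreds of mistakes in a single stadium scan, and a 35 percent rate produces tens of thousands; the ACLU's 28 false matches out of 535 faces work out to a rate above 5 percent.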

At a time when artificial intelligence is increasingly used as a tool to keep us safe, experts are beginning to sound the alarm: for all the benefits this technology provides, it could have serious implications for privacy, free speech, and racial and gender bias.

Some of the strongest advocates for increased government oversight of law enforcement's use of face recognition are from Utah, including Sen. Mike Lee and former Utah Rep. Jason Chaffetz.

"Like many technologies, used in the wrong hands or without appropriate parameters, it is ripe for abuse. It would be one thing if facial recognition technology were perfect or near perfect, but it clearly is not. Facial recognition technology does make mistakes,” said Chaffetz during a congressional hearing on facial recognition databases in 2017.

Criticism has even come from within the private companies that develop the technology. Brian Brackeen, CEO of software company Kairos, recently made headlines by saying the face recognition his firm develops is not yet ready for the burden of upholding the law.

“You’re seeing a new wave of skepticism about whether face surveillance will actually improve public safety, and whether or not we should be engaging with this system at all,” says Matt Cagle, technology and civil liberties attorney at the ACLU of Northern California.

'Chilling effect' on protest

Face recognition technology has been around for the past decade, but concerns about privacy and civil liberties have only recently received national attention, says Cagle. What has changed, he says, is that computing power has increased and the cost of storage has fallen, making face recognition more powerful and easier for government agencies to use.
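
In practice, cheaper storage and compute matter because these systems reduce each face to a numeric vector, known as an embedding, and answer a query by comparing a probe photo's vector against a stored gallery of millions. The following is a minimal sketch of that search step; the embeddings here are random stand-ins and the 0.6 threshold is an invented parameter, so this illustrates the general technique rather than any vendor's implementation.

```python
import numpy as np

def cosine_similarity(probe: np.ndarray, gallery: np.ndarray) -> np.ndarray:
    """Similarity of one probe vector against every row of a gallery matrix."""
    probe = probe / np.linalg.norm(probe)
    gallery = gallery / np.linalg.norm(gallery, axis=1, keepdims=True)
    return gallery @ probe

def search(probe: np.ndarray, gallery: np.ndarray, threshold: float):
    """Return (index, score) for every gallery face above the threshold."""
    scores = cosine_similarity(probe, gallery)
    hits = [(i, s) for i, s in enumerate(scores) if s >= threshold]
    return sorted(hits, key=lambda hit: -hit[1])

# Toy data standing in for real face embeddings (e.g., 128-dim vectors).
rng = np.random.default_rng(0)
gallery = rng.normal(size=(100_000, 128))              # 100,000 enrolled faces
probe = gallery[42] + rng.normal(scale=0.3, size=128)  # noisy re-capture

print(search(probe, gallery, threshold=0.6))  # should report index 42 only
```

The match threshold is where policy hides inside engineering: lower it, and the system returns more candidates, including more false matches of the kind described above.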

And while face recognition software is widely used by law enforcement as an investigative tool, there is a “complete absence of law in this space,” according to Clare Garvie, an associate with the Center on Privacy and Technology at Georgetown Law.

Other police investigative tools — such as wiretaps and fingerprints — are heavily regulated, requiring a judge to issue a warrant for use. But only a handful of state laws touch on face recognition, with no comprehensive legislation governing its use, according to Garvie.

The awareness that video cameras could be used not just to surveil people but to identify who is doing what at any given moment could have a “chilling effect” on protest, says Garvie. People may be less likely to march in the streets for an unpopular cause if they believe cameras are tracking who they are seen with and where they go.

Garvie compares the presence of police video cameras enabled with face recognition technology on every corner to the act of a police officer walking through a protest demanding to see the identification of everyone at the protest.

“We would be horrified and perfectly justified in not complying absent reasonable suspicion that we're doing something wrong,” she says. But face recognition cameras, she says, do the same thing, capturing faces and identifying people remotely and in secret.

“The impact this has on First Amendment activity is incredibly troubling and should not be understated,” she says.

Potential for racial bias

Other experts raise concerns about how face recognition technology can perpetuate or even exacerbate racial bias in law enforcement.

Experts say there’s no way to know exactly why the algorithms have more trouble identifying certain demographics, including younger people, women and people of color.

Alvaro Bedoya, founding executive director of the Center on Privacy and Technology at Georgetown Law and former chief counsel to the Senate Subcommittee on Privacy, says the problem is partly rooted in training data: the software is often taught to compare and recognize faces using data sets that contain disproportionately many white faces.

At an even deeper level, Bedoya says, “film and photography was developed in a way that was optimized for white faces.” He says that is why pictures of people with darker skin are more likely to be blurred or badly lit: the technology itself was designed to use light to best capture white faces.
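
One way to make Bedoya's point concrete is a toy simulation: calibrate a single match threshold on scores drawn mostly from one group, then apply it to a second group whose score distributions are shifted. Every number below is invented for illustration and describes no real system.

```python
import numpy as np

rng = np.random.default_rng(1)

# Invented matcher scores: "genuine" pairs (same person) score high,
# "impostor" pairs (different people) score low. The under-represented
# group B gets noisier, more overlapping scores.
def scores(n, genuine_mu, impostor_mu, sd):
    return rng.normal(genuine_mu, sd, n), rng.normal(impostor_mu, sd, n)

gen_a, imp_a = scores(9_000, genuine_mu=0.80, impostor_mu=0.30, sd=0.08)
gen_b, imp_b = scores(1_000, genuine_mu=0.70, impostor_mu=0.40, sd=0.12)

# Pick one threshold on the pooled, A-dominated data: the cutoff that
# keeps the overall false-match rate at 1 percent.
threshold = np.quantile(np.concatenate([imp_a, imp_b]), 0.99)

def false_match_rate(impostors):
    """Fraction of innocent comparisons wrongly flagged as matches."""
    return np.mean(impostors >= threshold)

def miss_rate(genuines):
    """Fraction of true matches the system fails to flag."""
    return np.mean(genuines < threshold)

print(f"threshold: {threshold:.2f}")
print(f"group A: false matches {false_match_rate(imp_a):.1%}, "
      f"misses {miss_rate(gen_a):.1%}")
print(f"group B: false matches {false_match_rate(imp_b):.1%}, "
      f"misses {miss_rate(gen_b):.1%}")
```

In this toy setup the pooled numbers look acceptable, yet almost all of the false matches land on the under-represented group, a simplified version of the dynamic the experts above describe.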

Experts say one of their biggest concerns about racial bias is the potential use of real-time face recognition in police body cameras.

Garvie says equipping police body cameras with face recognition technology would allow an officer walking down a crowded street to scan the passers-by and receive an alert indicating that he just passed someone wanted for aggravated assault.

“A reasonable officer in that position is going to draw his weapon and perhaps engage lethal force,” says Garvie, “but the system might have gotten it completely wrong.”

Though this isn’t happening yet, it’s on the horizon: more than half of law enforcement agencies across the country use Axon body cameras, to which face recognition capability could be added, according to Cagle.

Cagle says there is something ironic and deeply troubling about putting face recognition in police body cameras, which were originally intended to hold officers accountable and to give the public greater transparency into police encounters.

He says face recognition transforms the body camera from a tool for watching the police into a tool for tracking and targeting the public.

"Face surveillance should never be used with body cameras. It turns a tool for government accountability into a tool for government spying," says Cagle.

Potential solutions?

In its 2016 report “The Perpetual Line-up,” the Center on Privacy and Technology at Georgetown Law advocated increased oversight of law enforcement's use of face recognition technology, including a requirement that the FBI or police have at least a reasonable suspicion of criminal conduct before initiating a face recognition search.

Bedoya says the technology should never be allowed on body cameras, and should not be used by law enforcement until it can be proven to be accurate and not biased against certain demographics.

But he adds that the technology could help in a true national emergency, with the caveat that a national emergency be narrowly defined, and that all other uses require a warrant signed by a judge.

Others, including the ACLU, have gone further, calling for a complete moratorium on law enforcement use of face recognition technology.

"We think even accurate face surveillance poses a profound threat to civil liberties and is easily abused by law enforcement to further target people of color, silence activists, and violate the human rights of immigrants," says Cagle. "We can stop this now before it gets out of control."