Civil Libertarians Worry that Facial Recognition Tech Is Inaccurate, but Fears May Worsen After It’s Perfected

In September, California Governor Gavin Newsom signed the Body Camera Accountability Act, a law prohibiting the use of facial recognition and other biometric tracking software in police body cameras. It follows similar municipal measures in San Francisco and Oakland and is the first statewide ban of its kind in the country.

Supporters of the bill viewed it as necessary to safeguard civil liberties because facial recognition technologies currently available to law enforcement produce high rates of false positives, meaning that innocent people are too often misidentified as criminal suspects. In addition, they say that facial recognition could result in racially biased policing.

So far, concerns about racial bias have been well justified. Studies have found that facial recognition technology is less accurate at identifying individuals with darker skin, especially women of color. A July test of a facial recognition algorithm from Idemia, a France-based vendor whose software is used by U.S. Customs and Border Protection, found that black women were falsely matched 10 times more often than white women.

Another study tested how well Amazon’s “Rekognition” image analysis software could distinguish members of Congress from criminal suspects in a mugshot database and found it lacking. The American Civil Liberties Union concluded that the software misidentified 28 members of Congress, including a disproportionate number of Congressional Black Caucus members.

This finding, however, has not deterred Amazon from pitching Rekognition to federal agencies, nor has it slowed government agencies' contracting with facial recognition vendors. Instead, Amazon retorted that the ACLU's experiment was flawed because the software was left at its default 80 percent confidence threshold rather than the 99 percent level that Amazon recommends for law enforcement. This response, however, was inadequate.

As the ACLU pointed out, Amazon does not ask users how they plan to use its facial recognition technology. Absent guidelines regulating police use of the technology, nothing prevents law enforcement from running it at a low match threshold. Without safeguards for an imperfect technology, false positives are inevitable.
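The mechanics make the ACLU's point concrete. Below is a minimal sketch, using Amazon's boto3 Python SDK, of how a caller might search a face collection such as a mugshot database; the collection name and image paths are hypothetical, and AWS credentials are assumed to be configured. The match threshold is simply an optional parameter that defaults to 80 percent, so Amazon's 99 percent recommendation is advisory, not enforced by the software.

```python
# Minimal sketch: Rekognition's match threshold is a caller-chosen
# parameter, not a vendor-enforced policy. The collection name and
# image path below are hypothetical.
import boto3

rekognition = boto3.client("rekognition")

def search_mugshots(photo_path: str, threshold: float):
    """Search a face collection for matches to the face in photo_path,
    keeping only matches at or above the given similarity threshold."""
    with open(photo_path, "rb") as f:
        response = rekognition.search_faces_by_image(
            CollectionId="mugshot-collection",  # hypothetical collection
            Image={"Bytes": f.read()},
            FaceMatchThreshold=threshold,       # defaults to 80 if omitted
            MaxFaces=5,
        )
    return response["FaceMatches"]

# Nothing stops a department from running at the lax default...
lax_matches = search_mugshots("lawmaker.jpg", threshold=80.0)
# ...instead of the 99 percent level Amazon recommends for policing.
strict_matches = search_mugshots("lawmaker.jpg", threshold=99.0)
```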

Skeptical of the trade-offs, municipalities across the country have been wary of facial recognition technology, with some cities proposing outright bans. California lawmakers, however, consider the state's new prohibition to be only a temporary measure until the technology becomes more reliable. The ban signed by Newsom is set to expire in 2023.

Assembly member Phil Ting, cosponsor of the bill, has said that the ban's sunset clause was designed to allow legislators to reconsider the prohibition if the technology improves sufficiently, that is, if it becomes more accurate at identifying minorities. Despite the apparent failings of current facial recognition tech, the National Institute of Standards and Technology reported that the best systems improved by a factor of 25 between 2010 and 2018. At this rate, "nearly perfect" facial recognition software could be available within a year or two.
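A quick back-of-envelope calculation shows why that timeline is plausible, assuming the improvement compounds steadily; the figures below are illustrative arithmetic based on the NIST number cited above, not a NIST projection.

```python
# Illustrative arithmetic only: if the best systems improved 25x over
# the eight years 2010-2018 and the pace held, the implied compound
# rate is 25**(1/8) per year, about 1.5x.
annual_factor = 25 ** (1 / 8)
print(f"implied annual improvement: {annual_factor:.2f}x")        # ~1.50x
print(f"further gain over two years: {annual_factor ** 2:.2f}x")  # ~2.24x
```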

Technological improvements, however, would not make the threat to personal liberty go away. On the contrary, the harms may become more serious as the government's surveillance capabilities become more effective. First, greater accuracy means more data and more precise targeting.

When a government can flawlessly track individuals, the state becomes efficient at singling out its opponents. Given the track record of measures such as the Black Codes, the Sedition Act, and Japanese internment, the very imperfections of surveillance can mitigate the unjust consequences of bad laws. A perfect surveillance state, however, would ensure a dystopian future for marginalized people.

Second, if surveillance technology becomes widely adopted by government agencies, law-abiding people may change their behavior out of a sense of self-consciousness or caution. This can have a serious chilling effect on freedom of expression and is a key reason why facial recognition poses a unique challenge to civil liberties.

Minority communities and dissident groups may be the first to feel the pressure. In 2018, the city of Berkeley, ground zero for the Free Speech Movement of the 1960s, used facial recognition technology to monitor participants in an anti-Marxist protest. Once a bastion of the counterculture, Berkeley has joined the growing trend of police departments employing facial recognition.

The contemporary Chinese surveillance state may offer a bleak glimpse of the West's future. An estimated 40,000 facial recognition cameras watch over the region of Xinjiang, a historic epicenter of secessionist movements and home to the Uyghur ethnic minority. The Uyghurs, a predominantly Muslim population, have been locked in a prolonged civil conflict with the majority Han Chinese. Armed with facial recognition technology, the Chinese state compiles detailed records on specific individuals, even logging their purchases, with the goal of instilling fear and deterring critics of the Chinese Communist Party.

With the Chinese government reportedly detaining upwards of a million people in Xinjiang in political “re-education camps,” the main worry is not that misidentification will implicate an innocent person in a crime. Rather, the larger problem is that the government can reliably identify individuals precisely because it considers their group a threat to state interests.

This is not to say that there is never a place for innovative commercial facial recognition technology. But with weak safeguards, including lax transparency requirements, deficient collection standards, and nonexistent data retention limits, the public can have no assurance that their civil liberties are protected once the government gets its hands on these tools.

Jonathan Hofer is a Research Associate at the Independent Institute. He has written extensively on both California and national public policy issues. He holds a BA in political science from the University of California, Berkeley. His research interests include privacy law, student privacy, local surveillance, and the impact of emerging technologies on civil liberties.