Amazon facial recognition wrongly tags lawmakers as police suspects, stoking racial bias concerns

Amazon’s Rekognition face surveillance technology incorrectly matched 28 members of Congress to police mugshots of arrested individuals, the ACLU says.


Amazon’s Rekognition face surveillance technology incorrectly matched 28 members of Congress to police mugshots of arrested individuals, according to ACLU research, which notes that nearly 40 percent of the lawmakers falsely identified by the system are people of color.

In a blog post, Jacob Snow, technology and civil liberties attorney for the ACLU of Northern California, said that the false matches were made against a mugshot database. “The false matches were disproportionately of people of color,” he said. These include six members of the Congressional Black Caucus, among them civil rights legend Rep. John Lewis, D-Ga.

The ACLU notes that people of color make up only 20 percent of Congress.

Snow also pointed to a recent letter from the Congressional Black Caucus to Amazon CEO Jeff Bezos, in which the members expressed concern about possible law enforcement use of Rekognition. “It is quite clear that communities of color are more heavily and aggressively policed than white communities,” the letter says. “This status quo results in an oversampling of data which, once used as inputs to an analytical framework leveraging artificial intelligence, could negatively impact outcomes in those oversampled communities.”


The ACLU echoes the objections of the Congressional Black Caucus. “If law enforcement is using Amazon Rekognition, it’s not hard to imagine a police officer getting a ‘match’ indicating that a person has a previous concealed-weapon arrest, biasing the officer before an encounter even begins,” Snow writes. “Or an individual getting a knock on the door from law enforcement, and being questioned or having their home searched, based on a false identification.”

The false matches were made against a mugshot database (ACLU)

However, Amazon Web Services questioned how the ACLU conducted its test. “We think that the results could probably be improved by following best practices around setting the confidence thresholds (this is the percentage likelihood that Rekognition found a match) used in the test,” an AWS spokesperson said in a statement emailed to Fox News. “While 80 percent confidence is an acceptable threshold for photos of hot dogs, chairs, animals, or other social media use cases, it wouldn’t be appropriate for identifying individuals with a reasonable level of certainty. When using facial recognition for law enforcement activities, we guide customers to set a threshold of at least 95 percent or higher.”

The ACLU said it used the default settings that Amazon sets for Rekognition.
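The dispute between Amazon and the ACLU comes down to a filtering step: a face-matching system returns candidate matches with confidence scores, and the caller decides which scores count as a “match.” The following is a minimal Python sketch of that idea using made-up names and scores; it is not Amazon’s API or the ACLU’s data, just an illustration of how the same candidates can yield different results at the 80 percent default versus the 95 percent threshold AWS recommends for law enforcement.

```python
def filter_matches(candidates, threshold):
    """Keep only candidate matches whose confidence (0-100) meets the threshold."""
    return [c for c in candidates if c["confidence"] >= threshold]

# Hypothetical candidate matches returned by a face-matching system.
candidates = [
    {"name": "Person A", "confidence": 82.0},
    {"name": "Person B", "confidence": 96.5},
    {"name": "Person C", "confidence": 78.3},
]

# At an 80 percent threshold, two candidates are reported as matches.
default_hits = filter_matches(candidates, 80.0)

# At a 95 percent threshold, only the strongest candidate remains.
strict_hits = filter_matches(candidates, 95.0)

print([c["name"] for c in default_hits])
print([c["name"] for c in strict_hits])
```

Under these assumed scores, the lower default threshold admits a borderline candidate that the stricter setting rejects, which is the crux of Amazon’s objection to the ACLU’s methodology.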

Amazon Web Services told Fox News that Rekognition is designed to efficiently trawl through large volumes of image data. “It is worth noting that in real-world scenarios, Amazon Rekognition is almost exclusively used to help narrow the field and allow humans to expeditiously review and consider options using their judgment (and not to make fully autonomous decisions), where it can help find lost children, fight human trafficking, or prevent crimes,” the spokesperson said.


“We have seen customers use the image and video analysis capabilities of Amazon Rekognition in ways that materially benefit both society (e.g., preventing human trafficking, inhibiting child exploitation, reuniting missing children with their families, and building educational apps for children) and organizations (enhancing security through multi-factor authentication, finding images more easily, or preventing package theft),” the spokesperson said. “We remain excited about how image and video analysis can be a driver for good in the world, including in the public sector and law enforcement.”

This is not the first time that facial recognition has led to controversy. Microsoft, for example, recently urged the government to regulate the technology, citing the potential for abuse.

In a blog post, Microsoft President Brad Smith warned that “without a thoughtful approach, public authorities may rely on flawed or biased technological approaches to decide who to track, investigate or even arrest for a crime.”

Earlier this year, researchers from MIT and Stanford University reported gender and skin-type bias in commercial artificial intelligence systems.

Fearful of threats, however, some schools are turning to facial recognition technology for added security.

Fox News’ Christopher Carbone and the Associated Press contributed to this article.

Follow James Rogers on Twitter @jamesjrogers
