
U.S. government study finds racial bias in facial recognition tools

(Reuters) – Many facial recognition systems misidentify people of color more often than white people, according to a U.S. government study released on Thursday, a finding likely to deepen doubts about wide use of the technology by law enforcement agencies.

FILE PHOTO: People walk past a poster simulating facial recognition software at the Security China 2018 exhibition on public security technology in Beijing, China, October 24, 2018. REUTERS/Thomas Peter

The study by the National Institute of Standards and Technology (NIST) found that, when conducting a particular type of database search known as "one-to-one" matching, many facial recognition algorithms falsely identified African-American and Asian faces 10 to 100 times more often than Caucasian faces.

The study also found that African-American women were more likely to be misidentified in "one-to-many" matching, which can be used to identify a person of interest in a criminal investigation.

While some companies have played down earlier findings of bias in technology that can guess a person's gender, known as "facial analysis," the NIST study is evidence that face matching struggles across demographic groups, too.

Joy Buolamwini, founder of the Algorithmic Justice League, called the report "a detailed rebuttal" of those who say artificial intelligence (AI) bias is no longer an issue. The study comes amid growing discontent with the technology in the United States, with critics warning it can lead to unjust harassment or arrests.

The NIST report tested 189 algorithms from 99 developers, excluding companies such as Amazon.com Inc (AMZN.O) that did not submit one for review. What it tested differs from what the companies sell, in that NIST studied algorithms detached from the cloud and from proprietary training data.

China's SenseTime, an AI start-up valued at more than $7.5 billion, had "high false match rates for all comparisons" in one of the NIST tests, the report said.

SenseTime's algorithm produced a false positive more than 10% of the time when searching photos of Somali men. Deployed at an airport, that would mean a Somali man could pass a customs check one time in 10 using the passport of another Somali man.

SenseTime did not immediately return a request for comment.

Yitu, another AI start-up from China, was more accurate and showed little racial skew.

Microsoft Corp (MSFT.O) had almost 10 times more false positives for women of color than for men of color in some instances of a one-to-many test. Its algorithm showed little discrepancy in a one-to-many test with pictures of black and white men.

Microsoft said it was reviewing the report and had no comment on Thursday night.

U.S. Congressman Bennie Thompson, chairman of the House Committee on Homeland Security, said the findings of bias were worse than feared, at a time when customs officials are adding facial recognition technology to make travel easier.

"The government needs to reassess its plans for facial recognition technology in light of these startling results," he said.

Reporting by John Wolfe and Jeffrey Dastin; Additional reporting by Yingzhi Yang in Beijing; Editing by Andy Sullivan and Leslie Adler
