Face recognition researcher fights Amazon over biased AI

AP | Cambridge (US)

Facial recognition technology was already seeping into everyday life, from your photos on Facebook to police scans of mugshots, when Joy Buolamwini noticed a serious glitch: some of the software couldn't detect dark-skinned faces like hers.

Her tests on software created by brand-name tech firms such as Amazon uncovered much higher error rates in classifying the gender of darker-skinned women than for lighter-skinned men.

Along the way, Buolamwini has spurred Microsoft and IBM to improve their systems and irked Amazon, which publicly attacked her research methods.

On Wednesday, a group of AI scholars, including a winner of computer science's top prize, launched a spirited defense of her work and called on Amazon to stop selling its facial recognition software to police.

Her work has also caught the attention of political leaders in statehouses and Congress and led some to seek limits on the use of computer vision tools to analyze human faces.

"There needs to be a choice," said Buolamwini, a graduate student and at "Right now, what's happening is these technologies are being deployed widely without oversight, oftentimes covertly, so that by the time we wake up, it's almost too late."

Buolamwini is hardly alone in expressing caution about the fast-moving adoption of facial recognition by police, government agencies and businesses, from stores to apartment complexes.

Many other researchers have shown how AI systems, which look for patterns in huge troves of data, will mimic the institutional biases embedded in the data they are learning from.

For instance, if AI systems are developed using images of mostly white men, the systems will work best in recognising white men.
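A toy sketch of that effect is below, using entirely synthetic data (nothing here is drawn from the article or any real face-recognition system): a classifier trained on a sample dominated by one group ends up far more accurate on that group than on the under-represented one.

```python
# Toy demonstration with synthetic data: a model trained mostly on one
# group learns that group's patterns and performs markedly worse on the
# under-represented group.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)

def make_group(n, informative_feature):
    """Synthetic stand-in for face data: the true label depends on a
    different feature for each group, mimicking distribution shift."""
    X = rng.normal(size=(n, 2))
    y = (X[:, informative_feature] > 0).astype(int)
    return X, y

# Training set: 95% group A, 5% group B (the imbalance).
Xa, ya = make_group(950, informative_feature=0)  # group A
Xb, yb = make_group(50, informative_feature=1)   # group B
model = LogisticRegression().fit(np.vstack([Xa, Xb]),
                                 np.concatenate([ya, yb]))

# Evaluate on fresh, balanced test sets for each group.
for name, feat in [("group A", 0), ("group B", 1)]:
    Xt, yt = make_group(1000, informative_feature=feat)
    print(name, "accuracy:", round(model.score(Xt, yt), 3))
# Typical output: group A near 1.0, group B close to chance -- the model
# "works best" on the group that dominated its training data.
```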

Those disparities can sometimes be a matter of life or death: one recent study of the computer vision systems that enable self-driving cars to "see" the road shows they have a harder time detecting pedestrians with darker skin tones.

What's struck a chord about Buolamwini's work is her method of testing the systems created by well-known companies. She applies such systems to a skin-tone scale used by dermatologists, then names and shames those that show racial and gender bias.
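In practice, that method amounts to a disaggregated audit: scoring a gender classifier separately for each skin-type and gender subgroup instead of reporting one overall accuracy. Below is a minimal sketch of that computation; the records are hypothetical placeholders, not figures from the study, and the lighter/darker split at Fitzpatrick type III follows the binning the study reported.

```python
# Sketch of a disaggregated audit: compute a gender classifier's error
# rate per intersectional subgroup (skin-type bin x gender) rather than
# one aggregate number. The records below are hypothetical placeholders.
from collections import defaultdict

# Each record: (true_gender, predicted_gender, fitzpatrick_type).
# Fitzpatrick types I-III are binned as "lighter", IV-VI as "darker".
records = [
    ("female", "male",   5),
    ("female", "female", 2),
    ("male",   "male",   6),
    ("male",   "male",   1),
    ("female", "male",   6),
]

errors = defaultdict(lambda: [0, 0])  # subgroup -> [mistakes, total]
for true_g, pred_g, fitz in records:
    tone = "lighter" if fitz <= 3 else "darker"
    errors[(tone, true_g)][0] += int(pred_g != true_g)
    errors[(tone, true_g)][1] += 1

for (tone, gender), (wrong, total) in sorted(errors.items()):
    print(f"{tone} {gender}: error rate {wrong / total:.0%} (n={total})")
```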

Buolamwini, who's also founded a coalition of scholars, activists and others called the Algorithmic Justice League, has blended her scholarly investigations with activism.

"It adds to a growing body of evidence that facial recognition affects different groups differently," said Shankar Narayan, of the state, where the group has sought restrictions on the technology.

"Joy's work has been part of building that awareness." Amazon, whose CEO, Jeff Bezos, she emailed directly last summer, has responded by aggressively taking aim at her research methods.

A Buolamwini-led study published just over a year ago found disparities in how facial-analysis systems built by IBM, Microsoft and the Chinese company Megvii classified people by gender.

Darker-skinned women were the most misclassified group, with error rates of up to 34.7 per cent. By contrast, the maximum error rate for lighter-skinned males was less than 1 per cent.

The study called for "urgent attention" to address the bias.

"I responded pretty much right away," said Ruchir Puri, of Research, describing an email he received from Buolamwini last year.

Since then, he said, "it's been a very fruitful relationship" that informed IBM's unveiling this year of a new 1 million-image database for better analysing the diversity of human faces. Previous systems have been overly reliant on what Buolamwini calls "pale male" image repositories.

Microsoft, which had the lowest error rates, declined comment.

Messages left with Megvii, which owns Face Plus Plus, weren't immediately returned.

Months after her first study, when Buolamwini worked with University of Toronto researcher Inioluwa Deborah Raji on a follow-up test, all three companies showed major improvements.

But this time they also added Amazon, which has sold the system it calls Rekognition to law enforcement agencies. The results, published in late January, showed Amazon badly misidentifying darker-hued women.

"We were surprised to see that Amazon was where their competitors were a year ago," Buolamwini said.

Amazon dismissed what it called Buolamwini's "erroneous claims" and said the study confused facial analysis with facial recognition, improperly measuring the former with techniques for evaluating the latter.
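The distinction Amazon is drawing can be made concrete: facial analysis estimates attributes, such as gender, from a single face, while facial recognition matches a face against another image or a gallery. The sketch below shows the two as separate operations in Rekognition's public API via the boto3 SDK; it assumes configured AWS credentials, and the image file names are placeholders.

```python
# Facial ANALYSIS vs facial RECOGNITION as separate Rekognition calls.
# Assumes boto3 is installed and AWS credentials are configured; the
# image file names below are placeholders.
import boto3

client = boto3.client("rekognition")

with open("face.jpg", "rb") as f:
    face_bytes = f.read()

# Facial analysis: DetectFaces estimates attributes for each face found.
analysis = client.detect_faces(Image={"Bytes": face_bytes},
                               Attributes=["ALL"])
for face in analysis["FaceDetails"]:
    print("Predicted gender:", face["Gender"]["Value"],
          "confidence:", face["Gender"]["Confidence"])

# Facial recognition: CompareFaces matches a source face against faces
# in a target image and reports a similarity score per match.
with open("crowd.jpg", "rb") as f:
    crowd_bytes = f.read()
matches = client.compare_faces(SourceImage={"Bytes": face_bytes},
                               TargetImage={"Bytes": crowd_bytes},
                               SimilarityThreshold=80)
for m in matches["FaceMatches"]:
    print("Match similarity:", m["Similarity"])
```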

"The answer to anxieties over new technology is not to run 'tests' inconsistent with how the service is designed to be used, and to amplify the test's false and misleading conclusions through the media," Matt Wood, for Amazon's cloud-division, wrote in a January blog post.

Amazon declined requests for an interview.

"I didn't know their reaction would be quite so hostile," Buolamwini said recently in an interview at her MIT lab.

Coming to her defense Wednesday was a coalition of researchers, including AI pioneer Yoshua Bengio, recent winner of the Turing Award, considered the tech field's version of the Nobel Prize.


First Published: Thu, April 04 2019. 02:55 IST