Friday, April 14, 2017

AI picks up racial and gender biases when learning from what humans write | The Verge

"There is no objectivity" according to Angela Chen, Verge's science reporter focusing on medicine, AI, and energy.


Artificial intelligence picks up racial and gender biases when learning language from text, researchers say. Without any supervision, a machine learning algorithm learns to associate female names more with family words than career words, and black names more with unpleasant words than white names.

For a study published today in Science, researchers tested the bias of a common AI model, and then matched the results against a well-known psychological test that measures bias in humans. The team replicated in the algorithm every psychological bias they tested, according to study co-author Aylin Caliskan, a postdoctoral researcher at Princeton University. Because machine learning algorithms are so common, influencing everything from translation to scanning names on resumes, this research shows that the biases are pervasive, too.

“Language is a bridge to ideas, and a lot of algorithms are built on language in the real world,” says Megan Garcia, the director of New America’s California branch who has written about this so-called algorithmic bias. “So unless an algorithm is making a decision based only on numbers, this finding is going to be important.”

An algorithm is a set of instructions that humans write to help computers learn. Think of it like a recipe, says Zachary Lipton, an AI researcher at UC San Diego who was not involved in the study. Because algorithms use existing materials — like books or text on the internet — it’s obvious that AI can pick up biases if the materials themselves are biased. (For example, Google Photos tagged black users as gorillas.) We’ve known for a while, for instance, that language algorithms learn to associate the word “man” with “professor” and the word “woman” with “assistant professor.” But this paper is interesting because it incorporates previous work done in psychology on human biases, Lipton says.
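Those learned associations can be probed directly in a trained word-embedding model by comparing how close pairs of words sit in the vector space. The sketch below is only illustrative: it assumes the gensim library and its pretrained "glove-wiki-gigaword-50" vectors, and the word pairs echo the professor example above rather than reproducing the study's tests.

```python
# A minimal sketch of probing word associations in a pretrained embedding model.
# Assumes gensim is installed; the vector set and word pairs are illustrative only.
import gensim.downloader as api

# Load word vectors learned from a large text corpus.
vectors = api.load("glove-wiki-gigaword-50")

# Cosine similarity between word pairs: higher means the corpus places the
# words in more similar contexts, which is where learned associations show up.
for a, b in [("man", "professor"), ("woman", "professor"),
             ("man", "assistant"), ("woman", "assistant")]:
    print(f"{a:>6} ~ {b:<10} {vectors.similarity(a, b):.3f}")
```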

For today’s study, Caliskan’s team created a test that resembles the Implicit Association Test, which is commonly used in psychology to measure how biased people are (though there has been some controversy over its accuracy). In the IAT, subjects are presented with two images — say, a white man and a black man — and words like “pleasant” or “unpleasant.” The IAT calculates how quickly you match up “white man” and “pleasant” versus “black man” and “pleasant,” and vice versa. The idea is that the longer it takes you to match up two concepts, the more trouble you have associating them. 
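In the published study, the algorithmic counterpart of the IAT measures association with cosine similarity between word vectors instead of response times: a word's bias is how much closer it sits to one attribute set (say, pleasant words) than to the other. The sketch below only illustrates that calculation; the vectors are random placeholders standing in for a real model, and the word lists are invented.

```python
# Rough sketch of an embedding-based association test.
# The tiny `embeddings` dict uses random placeholder vectors; in practice they
# would come from a model trained on a large text corpus.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["flower", "insect", "pleasant", "unpleasant", "lovely", "awful"]
embeddings = {w: rng.normal(size=50) for w in vocab}

def cosine(u, v):
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(word, attrs_a, attrs_b):
    # Mean similarity to the A attributes minus mean similarity to the B attributes.
    sim_a = np.mean([cosine(embeddings[word], embeddings[a]) for a in attrs_a])
    sim_b = np.mean([cosine(embeddings[word], embeddings[b]) for b in attrs_b])
    return sim_a - sim_b

pleasant, unpleasant = ["pleasant", "lovely"], ["unpleasant", "awful"]

# Differential association of the two target words with the attribute sets:
# a positive value means "flower" leans more toward the pleasant words than "insect" does.
score = (association("flower", pleasant, unpleasant)
         - association("insect", pleasant, unpleasant))
print(f"differential association: {score:+.3f}")
```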

Source: The Verge