Prejudiced AI? Machine Learning Can Pick up Society’s Biases

The program picked up association biases nearly identical to those seen in human subjects.


We think of computers as emotionless automatons and artificial intelligence as stoic, Zen-like programs in the mold of Mr. Spock: devoid of prejudice and unable to be swayed by emotion. A team of researchers at Princeton University’s engineering school has shown otherwise in a new study. They say that AI picks up our innate biases about sex and race, even when we ourselves may be unaware of them. The results were published in the journal Science.


This may not be too surprising after a Microsoft snafu in March of last year, when a chatbot named Tay had to be taken off Twitter. After interacting with certain users, she began spouting racist remarks. That isn’t to say AI is inherently flawed. It simply learns everything from us and, as our echo, picks up the prejudices we’ve become deaf to. We’ll have to design such programs carefully to keep those biases from slipping past.

Arvind Narayanan, a co-author of the study, is an assistant professor of computer science at Princeton affiliated with the Center for Information Technology Policy (CITP). The study’s lead author, Aylin Caliskan, is a postdoctoral research associate working with him at Princeton. They collaborated with Joanna Bryson of the University of Bath, also a co-author.

The chatbot Tay had to be taken off Twitter for “talking like a Nazi.” Getty Images.

Examining a program that had been given access to language posted online, they found that inherent cultural biases could be passed along to it through patterns of wording and usage. "Questions about fairness and bias in machine learning are tremendously important for our society," Narayanan said. "We have a situation where these artificial-intelligence systems may be perpetuating historical patterns of bias that we might find socially unacceptable and which we might be trying to move away from."

To scan for biases, Caliskan and Bryson drew on the Implicit Association Test, developed through a series of social psychology studies at the University of Washington in the late 1990s. The test works like this: a human subject is shown a pair of words on a computer screen and must respond to them as quickly as possible, with answers measured in milliseconds. Subjects respond faster when the two concepts feel related and slower when they feel dissimilar.

Participants would be given flowers such as “daisy” or “rose” and insects such as “moth” or “ant,” and would have to pair them with pleasant words such as “love” or “caress,” or with negative words such as “ugly” or “filth.” Usually, flowers were paired with the positive words and insects with the negative ones.

AI is more of a reflection of us than first thought. Pixabay.

For this experiment, the researchers used a program called GloVe, which stands for Global Vectors for Word Representation and was developed at Stanford. It’s very much like the kind of program that would sit at the heart of any machine-learning system that handles language, the researchers say. GloVe represents words statistically by how often they co-occur within a roughly 10-word window of text: words that appear nearer one another end up with stronger associations, while those that rarely appear together end up with weaker ones. The team then applied a word-embedding analogue of the Implicit Association Test to GloVe’s representations.
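To get a feel for the co-occurrence statistics a model like GloVe builds on, here is a minimal sketch in Python. It is not the actual GloVe implementation; the toy sentence and window size are invented for illustration.

```python
from collections import Counter

def cooccurrence_counts(tokens, window=10):
    """Count how often pairs of words appear within `window` tokens of each other."""
    counts = Counter()
    for i, word in enumerate(tokens):
        for neighbor in tokens[i + 1 : i + 1 + window]:
            counts[(word, neighbor)] += 1
            counts[(neighbor, word)] += 1
    return counts

# Toy corpus; a real model like GloVe is trained on hundreds of billions of words.
corpus = "the nurse said she would help the doctor before he left".split()
counts = cooccurrence_counts(corpus, window=10)
print(counts[("nurse", "she")], counts[("doctor", "he")])
```

Scaled up to web-sized text, counts like these are what give some word pairs systematically stronger associations than others.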

In a previous project, researchers at Stanford trained GloVe on 840 billion words gathered from the internet. Narayanan and colleagues examined sets of words and their associations within that model. They looked at words such as “scientist,” “programmer,” and “engineer,” and “teacher,” “nurse,” and “librarian,” and recorded the gender each was associated with.

Innocuous relationships, such as those between the insects and flowers, were found. But more worrisome connections, surrounding race and gender, were also discovered. The algorithm picked up association biases nearly identical to those seen in human subjects in previous studies.

For instance, male names corresponded more strongly with career words such as “salary” and “professional,” while female names corresponded more strongly with family-related terms like “wedding” and “parents.” When the researchers turned to race, they found that African-American names were associated with far more negative attributes than Caucasian ones.
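The bias measure behind findings like these compares cosine similarities between sets of word vectors: how much closer one group of target words sits to pleasant words than to unpleasant ones, relative to another group. Below is a minimal sketch of that idea; the vectors are random placeholders, whereas the study used real GloVe embeddings trained on web text.

```python
import numpy as np

def cosine(u, v):
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

def association(word_vec, pleasant, unpleasant):
    """Mean similarity to pleasant words minus mean similarity to unpleasant words."""
    return (np.mean([cosine(word_vec, p) for p in pleasant])
            - np.mean([cosine(word_vec, q) for q in unpleasant]))

def bias_score(targets_a, targets_b, pleasant, unpleasant):
    """Positive when set A leans more 'pleasant' than set B, in the spirit of the IAT."""
    return (sum(association(w, pleasant, unpleasant) for w in targets_a)
            - sum(association(w, pleasant, unpleasant) for w in targets_b))

# Placeholder 3-dimensional vectors; real embeddings have hundreds of dimensions.
rng = np.random.default_rng(0)
flowers = [rng.normal(size=3) for _ in range(5)]
insects = [rng.normal(size=3) for _ in range(5)]
pleasant = [rng.normal(size=3) for _ in range(5)]
unpleasant = [rng.normal(size=3) for _ in range(5)]
print(bias_score(flowers, insects, pleasant, unpleasant))
```

With real embeddings, the same score run on male versus female names, or African-American versus Caucasian names, is what surfaces the biases described above.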

AI will have to be programmed to embrace equality. Getty Images.

AI programs are being used more and more to help humans with tasks like language translation, image categorization, and text search. Last fall, Google Translate made headlines because its accuracy was approaching that of human translators. As AI becomes more embedded in the human experience, so will these biases, if they aren’t addressed.

Consider a translation from Turkish to English. Turkish has a single, gender-neutral third-person pronoun, “o.” Yet translate “o bir doktor” and “o bir hemşire,” and the output comes back as “he is a doctor” and “she is a nurse,” with the missing gender filled in along stereotyped lines.
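One way a statistical translator can end up making that choice, sketched very roughly below, is by falling back on whichever pronoun co-occurred with the occupation word more often in its training text. This is a hypothetical illustration of the mechanism, not Google Translate’s actual code, and the counts are invented.

```python
# Hypothetical co-occurrence counts a model might have absorbed from its training text.
counts = {
    ("doctor", "he"): 9200, ("doctor", "she"): 3100,
    ("nurse", "he"): 1800, ("nurse", "she"): 8700,
}

def guess_pronoun(occupation):
    """Pick whichever pronoun co-occurred with the occupation more often."""
    return "he" if counts[(occupation, "he")] > counts[(occupation, "she")] else "she"

for occupation, turkish in (("doctor", "doktor"), ("nurse", "hemşire")):
    print(f"o bir {turkish} -> {guess_pronoun(occupation)} is a {occupation}")
```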

So what can be done to identify and clear such stereotypes from AI programs? Machine-learning systems will need explicit instructions that let them recognize and counteract cultural stereotypes. The researchers liken this to the way parents and teachers help children recognize unfair practices and instill in them a sense of equality.
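One concrete technique in that spirit, proposed by other researchers (Bolukbasi and colleagues) rather than in this study, is to project a learned bias direction out of word vectors while leaving the rest of their meaning intact. The sketch below uses made-up three-dimensional vectors purely for illustration.

```python
import numpy as np

def neutralize(word_vec, bias_direction):
    """Remove the component of a word vector that lies along a learned bias direction."""
    b = bias_direction / np.linalg.norm(bias_direction)
    return word_vec - np.dot(word_vec, b) * b

# The bias direction is often estimated from pairs like ("he", "she"); these vectors are placeholders.
he, she = np.array([0.8, 0.1, 0.3]), np.array([0.2, 0.7, 0.3])
bias_direction = he - she
programmer = np.array([0.6, 0.2, 0.9])
print(neutralize(programmer, bias_direction))
```

Which associations should be neutralized, and which should be kept, is exactly the kind of explicit judgment Narayanan argues has to be built into these systems.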

Narayanan said:

The biases that we studied in the paper are easy to overlook when designers are creating systems. The biases and stereotypes in our society reflected in our language are complex and longstanding. Rather than trying to sanitize or eliminate them, we should treat biases as part of the language and establish an explicit way in machine learning of determining what we consider acceptable and unacceptable.

