Prejudice AI? Machine Learning Can Pick up Society’s Biases

The program picked up association biases nearly identical to those seen in human subjects.

 

Circuit board silhouettes of people. Pixabay.

We think of computers as emotionless automatons and artificial intelligence as stoic, Mr. Spock-like programs, devoid of prejudice and unswayed by emotion. A team of researchers at Princeton University's engineering school has shown otherwise in a new study: AI picks up our innate biases about sex and race, even when we ourselves may be unaware of them. The results were published in the journal Science.


This may not be too surprising after a Microsoft snafu in March of last year, when a chatbot named Tay had to be taken off Twitter. After interacting with certain users, she began spouting racist remarks. This isn't to say that AI is inherently flawed; it simply learns everything from us and, as our echo, picks up the prejudices we've become deaf to. In that sense, such programs will have to be designed carefully to keep biases from slipping past.

Arvind Narayanan, a co-author of the study, is an assistant professor of computer science and a member of the Center for Information Technology Policy (CITP) at Princeton. The study's lead author, Aylin Caliskan, is a postdoctoral research associate at Princeton working with him. They collaborated with Joanna Bryson of the University of Bath, also a co-author.

The chatbot Tay had to be taken off Twitter for "talking like a Nazi." Getty Images.

While examining a program that had been given access to language from across the internet, they found that inherent cultural biases, embedded in patterns of wording and usage, could be passed along to the program. "Questions about fairness and bias in machine learning are tremendously important for our society," Narayanan said. "We have a situation where these artificial-intelligence systems may be perpetuating historical patterns of bias that we might find socially unacceptable and which we might be trying to move away from."

To scan for biases, Caliskan and Bryson used an online version of the Implicit Association Test, developed through a series of social psychology studies at the University of Washington in the late 1990s. The test works like this: a human subject is shown pairs of words on a computer screen and must respond to them as quickly as possible, with response times measured in milliseconds. Subjects respond faster when pairing concepts they consider similar and slower when pairing concepts they consider dissimilar.

Participants would be given prompts such as flowers ("daisy" or "rose") and insects ("moth" or "ant"), which had to be matched with pleasant words such as "love" or "caress," or unpleasant words such as "ugly" or "filth." Usually, flowers were paired with the pleasant words and insects with the unpleasant ones.

AI is more of a reflection of us than first thought. Pixabay.

For this experiment, the researchers turned to GloVe (Global Vectors for Word Representation), an open-source program developed at Stanford, and used it to run what amounts to a machine-learning version of the Implicit Association Test. GloVe is very much like the kind of program that sits at the heart of any machine-learning system, the researchers say: it statistically represents how often words co-occur within a 10-word window of text. Words that appear nearer one another have a stronger association, while those farther apart have a weaker one.
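To make the co-occurrence idea concrete, here is a minimal sketch, not taken from the study's code, of how co-occurrence counts within a 10-word window might be tallied; the tiny corpus and function name are illustrative assumptions, and GloVe itself additionally weights counts by distance and factorizes them into dense word vectors.

```python
from collections import defaultdict

def cooccurrence_counts(tokens, window=10):
    """Count how often each ordered pair of words appears within `window`
    tokens of each other. GloVe additionally weights counts by distance and
    factorizes them into dense vectors; this sketch only shows the counting."""
    counts = defaultdict(float)
    for i, word in enumerate(tokens):
        lo, hi = max(0, i - window), min(len(tokens), i + window + 1)
        for j in range(lo, hi):
            if j != i:
                counts[(word, tokens[j])] += 1.0
    return counts

# Toy corpus: words that keep appearing near each other end up strongly associated.
corpus = "the nurse said she would call the doctor and he would answer".split()
counts = cooccurrence_counts(corpus, window=10)
print(counts[("nurse", "she")], counts[("doctor", "he")])
```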

In a previous effort, researchers at Stanford exposed GloVe to roughly 840 billion words drawn from the internet. Narayanan and colleagues then examined word sets and their associations, looking at words such as "scientist," "programmer," and "engineer" versus "teacher," "nurse," and "librarian," and recording the gender each was associated with.

Innocuous relationships, such as those between the insects and flowers, were found. But more worrisome connections surrounding race and gender were also discovered. The algorithm picked up association biases nearly identical to those seen in human subjects in previous studies.
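The association measurement itself can be approximated with cosine similarity between word vectors. The sketch below is a simplified, hypothetical illustration of such a score, mean similarity to pleasant attribute words minus mean similarity to unpleasant ones; the study used a more careful statistic (the Word-Embedding Association Test), and the tiny hand-made vectors here merely stand in for real GloVe embeddings.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two word vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(word_vec, pleasant, unpleasant):
    """Mean similarity to pleasant attribute words minus mean similarity to
    unpleasant ones -- a simplified stand-in for the bias scores reported in
    embedding studies."""
    return (np.mean([cosine(word_vec, p) for p in pleasant])
            - np.mean([cosine(word_vec, u) for u in unpleasant]))

# Hypothetical 3-d vectors standing in for real GloVe embeddings.
vectors = {
    "flower": np.array([0.9, 0.1, 0.0]),
    "insect": np.array([0.1, 0.9, 0.0]),
    "love":   np.array([0.8, 0.2, 0.1]),
    "caress": np.array([0.7, 0.1, 0.2]),
    "ugly":   np.array([0.1, 0.8, 0.1]),
    "filth":  np.array([0.2, 0.9, 0.0]),
}

pleasant = [vectors["love"], vectors["caress"]]
unpleasant = [vectors["ugly"], vectors["filth"]]
print("flower:", round(association(vectors["flower"], pleasant, unpleasant), 3))
print("insect:", round(association(vectors["insect"], pleasant, unpleasant), 3))
```

In this toy setup "flower" scores positive (leaning pleasant) and "insect" scores negative, mirroring the pattern the researchers found in real embeddings.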

For instance, male names corresponded more strongly with career-related words such as "salary" and "professional," while female names were more closely associated with family-related terms like "wedding" and "parents." When the researchers turned to race, they found that African-American names were associated with far more negative attributes than Caucasian ones.

AI will have to be programmed to embrace equality. Getty Images.

AI programs are being used more and more to help humans with tasks like language translation, image categorization, and text search. Last fall, Google Translate made headlines when its skill level came very close to that of human translators. As AI becomes more embedded in the human experience, so will these biases, unless they are addressed.

Consider a translation from Turkish to English. Turkish has a single, gender-neutral third-person pronoun, "o." Run "o bir doktor" and "o bir hemşire" through a translator and they typically come out as "he is a doctor" and "she is a nurse." So what can be done to identify and clear such stereotypes from AI programs?
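As a rough, hypothetical illustration of how that default could surface, imagine a translator that, lacking any other signal, picks the English pronoun the occupation word is more strongly associated with in its training data. The scores below are invented for demonstration; a real system makes this choice implicitly inside a neural model.

```python
# Invented association scores standing in for statistics learned from biased text;
# a real translator makes this choice implicitly inside a neural network.
gender_association = {
    "doktor": {"he": 0.71, "she": 0.48},    # "doctor"
    "hemşire": {"he": 0.40, "she": 0.77},   # "nurse"
}

def choose_pronoun(turkish_noun):
    """Pick whichever English pronoun has the higher (made-up) association score."""
    scores = gender_association[turkish_noun]
    return max(scores, key=scores.get)

for noun in ("doktor", "hemşire"):
    print(f'"o bir {noun}" -> "{choose_pronoun(noun)} is a ..."')
```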

Machine-learning systems will need explicit coding that instructs them to recognize and counteract cultural stereotypes. Researchers liken this to how parents and teachers help children recognize unfair practices and instill in them a sense of equality.
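As one minimal sketch of what such explicit coding might look like (an assumption for illustration, not the researchers' method), the snippet below flags occupation words whose vectors lean strongly along a he-she direction, so a designer can decide whether that lean is acceptable.

```python
import numpy as np

def gender_lean(word_vec, he_vec, she_vec):
    """Project a word vector onto the normalized he-she direction.
    Positive values lean 'male', negative lean 'female' in this toy setup."""
    direction = he_vec - she_vec
    direction = direction / np.linalg.norm(direction)
    return float(np.dot(word_vec, direction))

# Toy 3-d vectors standing in for real embeddings.
he = np.array([0.9, 0.1, 0.3])
she = np.array([0.1, 0.9, 0.3])
occupations = {
    "engineer": np.array([0.8, 0.2, 0.4]),
    "nurse":    np.array([0.2, 0.8, 0.4]),
}

THRESHOLD = 0.3  # arbitrary flagging threshold for this illustration
for word, vec in occupations.items():
    lean = gender_lean(vec, he, she)
    flag = "flag for review" if abs(lean) > THRESHOLD else "ok"
    print(f"{word}: lean={lean:+.2f} ({flag})")
```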

Narayanan said:

The biases that we studied in the paper are easy to overlook when designers are creating systems. The biases and stereotypes in our society reflected in our language are complex and longstanding. Rather than trying to sanitize or eliminate them, we should treat biases as part of the language and establish an explicit way in machine learning of determining what we consider acceptable and unacceptable.
