Prejudiced AI? Machine Learning Can Pick Up Society's Biases

The program picked up association biases nearly identical to those seen in human subjects.

 

Circuit board silhouettes of people. Image: Pixabay.

We think of computers as emotionless automatons and of artificial intelligence as stoic, zen-like programs: Spock-like, devoid of prejudice, unable to be swayed by emotion. A team of researchers at Princeton University's engineering school has shown otherwise in a new study. They say that AI picks up our innate biases about sex and race, even when we ourselves may be unaware of them. The results of the study were published in the journal Science.


This may not be too surprising after a Microsoft snafu in March of last year, when a chatbot named Tay had to be taken off Twitter after interacting with certain users and spouting racist remarks. This isn't to say that AI is inherently flawed; it simply learns everything from us and, as our echo, picks up the prejudices we've grown deaf to. We'll have to design such programs carefully to keep those biases from slipping past.

Arvind Narayanan, a co-author of the study, is an assistant professor of computer science at Princeton and a member of its Center for Information Technology Policy (CITP). The study's lead author, Aylin Caliskan, is a postdoctoral research associate at Princeton. They worked with co-author Joanna Bryson of the University of Bath.

The chatbot Tay had to be taken off Twitter after it began "talking like a Nazi." Getty Images.

The researchers examined a program that had been given access to a large body of online text. They found that, through patterns of wording and usage, the cultural biases inherent in the language could be passed along to the program. "Questions about fairness and bias in machine learning are tremendously important for our society," Narayanan said. "We have a situation where these artificial-intelligence systems may be perpetuating historical patterns of bias that we might find socially unacceptable and which we might be trying to move away from."

To scan for biases, Caliskan and Bryson used an online version of the Implicit Association Test, developed through several social psychology studies at the University of Washington in the late 1990s. The test works like this: a human subject is shown pairs of words on a computer screen and must respond to them as quickly as possible, with answers measured in milliseconds. Response times are shorter when the subject perceives two concepts as similar and longer when the concepts feel dissimilar.

Participants would be given flower names such as "daisy" or "rose" and insect names such as "moth" or "ant." These had to be matched with pleasant words such as "love" or "caress," or unpleasant words such as "ugly" or "filth." Typically, flowers were paired more quickly with the pleasant words and insects with the unpleasant ones.
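To make that concrete, here is a minimal Python sketch of how an IAT-style score might be computed from response times. The pairings follow the description above, but every millisecond value is invented for illustration.

```python
from statistics import mean, stdev

# Hypothetical response times (milliseconds) for the two pairing conditions.
# "Congruent" = flower+pleasant / insect+unpleasant pairings;
# "incongruent" = flower+unpleasant / insect+pleasant pairings.
congruent_ms = [612, 588, 634, 601, 595, 620]
incongruent_ms = [745, 702, 768, 730, 751, 722]

# An IAT-style effect size: the difference of mean latencies divided by
# the standard deviation of all responses pooled together.
pooled_sd = stdev(congruent_ms + incongruent_ms)
d_score = (mean(incongruent_ms) - mean(congruent_ms)) / pooled_sd

# A positive score means the congruent pairings were answered faster,
# i.e., the "expected" associations came more easily to the subject.
print(f"IAT-style D score: {d_score:.2f}")
```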

AI is more a reflection of us than first thought. Pixabay.

For this experiment, the researchers used GloVe, an open-source program developed at Stanford that can function like a machine-learning version of the Implicit Association Test. GloVe stands for Global Vectors for Word Representation, and the researchers say it's very much like any program that would sit at the heart of machine learning. The program represents words statistically by how often they co-occur within a 10-word window of text: words that frequently appear near one another acquire a stronger association, while those that rarely do acquire a weaker one.
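As a rough illustration of what "association" means here, the sketch below loads pretrained GloVe vectors and compares words by cosine similarity, the standard closeness measure for word vectors. It assumes the publicly released glove.6B.50d.txt file from Stanford is on disk; the example words are my own.

```python
import numpy as np

def load_glove(path):
    """Load pretrained GloVe vectors from the standard text format:
    one word per line, followed by its vector components."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.array(parts[1:], dtype=np.float32)
    return vectors

def cosine(u, v):
    """Cosine similarity: near 1.0 for similar words, near 0 for unrelated."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

vecs = load_glove("glove.6B.50d.txt")
print(cosine(vecs["flower"], vecs["love"]))   # typically higher...
print(cosine(vecs["insect"], vecs["love"]))   # ...than this
```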

In a previous project, programmers at Stanford used the internet to expose GloVe to some 840 billion words. Narayanan and colleagues then examined sets of words and their associations, looking at terms such as "scientist, programmer, engineer" and "teacher, nurse, librarian" and recording the gender associated with each.

Innocuous relationships, such as those between the insects and flowers and pleasant or unpleasant words, turned up as expected. But more worrisome connections, surrounding race and gender, were also discovered. The algorithm picked up association biases nearly identical to those seen in human subjects in previous studies.

For instance, male names corresponded more strongly with career-related words such as "salary" and "professional," while female names corresponded more strongly with family-related terms like "wedding" and "parents." When the researchers turned to race, they found that African-American names were associated with far more negative attributes than Caucasian ones.
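In the embedding setting, these associations are quantified not by response times but by cosine similarities between word vectors. A simplified sketch of that idea, in the spirit of the study, reusing the loading code and cosine helper from the earlier sketch; the word lists here are hypothetical stand-ins, not the paper's exact stimuli.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two word vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(word, attrs_a, attrs_b, vecs):
    """Association of one target word with attribute set A versus set B:
    mean cosine similarity to A minus mean cosine similarity to B.
    Positive means the word sits closer to A in the embedding space."""
    sim_a = np.mean([cosine(vecs[word], vecs[a]) for a in attrs_a])
    sim_b = np.mean([cosine(vecs[word], vecs[b]) for b in attrs_b])
    return sim_a - sim_b

# Hypothetical usage, with `vecs` loaded as in the earlier sketch:
# career = ["salary", "professional", "office", "business"]
# family = ["wedding", "parents", "home", "relatives"]
# association("john", career, family, vecs)   # expected positive
# association("amy", career, family, vecs)    # expected negative
```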

AI will have to be programmed to embrace equality. Getty Images.

AI programs are being used more and more to help humans with tasks like language translation, image categorization, and text search. Last fall, Google Translate made headlines when its skill level came very close to that of human translators. As AI gets more embedded in the human experience, so will these biases, if they aren't addressed.

Consider a translation from Turkish to English. Turkish uses a single, gender-neutral third-person pronoun, "o." Yet run "o bir doktor" and "o bir hemşire" through the translator, and they come back as "he is a doctor" and "she is a nurse." So what can be done to identify and clear such stereotypes from AI programs?
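Why does the translator make that choice? A toy model shows how simply picking the statistically most probable pronoun reproduces the stereotype. The co-occurrence counts below are invented for illustration, not taken from any real corpus.

```python
# Toy model of why a statistical translator picks gendered pronouns:
# choose whichever target-language pronoun the training corpus makes
# most probable alongside the noun.
corpus_counts = {
    ("he", "doctor"): 9200, ("she", "doctor"): 2100,
    ("he", "nurse"): 800,   ("she", "nurse"): 7400,
}

def translate_pronoun(noun):
    """Resolve Turkish gender-neutral 'o' to 'he' or 'she' by corpus frequency."""
    return max(["he", "she"], key=lambda p: corpus_counts[(p, noun)])

print(translate_pronoun("doctor"))  # "he": the statistically safer guess
print(translate_pronoun("nurse"))   # "she": and the stereotype is reproduced
```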

Explicit coding is required to instruct machine-learning systems to recognize and prevent cultural stereotypes. The researchers liken this to how parents and teachers help children recognize unfair practices and instill in them a sense of equality.
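What might such explicit coding look like? One technique from the broader debiasing literature (not from this study) is to project the gendered component out of word vectors. A minimal sketch, assuming vectors loaded as in the earlier examples; the anchor words are a crude illustrative choice.

```python
import numpy as np

def debias(vec, bias_direction):
    """Remove the component of a word vector that lies along a bias
    direction, so the word becomes neutral with respect to it."""
    b = bias_direction / np.linalg.norm(bias_direction)
    return vec - np.dot(vec, b) * b

# A crude bias direction: the difference between gendered anchor words.
# gender_dir = vecs["he"] - vecs["she"]
# vecs["programmer"] = debias(vecs["programmer"], gender_dir)
```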

Narayanan said:

The biases that we studied in the paper are easy to overlook when designers are creating systems. The biases and stereotypes in our society reflected in our language are complex and longstanding. Rather than trying to sanitize or eliminate them, we should treat biases as part of the language and establish an explicit way in machine learning of determining what we consider acceptable and unacceptable.


This is what aliens would 'hear' if they flew by Earth

A Mercury-bound spacecraft's noisy flyby of our home planet.

Image source: sdecoret on Shutterstock/ESA/Big Think
  • There is no sound in space, but if there were, this is what it might sound like as a spacecraft passes by Earth.
  • A spacecraft bound for Mercury recorded data while swinging around our planet, and that data was converted into sound.
  • Yes, in space no one can hear you scream, but this is still some chill stuff.

First off, let's be clear what we mean by "hear" here. (Here, here!)

Sound, as we know it, requires air. What our ears capture is actually oscillating waves of fluctuating air pressure. Cilia, tiny fibers in our ears, respond to these fluctuations by firing off corresponding nerve signals to our brains, which we perceive as tones at different pitches.

All of which is to say, sound requires air, and space is notoriously devoid of it. So, in terms of human-perceivable sound, it's silent out there. Nonetheless, there can be cyclical events in space, such as oscillating values in streams of captured data, that can be mapped to pitches and thus made audible.
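For a sense of how such a mapping works, here is a minimal Python sketch of sonification: it rescales a data stream to audible pitches and writes the tones out as a WAV file. This is an illustration only, not INAF's actual pipeline; the file name, pitch range, and note length are arbitrary choices.

```python
import numpy as np
import wave

def sonify(samples, out_path="flyby.wav", rate=44100, note_s=0.05):
    """Map each data sample to a pitch between 200 and 2000 Hz and
    string the resulting short tones together into a WAV file."""
    samples = np.asarray(samples, dtype=float)
    lo, hi = samples.min(), samples.max()
    freqs = 200 + (samples - lo) / (hi - lo + 1e-12) * 1800
    t = np.arange(int(rate * note_s)) / rate
    audio = np.concatenate([np.sin(2 * np.pi * f * t) for f in freqs])
    pcm = (audio * 32767).astype(np.int16)
    with wave.open(out_path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)
        w.setframerate(rate)
        w.writeframes(pcm.tobytes())

# Hypothetical input: a slow oscillation, like a flexing solar panel.
sonify(np.sin(np.linspace(0, 20 * np.pi, 400)))
```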

BepiColombo

Image source: European Space Agency

The European Space Agency's BepiColombo spacecraft took off from Kourou, French Guiana, on October 20, 2018, on its way to Mercury. To reduce its speed for the proper trajectory to Mercury, BepiColombo executed a "gravity-assist flyby," slinging itself around the Earth before leaving home. Over the course of its 34-minute flyby, its two data recorders captured five data sets that Italy's National Institute for Astrophysics (INAF) enhanced and converted into sound waves.

Into and out of Earth's shadow

In April, BepiColombo made its closest approach to Earth, with the recordings spanning distances from 256,393 kilometers (159,315 miles) down to 129,488 kilometers (80,460 miles) away. The first audio clip starts as BepiColombo begins to sneak into the Earth's shadow, facing away from the sun.

The data was captured by BepiColombo's Italian Spring Accelerometer (ISA) instrument. Says Carmelo Magnafico of the ISA team, "When the spacecraft enters the shadow and the force of the Sun disappears, we can hear a slight vibration. The solar panels, previously flexed by the Sun, then find a new balance. Upon exiting the shadow, we can hear the effect again."

In addition to making for some cool sounds, the phenomenon allowed the ISA team to confirm just how sensitive their instrument is. "This is an extraordinary situation," says Carmelo. "Since we started the cruise, we have only been in direct sunshine, so we did not have the possibility to check effectively whether our instrument is measuring the variations of the force of the sunlight."

When the craft arrives at Mercury, the ISA will be tasked with studying the planet's gravity.

Magnetosphere melody

The second clip is derived from data captured by BepiColombo's MPO-MAG magnetometer, a.k.a. MERMAG, as the craft traveled through Earth's magnetosphere, the area surrounding the planet that's shaped by its magnetic field.

BepiColombo eventually entered the hellish magnetosheath, the region battered by cosmic plasma from the sun, before passing through the relatively peaceful magnetopause, the boundary between the magnetosheath and the region dominated by Earth's own magnetic field.

MERMAG will map Mercury's magnetosphere, as well as the magnetic state of the planet's interior. As a secondary objective, it will assess the interaction of the solar wind with Mercury's magnetic field and the planet itself, analyzing the dynamics of the magnetosphere.

Recording session over, BepiColombo is now slipping silently through space, with its arrival at Mercury planned for 2025.

Learn the Netflix model of high-performing teams

Erin Meyer explains the keeper test and how it can make or break a team.

  • There are numerous strategies for building and maintaining a high-performing team, but unfortunately they are not plug-and-play. What works for some companies will not necessarily work for others. Erin Meyer, co-author of No Rules Rules: Netflix and the Culture of Reinvention, shares one alternative employed by one of the largest tech and media services companies in the world.
  • Instead of the 'Rank and Yank' method once used by GE, Meyer explains how Netflix managers use the 'keeper test' to determine if employees are crucial pieces of the larger team and are worth fighting to keep.
  • "An individual performance problem is a systemic problem that impacts the entire team," she says. This is a valuable lesson that could determine whether the team fails or whether an organization advances to the next level.