Intelligent People Deal with Stereotypes Differently Than Others, Study Finds
Everyone encounters stereotypes. But what you do afterward says something about you.
There is a lot of debate in the scientific community over what exactly intelligence is. We can talk about IQ, which is at least measurable. But beyond that, things get hazy. According to Harvard's Howard Gardner, there are multiple intelligences. At the most elemental level, one of the earliest and most comprehensive definitions is the ability to recognize patterns.
The human brain is actually the world’s most complex pattern recognition system. Previous research finds that those who are skillful in noticing patterns tend to earn more money, perform better at their jobs, and take better care of their health. In addition, advanced pattern detection may make one savvier in spotting opportunities and less likely to identify with authoritarian ideology.
“Pattern-matching” helps us to discern the feelings of others, make plans, learn a new language, and much more. The problem is, everything has a downside. Those who have excellent pattern recognition tend to use it to evaluate other humans, making this type prone to stereotyping.
Certain cognitive styles may be prone to social stereotypes. Flickr.
In a series of studies recently performed at New York University, researchers determined that those who were better at pattern-matching were also more likely to recognize social stereotypes and apply them. There was a saving grace: these types were also more willing to change their attitude or position in light of new information.
The lead author, David Lick, is a postdoctoral researcher in NYU’s Department of Psychology. Lick, along with Assistant Professors Jonathan Freeman and Adam Alter, joined forces to find out how pattern detectors operate when they come into contact with social stereotypes. The authors wrote, “Because pattern detection is a core component of human intelligence, people with superior cognitive abilities may be equipped to efficiently learn and use stereotypes about social groups.”
Researchers recruited 1,257 participants online through Amazon's Mechanical Turk. This is where participants agree to become subjects in social science experiments, in return for some form of compensation. Participants were put through six experiments in all. In the first two, they saw pictures of either blue or yellow aliens with varying dimensional differences, such as different face shapes, eye sizes, or ear sizes.
Certain types may be more likely to act on social stereotypes without being aware of it. Getty Images.
Recruits were told that blue aliens are “unfriendly.” They take part in rude behavior, such as spitting in another's face. Meanwhile, yellow aliens are “friendly.” They’d do things like buying a bouquet of flowers for another. In the third leg, respondents were made to take the Raven's Advanced Progressive Matrices, a pattern recognition assessment.
In the fourth segment, they underwent a memory test. Participants were told to match faces with behaviors. Among those the viewers encountered were some blue and yellow faces they'd never seen before. What the study showed was that pattern detectors were more likely to attribute blue faces to unfriendly behavior and yellow ones to the friendly kind. Researchers say this constitutes a learned behavior.
In the next test, respondents encountered human faces. They were all male and had either a wide or narrow nose. For one set of participants, the wide-nosed faces were given unfriendly traits and the thin-nosed, friendly ones. In the second group, the roles were reversed. The example given of unfriendly behavior was laughing at a homeless person, while the positive example was bringing a bouquet of flowers to a sick friend.
We encounter social stereotypes all the time. How we internalize them is being uncovered. Getty Images.
Next, participants were told that they'd take a break from the study. This was a ruse. They were asked if they'd like to play a game in which they would lend money to other participants. Players chose their avatar from a group of faces and played for 12 rounds, partnering each round with a different-looking avatar.
Participants didn’t know it, but they weren’t playing with real partners. Instead, researchers were selecting avatars to pair them up with, to see if they operated under any sort of bias. Respondents who did better with pattern recognition often gave less money to those avatars whose noses they had learned to stereotype. Yet, when they encountered information that bucked the bias, pattern-detectors altered the way they played the game.
In the last simulation, researchers looked at real-world stereotypes related to traditional male-oriented traits such as being authoritative and female-oriented ones such as being submissive. Pattern detectors who were shown repeated examples that women actually were more authoritative, showed a significant decrease in stereotyping behavior.
Lick, Freeman, and Alter say that certain advanced cognitive abilities may come with particular shortcomings. Besides this tendency toward stereotyping, pattern-matching types are also more prone to OCD-like symptoms and behaviors. Fortunately, the study also shows that this type may be the most amenable to unlearning bias.
Pattern detectors may be the most amenable to stereotyping. Getty Images.
David Lick responded to some questions I had about this study via email. He told me that he and colleagues can accurately predict how likely participants are to apply stereotypes if given the chance.
"In fact, social psychologists have done quite a bit of work on the topic using implicit measures similar to the ones described in our paper. There's also been some work on methods to reduce stereotyping, though the literature is considerably smaller. Irene Blair (2002) and Kerry Kawakami (2005, 2007) have done some of the best work on counter-stereotype training procedures, and have shown some success in reducing explicit/implicit stereotyping. However, a number of questions still remain about the long-term effects of such training, and I think we need to do more research before making broad claims about the efficacy of these programs."
I asked if someday, we could use these findings to develop a sort of bias screening tool. But Lick said he wasn’t comfortable with that for a couple of reasons:
(1) These findings are restricted to fictional groups, “which could differ from real-world stereotypes in a number of important ways.”
(2) It's not clear that such a tool would even be useful. "Although there is a statistically reliable association between pattern detection and stereotyping, that doesn't mean there's a 1:1 mapping or that every good pattern detector will stereotype in every situation," he said. Such a tool could only tell you whether someone was likely to stereotype, and acting on that prediction could cause serious problems, such as false accusations that damage relationships or reputations. "Even if the intentions were good, we'd need a lot more research with more diverse groups of people before beginning to think about a screening tool," Lick said.
Still, these findings are paving the way for future research, allowing us to come to understand different cognitive styles in a deeper and more comprehensive way. From there, we could develop an anti-stereotyping program complete with different tracks, each tailored to reach a particular cognitive style.
A Harvard professor's study discovers the worst year to be alive.
- Harvard professor Michael McCormick argues the worst year to be alive was 536 AD.
- The year was terrible due to cataclysmic eruptions that blocked out the sun and the spread of the plague.
- 536 ushered in the coldest decade in thousands of years and started a century of economic devastation.
The past year has been one of the worst in the lives of many people around the globe: a rampaging pandemic, dangerous political instability, weather catastrophes, and a profound change in lifestyle that most have never experienced or imagined.
But was it the worst year ever?
Nope. Not even close. In the eyes of the historian and archaeologist Michael McCormick, the absolute "worst year to be alive" was 536.
Why was 536 so bad? You could certainly argue that 1918, the last year of World War I, when the Spanish Flu killed up to 100 million people around the world, was a terrible year by all accounts. 1349 could also be considered for this morbid list as the year when the Black Death wiped out half of Europe, with up to 20 million dead from the plague. Most of the years of World War II could probably lay claim to the "worst year" title as well. But 536 was in a category of its own, argues the historian.
It all began with an eruption...
According to McCormick, Professor of Medieval History at Harvard University, 536 was the precursor year to one of the worst periods of human history. It featured a volcanic eruption early in the year that took place in Iceland, as established by a study of a Swiss glacier carried out by McCormick and the glaciologist Paul Mayewski from the Climate Change Institute of The University of Maine (UM) in Orono.
The ash spewed out by the volcano likely led to a fog that brought an 18-month-long stretch of daytime darkness across Europe, the Middle East, and portions of Asia. As the Byzantine historian Procopius wrote, "For the sun gave forth its light without brightness, like the moon, during the whole year." He also recounted that it looked like the sun was always in eclipse.
Cassiodorus, a Roman politician of that time, wrote that the sun had a "bluish" color, the moon had no luster, and "seasons seem to be all jumbled up together." What's even creepier, he described, "We marvel to see no shadows of our bodies at noon."
...that led to famine...
The dark days also brought a period of cold, with summer temperatures falling by 1.5 °C to 2.5 °C. This started the coldest decade in the past 2,300 years, reports Science, leading to the devastation of crops and worldwide hunger.
...and the fall of an empire
In 541, the bubonic plague added considerably to the world's misery. Spreading from the Roman port of Pelusium in Egypt, the so-called Plague of Justinian caused the deaths of up to one half of the population of the eastern Roman Empire. This, in turn, sped up its eventual collapse, writes McCormick.
Between the environmental cataclysms, with massive volcanic eruptions also in 540 and 547, and the devastation brought on by the plague, Europe was in for an economic downturn for nearly all of the next century, until 640 when silver mining gave it a boost.
Was that the worst time in history?
Of course, the absolute worst time in history depends on who you were and where you lived.
Native Americans can easily point to 1520, when smallpox, brought over by the Spanish, killed millions of indigenous people. By 1600, up to 90 percent of the population of the Americas (about 55 million people) was wiped out by various European pathogens.
Like all things, the grisly title of "worst year ever" comes down to historical perspective.
A simple trick allowed marine biologists to prove a long-held suspicion.
- It's long been suspected that sharks navigate the oceans using Earth's magnetic field.
- Sharks are, however, difficult to experiment with.
- Using magnetism, marine biologists figured out a clever way to fool sharks into thinking they're somewhere that they're not.
For some time, scientists have suspected that sharks belong among the growing number of animals known to navigate using Earth's magnetic field. Testing anything with a shark, though, requires some care.
The key was selecting the right candidate. Researcher Bryan Keller and his colleagues chose the bonnethead shark, Sphyrna tiburo, a small critter that summers at Turkey Point Shoal off the coast of the Florida State University Coastal and Marine Laboratory, with which Keller is affiliated.
Bonnetheads elsewhere have been known to complete 620-mile roundtrip migrations. As the lab's Dean Grubbs puts it, "That's not bad for a shark that is only two to three feet long. The question is how do they find their way back to that same estuary year after year." There's a report of a great white shark migrating between two locations, one in South Africa and another in Australia, year after year.
The research is published in Current Biology.
Keller and his team rounded up 20 local juvenile bonnetheads and transported them to a holding tank at the marine lab. For the tests, the researchers simulated three real-world magnetic fields. As the various magnetic fields were activated, the sharks' movements were captured by GoPro cameras and their average swimming orientations calculated by software.
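The study itself doesn't publish its analysis code, but averaging compass headings has a well-known wrinkle: a naive arithmetic mean fails near the 0°/360° wrap-around. A minimal sketch of how such average orientations are typically computed, using the standard circular mean (the function name and sample headings here are illustrative, not from the paper):

```python
import math

def circular_mean(headings_deg):
    """Circular mean of compass headings, in degrees.

    A plain arithmetic mean fails for angles (350 deg and 10 deg should
    average to 0 deg, not 180 deg), so each heading is treated as a unit
    vector and the vectors are averaged instead.
    """
    x = sum(math.cos(math.radians(h)) for h in headings_deg)
    y = sum(math.sin(math.radians(h)) for h in headings_deg)
    return math.degrees(math.atan2(y, x)) % 360

# Headings clustered around north (0 deg) straddle the 0/360 wrap-around;
# the circular mean still recovers "north".
print(round(circular_mean([350, 355, 5, 10])) % 360)  # → 0
```

The same vector-averaging trick also yields a concentration measure (the length of the mean vector), which is how orientation studies usually quantify how strongly a group of animals agrees on a direction.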
The first simulation, serving as a control, mimicked the magnetic field of the nearby shoal from which the sharks had been captured. When this field was activated, the sharks essentially acted like they were "home," just swimming around as they do.
A second field was the magnetic equivalent of a location 600 kilometers south of the lab within the Gulf of Mexico. When this field was activated, the sharks, apparently sensing that they were far south in the Gulf, began swimming northward toward the shoal.
The opposite occurred with a field standing in for a location in continental North America 600 km north of their home shoal — the sharks began swimming southward.
"For 50 years," says Keller, "scientists have hypothesized that sharks use the magnetic field as a navigational aid. This theory has been so popular because sharks, skates, and rays have been shown to be very sensitive to magnetic fields. They have also been trained to react to unique geomagnetic signatures, so we know they are capable of detecting and reacting to variation in the magnetic field."
His team's experiments confirm what's long been suspected, Keller says: "Sharks use map-like information from the geomagnetic field as a navigational aid. This ability is useful for navigation and possibly maintaining population structure."
A machine learning system lets visitors at a Kandinsky exhibition hear the artwork.
Have you ever heard colors?
As part of a new exhibition, the worlds of culture and technology collide, bringing sound to the colors of abstract art pioneer Wassily Kandinsky.
Kandinsky had synesthesia, where looking at colors and shapes causes some with the condition to hear associated sounds. With the help of machine learning, virtual visitors to the Sounds Like Kandinsky exhibition, a partnership project by Centre Pompidou in Paris and Google Arts & Culture, can have an aural experience of his art.
An eye for music
Kandinsky's synesthesia is thought to have heavily influenced his painting. Seeing yellow summoned up trumpets, evoking emotions like cheekiness; reds produced violins, portraying restlessness; and blues he associated with organs, representing heavenliness, according to the exhibition notes.
Virtual visitors are invited to take part in an experiment called Play a Kandinsky, which allows them to see and hear the world through the artist's eyes.
Kandinsky's synesthesia is thought to have heavily influenced his 1925 painting Yellow, Red, Blue. Image: Guillaume Piolle/Wikimedia Commons
In 1925, the artist's masterpiece, "Yellow, Red, Blue", broke new ground in the world of abstract art, guiding the viewer from left to right with shifting shapes and shades. Almost a century after it was painted, Google's interactive tool lets visitors click different parts of the artwork to journey through the artist's description of the colors, associated sounds and moods that inspired the work.
But Google's new toy is not the only tool developed to enhance the artistic experience.
Artist Neil Harbisson has developed an artificial way to emulate Kandinsky by turning colors into sounds. He has a rare form of color blindness and sees the world in greyscale. But a smart antenna attached to his head translates dominant colors into musical notes, creating a real-world soundtrack of what's in front of him. The invention could open up a new world for people who are color blind.
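Harbisson's actual "sonochromatic scale" is specific to his device, and Google hasn't published its mapping either, but the underlying idea of translating a dominant color into a pitch can be sketched in a few lines. Everything below is a hypothetical illustration: the 12-band hue-to-semitone mapping and the choice of A4 (440 Hz) as the starting pitch are assumptions, not the exhibition's or Harbisson's scheme.

```python
import colorsys

# Hypothetical 12-tone mapping: divide the hue circle into 12 bands,
# one per semitone of an equal-tempered octave starting at A4 (440 Hz).
NOTES = ["A", "A#", "B", "C", "C#", "D", "D#", "E", "F", "F#", "G", "G#"]

def color_to_note(r, g, b):
    """Map an RGB color (components 0-255) to a note name and frequency in Hz."""
    hue, _, _ = colorsys.rgb_to_hls(r / 255, g / 255, b / 255)
    semitone = round(hue * 12) % 12        # which of the 12 hue bands
    freq = 440.0 * 2 ** (semitone / 12)    # equal-tempered pitch
    return NOTES[semitone], round(freq, 1)

print(color_to_note(255, 0, 0))  # pure red (hue 0) → ('A', 440.0)
```

A real system like Harbisson's antenna would run this kind of mapping continuously on the dominant color seen by a camera, synthesizing the resulting frequency as audio.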