Everyone encounters stereotypes. But what you do afterward says something about you.
There is a lot of debate in the scientific community over what exactly intelligence is. We can talk about IQ; that’s one measure we can actually quantify. But beyond that, things get hazy. According to Harvard’s Howard Gardner, there are multiple intelligences. At the most elemental level, one of the earliest and most enduring definitions of intelligence is the ability to recognize patterns.
The human brain is actually the world’s most complex pattern recognition system. Previous research finds that those who are skillful in noticing patterns tend to earn more money, perform better at their jobs, and take better care of their health. In addition, advanced pattern detection may make one savvier in spotting opportunities and less likely to identify with authoritarian ideology.
“Pattern-matching” helps us to discern the feelings of others, make plans, learn a new language, and much more. The problem is, everything has a downside. Those who have excellent pattern recognition tend to use it to evaluate other humans, making this type prone to stereotyping.
Certain cognitive styles may be prone to social stereotypes. Flickr.
In a series of studies recently performed at New York University, researchers determined that those who were better at pattern-matching were also more likely to recognize social stereotypes and apply them. There was a saving grace: these types were also more willing to change their attitude or position in light of new information.
The lead author, David Lick, is a postdoctoral researcher in NYU’s Department of Psychology. Lick, along with Assistant Professors Jonathan Freeman and Adam Alter, joined forces to find out how pattern detectors operate when they come into contact with social stereotypes. The authors wrote, “Because pattern detection is a core component of human intelligence, people with superior cognitive abilities may be equipped to efficiently learn and use stereotypes about social groups.”
Researchers recruited 1,257 participants online through Amazon’s Mechanical Turk, a platform where people agree to become subjects in social science experiments in return for some form of compensation. Participants were put through six experiments in all. In the first two, they saw pictures of either blue or yellow aliens with varying features, such as different face shapes, eye sizes, or ear sizes.
Certain types may be more likely to act on social stereotypes without being aware of it. Getty Images.
Recruits were told that blue aliens are “unfriendly”: they engage in rude behavior, such as spitting in another’s face. Yellow aliens, meanwhile, are “friendly”: they do things like buying a bouquet of flowers for another. In the third leg, respondents took Raven’s Advanced Progressive Matrices, a pattern recognition assessment.
In the fourth segment, they underwent a memory test. Participants were told to match faces with behaviors. Among the faces they encountered were some blue and yellow ones they’d never seen before. The study showed that pattern detectors were more likely to attribute blue faces to unfriendly behavior and yellow ones to the friendly kind. Researchers say this constitutes learned behavior.
In the next test, respondents encountered human faces. They were all male and had either a wide or narrow nose. For one set of participants, the wide-nosed faces were given unfriendly traits and the thin-nosed, friendly ones. In the second group, the roles were reversed. The example given of unfriendly behavior was laughing at a homeless person, while the positive example was bringing a bouquet of flowers to a sick friend.
We encounter social stereotypes all the time. How we internalize them is being uncovered. Getty Images.
Next, participants were told that they’d take a break from the study, which was misleading. They were asked if they’d like to play a game in which they’d lend money to other participants. Players chose their avatar from a group of faces and played 12 rounds, each time partnered with a different-looking avatar.
Participants didn’t know it, but they weren’t playing with real partners. Instead, researchers were selecting avatars to pair them up with, to see if they operated under any sort of bias. Respondents who did better with pattern recognition often gave less money to those avatars whose noses they had learned to stereotype. Yet, when they encountered information that bucked the bias, pattern-detectors altered the way they played the game.
In the last simulation, researchers looked at real-world stereotypes: traditionally male-oriented traits such as being authoritative and female-oriented ones such as being submissive. Pattern detectors who were shown repeated examples of women actually being more authoritative showed a significant decrease in stereotyping behavior.
Lick, Freeman, and Alter say that specific advanced cognitive abilities may come with certain shortcomings. Besides this bias toward stereotyping, pattern-matching types are also more prone to OCD-like symptoms and behavior. Fortunately, the study also shows that this type may be the most amenable to unlearning bias.
Pattern detectors may be the most prone to stereotyping. Getty Images.
David Lick responded to some questions I had about this study via email. He told me that he and colleagues can accurately predict how likely participants are to apply stereotypes if given the chance.
In fact, social psychologists have done quite a bit of work on the topic using implicit measures similar to the ones described in our paper. There's also been some work on methods to reduce stereotyping, though the literature is considerably smaller. Irene Blair (2002) and Kerry Kawakami (2005, 2007) have done some of the best work on counter-stereotype training procedures, and have shown some success in reducing explicit / implicit stereotyping. However, a number of questions still remain about the long-term effects of such training, and I think we need to do more research before making broad claims about the efficacy of these programs.
I asked if someday, we could use these findings to develop a sort of bias screening tool. But Lick said he wasn’t comfortable with that for a couple of reasons:
(1) These findings are restricted to fictional groups, “which could differ from real-world stereotypes in a number of important ways.”
(2) It's not clear that such a tool would even be useful. “Although there is a statistically reliable association between pattern detection and stereotyping, that doesn't mean there's a 1:1 mapping or that every good pattern detector will stereotype in every situation,” he said. Such a tool would only tell you if someone was likely to stereotype or not, which could lead to serious problems such as damaged interpersonal relationships or reputations by causing false accusations. “Even if the intentions were good, we'd need a lot more research with more diverse groups of people before beginning to think about a screening tool,” Lick said.
Still, these findings are paving the way for future research, allowing us to come to understand different cognitive styles in a deeper and more comprehensive way. From there, we could develop an anti-stereotyping program complete with different tracks, each tailored to reach a particular cognitive style.
Artificial intelligence (AI) is not nearly as smart as we want it to be. Because we are not nearly as smart as we want to be.
That’s the biggest takeaway from a new experiment out of MIT’s Computer Science and Artificial Intelligence Lab. A team of researchers sought to improve their AI’s machine learning skills by having it watch 2 million videos and predict what would happen next. “Teaching AI to anticipate the future can help it comprehend the present,” New Scientist reports. That ability to anticipate the future gives AI much-needed context for everyday tasks, as researcher Carl Vondrick told New Scientist: “if you’re about to sit down, you don’t want a robot to pull the chair out from underneath you.” It will also “allow machines to not take actions that might hurt people or help people not hurt themselves,” according to VICE.
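MIT’s system is a large neural network trained on those 2 million videos. Purely as a toy illustration of the underlying idea (the functions below are invented for this sketch, not MIT’s code), next-frame prediction amounts to: estimate how the scene is changing, extrapolate, and score the guess against what actually happens.

```python
import numpy as np

# Toy next-frame predictor (not MIT's model): estimate how far the bright
# spot moved between the last two frames, then assume it keeps moving.
def predict_next_frame(prev_frame, curr_frame):
    shift = int(np.argmax(curr_frame) - np.argmax(prev_frame))
    return np.roll(curr_frame, shift)

# Score a guess against reality with mean squared error; a learned model
# would minimize a loss like this over millions of video clips.
def prediction_error(predicted, actual):
    return float(np.mean((predicted - actual) ** 2))

# A 1-D "object" moving one pixel per frame across a 5-pixel strip.
f0 = np.array([1.0, 0.0, 0.0, 0.0, 0.0])
f1 = np.array([0.0, 1.0, 0.0, 0.0, 0.0])
f2 = np.array([0.0, 0.0, 1.0, 0.0, 0.0])

guess = predict_next_frame(f0, f1)
print(prediction_error(guess, f2))  # 0.0: the motion model anticipated the scene
```

The point of the sketch is that a good guess about the future requires an internal model of how the present scene behaves, which is exactly the context the MIT researchers wanted their AI to acquire.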
This technique is an important step forward for making smarter AI, but it’s not the only way artificial intelligence learns. As we’ve said before, the goal of creating artificial intelligence is to create a system that converses, learns, and solves problems on its own. AI first used algorithms to solve puzzles and make rational decisions. Now, it uses deep learning techniques like the one from this MIT experiment to identify and analyze patterns so it can predict real-life outcomes. Yet for all that learning, AI is only as smart as a “lobotomized, mentally challenged cockroach,” as Michio Kaku explained to us.
As smart as artificial intelligence is becoming, it learns in different ways than people. Generally speaking, the human brain has many different ways to learn the same information, but “the most effective strategies for learning depend on what kind of learning is desired and toward what ends,” according to research out of Stanford University. In short, some ways of learning are better at helping you learn specific kinds of information than others. We’ve got a whole primer on how to learn here, but here’s a quick summary:
Rather than memorize facts, create explanations for why those facts are true; your brain retains information best that way. That gives you context.
Rather than struggle to understand an abstract concept, articulate why it’s difficult. That can help you identify where you’re stuck and help you create a solution.
Rather than cram for a test, learn a little bit of the material over a longer period of time. That helps transfer the information into long-term memory storage.
AI does not learn in any of these ways -- yet. Until it can, we have nothing to fear from it. Except more horrifying short videos.
They may look odd, but it’s all part of Google’s plan to solve a huge issue in machine learning: recognizing objects in images.
When Google asked its neural network to dream, the machine began generating some pretty wild images.
To be clear, Google’s software engineers didn’t literally ask a computer to dream. They asked its neural network to alter an original photo they fed into it, applying changes layer by layer. This was all part of their Deep Dream program.
The purpose was to make it better at finding patterns, which computers are none too good at. So, engineers started by “teaching” the neural network to recognize certain objects by giving it 1.2 million images, complete with object classifications the computer could understand.
These classifications allowed Google’s AI to learn to detect the different qualities of certain objects in an image, like a dog or a fork. But Google’s engineers wanted to go one step further. That’s where Deep Dream comes in: it allowed the neural network to add those hallucinogenic qualities to images.
Google wanted its neural network to become so good at detection that it could pick out objects an image doesn’t actually contain (think of seeing the outline of a dog in the clouds). Deep Dream gave the computer the ability to change the pixels of the images themselves, which in turn allowed Google’s AI to exaggerate objects the images didn’t necessarily contain. So an image might show a foot, but when the network examined a few pixels of that image, it might see the outline of what looked like a dog’s nose.
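The mechanic behind this is gradient ascent on the input image. Here is a toy-scale sketch of that loop (not Google’s code: real Deep Dream backpropagates through a deep convolutional network, and the names `layer_activation` and `dream_step` are invented for illustration):

```python
import numpy as np

# Toy stand-in for one network layer: its activation measures how strongly
# the image matches a learned feature pattern (a simple dot product here;
# a real network computes this through many convolutional layers).
def layer_activation(img, feature):
    return float(np.sum(img * feature))

# One Deep Dream-style step: nudge the image's pixels in the direction that
# increases the layer's activation. For a dot-product activation, the
# gradient with respect to the image is just the feature itself; real
# Deep Dream obtains this gradient by backpropagation.
def dream_step(img, feature, lr=0.1):
    grad = feature
    return img + lr * grad

feature = np.array([[1.0, -1.0],
                    [-1.0, 1.0]])   # a tiny 2x2 "pattern detector"
img = np.zeros((2, 2))              # start from a blank image

for _ in range(20):
    img = dream_step(img, feature)

# Whatever the layer faintly "saw" in the image has now been amplified:
print(layer_activation(img, feature))
```

After the loop, the image itself has come to resemble the feature the layer responds to, which is how a faint, cloud-like hint of a dog’s nose gets exaggerated into the hallucinogenic images below.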
So, when researchers asked the neural network what other objects it could see in an image of a mountain, tree, or plant, it came up with these interpretations:
(Photo Credit: Michael Tyka/Google)
“The techniques presented here help us understand and visualize how neural networks are able to carry out difficult classification tasks, improve network architecture, and check what the network has learned during training,” software engineers Alexander Mordvintsev and Christopher Olah, and intern Mike Tyka wrote in a post about Deep Dream. “It also makes us wonder whether neural networks could become a tool for artists—a new way to remix visual concepts—or perhaps even shed a little light on the roots of the creative process in general.”
Just for fun, Google has opened up the tool to the public and you can generate your own Deep Dream art here: deepdreamgenerator.com
We're only seeing a fraction of the world around us. Amy Herman teaches the art of perception; if you're game to test your visual intelligence, take one of her perception challenges here.
Sometimes it’s not what is there, it’s what isn’t. Let’s rewind.
Amy Herman is an art historian, lawyer, and the author of Visual Intelligence, a book that explains how altering and sharpening your perspective can change your life, both professionally and personally. Herman created, designed and conducts all sessions of ‘The Art of Perception’, an education program that was initially used to help medical students improve their observation skills. Sometimes in diagnostics, you’re not looking for what you can see, but what you can’t – this is called the 'pertinent negative'. The same goes for investigations, and so the program was adapted for the New York City Police Department.
Try one of Herman’s perception tests, which she runs you through in the video above. Better perception and communication – two key takeaways of Herman’s visual intelligence lessons – can save money, reputations and lives in business, and can also be an incredible asset in our personal lives when it comes to interpreting situations, noticing important details, and having open and effective communication.
The example above, which uses René Magritte’s artwork, is an incredible reminder of how much detail is around us that we don’t register and how we can be more conscious in our perception.
The Baader-Meinhof phenomenon backs this up. Baader-Meinhof is a cognitive bias also known as frequency illusion, where once you see or learn something – an unfamiliar word or new visual symbol for example – that thing keeps appearing over and over everywhere you go, where before it was never there.
But it was always there; you just never saw it. This isn’t some mystical occurrence or a series of "freaky" coincidences; we fail to notice thousands of pieces of information every day, and it’s only when our attention is deliberately drawn to something new that it registers, and our brains – incredible pattern-recognition machines that they are – then identify and favor that symbol or word when it is anywhere in our proximity.
There is more to discover in the world than any of us could ever perceive, but by enhancing your visual intelligence and perception skills, you can certainly make a more sizable dent.
Amy Herman is the author of Visual Intelligence: Sharpen Your Perception, Change Your Life.