The Virtue of Stereotypes

Question: Can stereotypes be helpful?

Sam Gosling: Yeah. Well, we have to use stereotypes because we don't have the time to treat every instance as though it's really a new event. It's much more efficient to treat it as part of a broader class of events where we already have information. I've never sat on this chair before, but I didn't test it. How did I know it would hold me and wouldn't just melt? Who knows? There are all kinds of things I didn't know about this particular chair, but I made an evil, oppressive stereotype (I use that phrase deliberately) and I just treated this chair like any other chair. And it turns out that was a pretty good stereotype. Thinking that this chair would be comfortable and that I could sit on it served as a pretty good guide to my behavior. And not only did it do that, it took me a microsecond; I didn't have to go through and check the chair beforehand. So where stereotypes have a kernel of truth to them, they are useful. Of course, where we get into trouble is when we're using stereotypes that don't have a kernel of truth to them, and that's often very hard to know. Some of our stereotypes turn out to have some validity, some don't, and we can't always know which is which. And even if a stereotype does have a kernel of truth, it may not have it for the right reasons; all kinds of horrible processes could have led to its being valid. So I'm not denying that some of these stereotypes, even if they're true, are indeed oppressive. Some really are. But from an information-processing stance, it's nonetheless useful to use them.

The other thing I want to say about stereotypes is that it makes sense to use them, but only as a first guess. It's also important to update that guess with real information as you get it. That's what we find when people are looking at others' spaces: they do use stereotypes. They make generalizations; they say, oh, okay, this space clearly belongs to a female, therefore I'm going to activate some of my stereotypes based on differences between males and females. That's a good place to start, but what's really important is that you are quick to abandon it in the face of other information. The danger of stereotypes is that we use them and then stick too closely to them, unwilling to revise our impressions afterwards on the basis of new information. The other danger is that we assume those stereotypes exist for some good reason, when there may be no good reason for their existence.

Recorded on November 6, 2009
Interviewed by Austin Allen

"Stereotyping" has become a dirty word, but as psychologist Sam Gosling explains, we all do it—and need to.




Yale scientists restore brain function to 32 clinically dead pigs

Researchers hope the technology will further our understanding of the brain, but lawmakers may not be ready for the ethical challenges.

Still from John Stephenson's 1999 rendition of Animal Farm.
  • Researchers at the Yale School of Medicine successfully restored some functions to pig brains that had been dead for hours.
  • They hope the technology will advance our understanding of the brain, potentially developing new treatments for debilitating diseases and disorders.
  • The research raises many ethical questions and puts to the test our current understanding of death.

The image of an undead brain coming back to life is the stuff of science fiction. Not just any science fiction, but specifically B-grade sci-fi. What instantly springs to mind are the black-and-white horrors of films like Fiend Without a Face. Bad acting. Plastic monstrosities. Visible strings. And a spinal cord that, for some reason, is also a tentacle?

But like any good science fiction, it was only a matter of time before some version of it seeped into our reality. This week, Nature published the findings of researchers who managed to restore function to pigs' brains that were clinically dead. At least, dead as we once understood it.

What's dead may never die, it seems

The researchers did not hail from House Greyjoy — "What is dead may never die" — but came largely from the Yale School of Medicine. They connected 32 pig brains to a system called BrainEx. BrainEx is an artificial perfusion system — that is, a system that takes over the functions normally regulated by the organ. Think a dialysis machine for the mind. The pigs had been killed four hours earlier at a U.S. Department of Agriculture slaughterhouse; their brains completely removed from the skulls.

BrainEx pumped an experimental solution into the brain that essentially mimicked blood flow. It brought oxygen and nutrients to the tissues, giving brain cells the resources to resume many normal functions. The cells began consuming and metabolizing sugars. The brains' immune systems kicked in. Neuron samples could carry an electrical signal. Some brain cells even responded to drugs.

The researchers have managed to keep some brains alive for up to 36 hours, and they do not yet know whether BrainEx could have sustained the brains longer. "It is conceivable we are just preventing the inevitable, and the brain won't be able to recover," said Nenad Sestan, the Yale neuroscientist who led the research.

As a control, other brains received either a fake solution or no solution at all. None showed revived brain activity, and all deteriorated as dead brains normally do.

The researchers hope the technology can enhance our ability to study the brain and its cellular functions. One of the main avenues for such studies would be brain disorders and diseases, which could point the way to new treatments for brain injuries and neurodegenerative conditions such as Alzheimer's and Huntington's.

"This is an extraordinary and very promising breakthrough for neuroscience. It immediately offers a much better model for studying the human brain, which is extraordinarily important, given the vast amount of human suffering from diseases of the mind [and] brain," Nita Farahany, a bioethicist at the Duke University School of Law who wrote a commentary accompanying the study, told National Geographic.

An ethical gray matter

Before anyone gets an Island of Dr. Moreau vibe, it's worth noting that the brains did not approach neural activity anywhere near consciousness.

The BrainEx solution contained chemicals that prevented neurons from firing. To be extra cautious, the researchers also monitored the brains for any such activity and were prepared to administer an anesthetic if they saw signs of consciousness.

Even so, the research signals a massive debate to come regarding medical ethics and our definition of death.

Most countries define death, clinically speaking, as the irreversible loss of brain or circulatory function. This definition was already at odds with some folk- and value-centric understandings, but where do we go if it becomes possible to reverse clinical death with artificial perfusion?

"This is wild," Jonathan Moreno, a bioethicist at the University of Pennsylvania, told the New York Times. "If ever there was an issue that merited big public deliberation on the ethics of science and medicine, this is one."

One possible consequence involves organ donations. Some European countries require emergency responders to use a process that preserves organs when they cannot resuscitate a person. They continue to pump blood throughout the body, but use a "thoracic aortic occlusion balloon" to prevent that blood from reaching the brain.

The system is already controversial because it raises concerns about what caused the patient's death. But what happens when brain death becomes readily reversible? Stuart Younger, a bioethicist at Case Western Reserve University, told Nature that if BrainEx were to become widely available, it could shrink the pool of eligible donors.

"There's a potential conflict here between the interests of potential donors — who might not even be donors — and people who are waiting for organs," he said.

It will be a while before such experiments go anywhere near human subjects. A more immediate ethical question relates to how such experiments harm animal subjects.

Ethical review boards evaluate research protocols and can reject any that cause undue pain, suffering, or distress. Since dead animals feel no pain and suffer no trauma, they are typically approved as subjects. But how should such boards judge the suffering of a "cellularly active" brain? The distress of a partially alive brain?

The dilemma is unprecedented.

Setting new boundaries

Another science fiction story that comes to mind here is, of course, Frankenstein. As Farahany told National Geographic: "It is definitely has [sic] a good science-fiction element to it, and it is restoring cellular function where we previously thought impossible. But to have Frankenstein, you need some degree of consciousness, some 'there' there. [The researchers] did not recover any form of consciousness in this study, and it is still unclear if we ever could. But we are one step closer to that possibility."

She's right. The researchers undertook their work for the betterment of humanity, and we may one day reap unimaginable medical benefits from it. The ethical questions, however, remain as unsettling as the stories they call to mind.