Study: Combative People "Remember" Hostile Acts That They Didn't Commit
What kind of people confess to crimes they didn't commit? You might imagine they're sleepless and terrified, with cops telling them there's already proof of their guilt. And you'd be right (in experiments, people told there's video of their "crime" confess at very high rates). But susceptibility to pressure varies from person to person. People who are young, depressed or exceptionally respectful of authority have been found to be more prone to false confessions. Now, this study proposes a new high-risk group: Aggressive people, it found, can be convinced that they committed hostile acts that never actually happened.
Though courts continue to presume that people know what the heck they're talking about when they describe their experiences, researchers have been showing for some time that false memories are easy to plant. This leads to eyewitnesses who are certain they saw what they could not have seen, and victims passionately certain that they went through ordeals that didn't happen. In this new study, published last summer in the journal Acta Psychologica, Cara Laney and Melanie K.T. Takarangi note that most of the false memories that researchers have implanted have cast people as witnesses or victims. (For example, in the lab people have come to believe falsely that as children they'd spent the night in the hospital or witnessed a bad fight between their parents.) Yet the authors suspected that aggressive people, because of the way they see the world, might be exceptionally prone to false memories in which rather than being victims or witnesses, they were themselves the bad guys.
We all go through life having to make decisions about people and situations without a lot of information—is that a mugger or a guy looking for his house keys? Is this a good place to eat or the Casa Ptomaine? We don't have hours to research each case, so we use the few visible facts we have to predict other, related facts. It's our personal version of Big Data: Just as Netflix cast Kevin Spacey in House of Cards because it knew that viewers who liked the original BBC House of Cards also liked Spacey, so you conclude that the guy with the fedora and the gas-station vintage workshirt is a hipster because you "know" guys who make those fashion choices are guys who like artisanal beers, indie movies and loft bands. The difference is that Netflix is crunching vast amounts of information about behavior to arrive at its net of linked traits, whereas you are kind of winging it with the few data points that you've gathered through personal experience. Nonetheless, the principle is the same—both you and the corporation are treating each fact you encounter as a link in a vast net of associated facts. Pull this one up, these others must follow.
These interwoven chains of association have been called "schemas," "scripts" and "frames," depending on the discipline that used them. All are terms for the knitting together of traits in a way that lets me use some facts to infer other facts—the algorithm that lets me see one slice of cake with a candle in it and two balloons and immediately expect to hear "Happy Birthday" and see presents and a happy if slightly embarrassed person at a table. But that's me. You might differ, because schemas aren't standardized. Everyone builds his or her own out of personal experience. This means, of course, that the same physical facts call up different schemas in different people. For example, I have a set of associations and expectations when a police officer asks me to hold on for a moment and answer a question. If I had been "stopped and frisked" repeatedly, my schema for this experience would be different, and a lot more negative.
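To make the idea concrete, here is a minimal sketch (my own illustration, not anything from the study) of a schema as a net of associated facts: observing a few cues pulls up the best-matching schema, which then "predicts" details that were never actually seen—the same mechanism that lets a few extra words slip into a recalled list.

```python
# Illustrative only: schemas modeled as sets of associated cues.
SCHEMAS = {
    "birthday": {"cake", "candle", "balloons", "presents", "singing"},
    "police_stop": {"officer", "question", "wait"},
}

def infer_schema(observed):
    """Return the schema whose cue set best overlaps the observations."""
    return max(SCHEMAS, key=lambda name: len(SCHEMAS[name] & observed))

def expected_but_unseen(observed):
    """Details the matched schema predicts beyond what was actually seen."""
    return SCHEMAS[infer_schema(observed)] - observed

cues = {"cake", "candle", "balloons"}
print(infer_schema(cues))                  # birthday
print(sorted(expected_but_unseen(cues)))   # ['presents', 'singing']
```

The point of the toy example is the last line: the schema fills in "presents" and "singing" even though they were never observed, which is the benign everyday version of the memory distortions the article describes.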
Aggressive people, write Laney and Takarangi, have aggressive schemas. The net of interrelated concepts that they call up often involves expectations of a fight. For example, in an earlier study Takarangi had shown volunteers some words whose interpretation was ambiguous ("cut," "whip" and "mug" could be about violence, but they could also refer to a quiet morning in the kitchen). When recalling those words, people who had scored high on a measure of aggression tended to add other words that they had not seen—like "hit" and "stab"—which removed the ambiguity and made the list, as they recalled it, even more combative. (Those results hark back to the first experiments that established that people use schemas—they involved British students recalling a Native American folk tale, and adding all manner of "American Indian" details that were not in the original.)
A schema that interprets people and experiences as aggressive can betray its owner, the authors argue. It inclines him to expect aggression not only in others but in himself, and that makes him vulnerable to believing he was aggressive in the past. (I'm using the male pronoun here because men commit more violent crimes than women do, but there were women in this study, and no one is claiming that aggression is confined to one gender.)
In their study Laney and Takarangi worked with 187 undergraduates at the University of Leicester who had filled out a variety of online questionnaires, one of which measured their views of their own aggressiveness. Another questionnaire asked them to say yes or no to 37 statements about their teen-age years (for example, "you cheated on an important test" or "you got a tattoo or piercing"). They were also asked to rate how emotional such an event would have been for them (whether or not it had happened) and, for the yes answers, how confident they were that their memories were accurate.
In the lab a week later, each student received a summary of results derived from the questionnaires—a personality profile and a statement about his or her "behavioral style." Then 101 students in the group read a statement that said "the adolescent experience that has been most influential in shaping this behavioral style is:" followed by one of three items from the list they had seen earlier: (1) "You were punched and got a black eye"; (2) "You punched someone and gave them a black eye"; (3) "You spread malicious gossip about someone." That gave the researchers three groups of students, each group reacting to a different statement.
The key here is that none of these statements about the past was true. The volunteers who really had had those experiences had been filtered out, along with a control group of students who weren't served any untruths. Each student who was given a false statement then answered some more questions, including one about how confident they were about the nonexistent event they had just read about.
Out of those 101 students, 40 assimilated the false memories: five thought they "were punched," 17 thought they had "punched someone" and 18 thought they had "spread malicious gossip." (As the researchers note, false memories of hostile acts were twice as successful as false memories of victimhood.)
Those who accepted a false memory of being aggressive had twice as many true memories of aggressive acts (like "you were a school bully" or "you carried a weapon") as did those who didn't take the bait. Moreover, statistical analysis showed that the volunteers who accepted the false memory scored higher on the measure of aggression than did those who resisted. In fact, write the researchers, "the greater their propensity towards aggression, the more likely that a person will develop an aggressive false memory." Told they had committed hostile acts, the more aggressive people in the experiment simply accepted that they had, fitting that "memory" into their picture of themselves.
The problems of false memories, inaccurate eyewitnesses and untrue confessions have gotten some acknowledgement from the American legal system. Perhaps that legal system should also be worrying about a different and particularly insidious form of false memory: The one in which someone cops to violent acts not because he committed them but because he's seen—and sees himself—as the type of guy who does that sort of thing.
Laney C, & Takarangi MK (2013). False memories for aggressive acts. Acta psychologica, 143 (2), 227-34 PMID: 23639921
Illustration: Detail from A Procession of Flagellants by Goya, via Wikimedia
Researchers hope the technology will further our understanding of the brain, but lawmakers may not be ready for the ethical challenges.
- Researchers at the Yale School of Medicine successfully restored some functions to pig brains that had been dead for hours.
- They hope the technology will advance our understanding of the brain, potentially developing new treatments for debilitating diseases and disorders.
- The research raises many ethical questions and puts to the test our current understanding of death.
The image of an undead brain coming back to life is the stuff of science fiction. Not just any science fiction, specifically B-grade sci-fi. What instantly springs to mind is the black-and-white horrors of films like Fiend Without a Face. Bad acting. Plastic monstrosities. Visible strings. And a spinal cord that, for some reason, is also a tentacle?
But like any good science fiction, it's only a matter of time before some manner of it seeps into our reality. This week's Nature published the findings of researchers who managed to restore function to pigs' brains that were clinically dead. At least, what we once thought of as dead.
What's dead may never die, it seems
The researchers did not hail from House Greyjoy — "What is dead may never die" — but came largely from the Yale School of Medicine. They connected 32 pig brains to a system called BrainEx. BrainEx is an artificial perfusion system — that is, a system that takes over the functions normally regulated by the organ. The pigs had been killed four hours earlier at a U.S. Department of Agriculture slaughterhouse, and their brains had been completely removed from their skulls.
BrainEx pumped an experimental solution into the brain that essentially mimicked blood flow. It brought oxygen and nutrients to the tissues, giving brain cells the resources to resume many normal functions. The cells began consuming and metabolizing sugars. The brains' immune systems kicked in. Neuron samples could carry an electrical signal. Some brain cells even responded to drugs.
The researchers have managed to keep some brains alive for up to 36 hours, and currently do not know whether BrainEx could have sustained the brains longer. "It is conceivable we are just preventing the inevitable, and the brain won't be able to recover," said Nenad Sestan, Yale neuroscientist and the lead researcher.
As a control, other brains received either a fake solution or no solution at all. None showed revived brain activity, and all deteriorated as normal.
The researchers hope the technology can enhance our ability to study the brain and its cellular functions. One of the main avenues of such studies would be brain disorders and diseases. This could point the way to developing new treatments for the likes of brain injuries, Alzheimer's, Huntington's, and other neurodegenerative conditions.
"This is an extraordinary and very promising breakthrough for neuroscience. It immediately offers a much better model for studying the human brain, which is extraordinarily important, given the vast amount of human suffering from diseases of the mind [and] brain," Nita Farahany, the bioethicists at the Duke University School of Law who wrote the study's commentary, told National Geographic.
An ethical gray matter
Before anyone gets an Island of Dr. Moreau vibe, it's worth noting that the brains' neural activity did not come anywhere near consciousness.
The BrainEx solution contained chemicals that prevented neurons from firing. To be extra cautious, the researchers also monitored the brains for any such activity and were prepared to administer an anesthetic should they have seen signs of consciousness.
Even so, the research signals a massive debate to come regarding medical ethics and our definition of death.
Most countries define death, clinically speaking, as the irreversible loss of brain or circulatory function. This definition was already at odds with some folk- and value-centric understandings, but where do we go if it becomes possible to reverse clinical death with artificial perfusion?
"This is wild," Jonathan Moreno, a bioethicist at the University of Pennsylvania, told the New York Times. "If ever there was an issue that merited big public deliberation on the ethics of science and medicine, this is one."
One possible consequence involves organ donations. Some European countries require emergency responders to use a process that preserves organs when they cannot resuscitate a person. They continue to pump blood throughout the body, but use a "thoracic aortic occlusion balloon" to prevent that blood from reaching the brain.
The system is already controversial because it raises concerns about what caused the patient's death. But what happens when brain death becomes readily reversible? Stuart Younger, a bioethicist at Case Western Reserve University, told Nature that if BrainEx were to become widely available, it could shrink the pool of eligible donors.
"There's a potential conflict here between the interests of potential donors — who might not even be donors — and people who are waiting for organs," he said.
It will be a while before such experiments go anywhere near human subjects. A more immediate ethical question relates to how such experiments harm animal subjects.
Ethical review boards evaluate research protocols and can reject any that cause undue pain, suffering, or distress. Since dead animals feel no pain and suffer no trauma, they are typically approved as subjects. But how do such boards make a judgment regarding the suffering of a "cellularly active" brain? The distress of a partially alive brain?
The dilemma is unprecedented.
Setting new boundaries
Another science fiction story that comes to mind when discussing this story is, of course, Frankenstein. As Farahany told National Geographic: "It is definitely has [sic] a good science-fiction element to it, and it is restoring cellular function where we previously thought impossible. But to have Frankenstein, you need some degree of consciousness, some 'there' there. [The researchers] did not recover any form of consciousness in this study, and it is still unclear if we ever could. But we are one step closer to that possibility."
She's right. The researchers undertook their research for the betterment of humanity, and we may one day reap some unimaginable medical benefits from it. The ethical questions, however, remain as unsettling as the stories they remind us of.