Le problème du chariot. El problema del carro. The trolley problem. A moral choice made in another language is NOT the same.
A few thoughts on your lazy brain. But just a few because, well, you know, the brain likes things nice and easy.
The brain normally operates on what psychologists have come to call System One, a principally subconscious, faster, and more instinctive way of processing information and figuring things out. System One relies mostly on feelings and a toolkit of hidden mental shortcuts to help us sense our way through the choices we make, rather than thinking about each one methodically and consciously. System Two refers to the cognitive processes that kick in when we stop and think and purposefully pay attention.
But that takes time, and calories, and since we usually don’t have all the time we’d need to carefully think things through, and since the brain is pound-for-pound the most calorically hungry part of the body (it uses 20-25% of the calories we burn in an average day), and since sometimes survival requires really fast decisions and as the brain was evolving we couldn’t be sure of our next meal, human cognition has developed to mostly run on the faster, easier, and more energy efficient System One.
(If you want to learn more, this is all known as the Dual Process model of cognition, first proposed by philosopher and psychologist William James. Keith Stanovich and Richard West are credited with the “System One – System Two” labels that have been adopted as the lead characters in Daniel Kahneman’s masterwork, Thinking, Fast and Slow.)
Except for when we force ourselves to override this default and stop and think, we don’t consciously choose which of these two components of cognition to use at any given moment or for any specific task. The task at hand subconsciously challenges one system or the other to help figure things out. (It’s actually not as simple as ‘either/or’. Cognition is almost always a combination of both ‘systems’.) But depending on which one is more active, we make either more instinctive and emotional choices (System One), or more coldly analytical ones (System Two). That obviously has profound consequences, as illustrated by an intriguing study by Albert Costa and colleagues that demonstrates how this shapes the moral choices we make.
Costa posed the classic Trolley Problem to study subjects. In the first version, you're standing beside the tracks as a runaway trolley bears down on five people; if you throw a switch, you can divert it onto a siding where it will kill only the one person standing there. Most people throw the switch. But the second part of the conundrum gets stickier: now you're on a bridge over the tracks as the trolley approaches the five people, and a fat man is standing next to you; if you push him off the bridge he will be killed, but his body will stop the trolley and save the five. It's obviously emotionally tougher to push a real live person to his death than to kill someone by pulling a mechanical switch. Far fewer people push the fat man, though quantitatively, the choice is identical.
Costa posed the Trolley Problem to his subjects, all of whom were bilingual. Half read the question in their native language and half read it in the other language they knew, which they knew well enough to speak and read, but not fluently. (Subjects included native speakers of English, Korean, Spanish, French, and Hebrew.) Of the people who faced the Trolley Problem choice in their native language, 20%, one person in five, said they'd push the fat man to his death. But more of those who got the challenge in their non-native language, 33%, or one in three, said they'd push the fat man off the bridge.
Remember, the choices are numerically identical: kill one to save five. So pourquoi la différence, por qué la diferencia, 왜 차이, מדוע ההבדל? Apparently, speculates Dr. Costa, because the subjects reading a foreign language had to translate it, which required activation of the more analytical System Two, while those reading the challenge in their native tongue could remain in the more instinctive, emotion-based default System One mode. The System One people made the choice based more on their feelings, while those relying more on analytical System Two could more clearly see that the two choices were numerically the same.
This is fascinating, and scary, because this is what’s going on in your brain and mine all the time, not just when we face moral choices but at every moment our brains are interpreting information to make sense of the world. From stimuli as simple as what we see or hear or smell or taste, to things as complex as the choices we face about relationships or personal safety or where we stand on questions of values, the brain is sorting things out and shaping our perceptions of the world, and our choices and judgments and feelings and behaviors, based on processes that are either more emotional and instinctive or more analytical and ‘rational’, and we have little say…we have limited free will…over which of these cognitive systems is in control.
We can stop and think about things carefully, and our decisions will be wiser and healthier if we do. But mostly we don't. As Ambrose Bierce suggested in The Devil's Dictionary, the brain is only the organ with which we think we think.
Think about THAT!
(By the way, if you don't want to worry about being pushed in front of a train to save others, East Asia is the place to be. None of the native or bilingual Korean speakers pushed the fat man off the bridge, a response that Costa et al. report is generally true of East Asians in these sorts of moral tests.)
Researchers hope the technology will further our understanding of the brain, but lawmakers may not be ready for the ethical challenges.
- Researchers at the Yale School of Medicine successfully restored some functions to pig brains that had been dead for hours.
- They hope the technology will advance our understanding of the brain and potentially lead to new treatments for debilitating diseases and disorders.
- The research raises many ethical questions and puts to the test our current understanding of death.
The image of an undead brain coming back to life is the stuff of science fiction. Not just any science fiction, but specifically B-grade sci-fi. What instantly springs to mind are the black-and-white horrors of films like Fiend Without a Face. Bad acting. Plastic monstrosities. Visible strings. And a spinal cord that, for some reason, is also a tentacle?
But like any good science fiction, it's only a matter of time before some manner of it seeps into our reality. This week's Nature published the findings of researchers who managed to restore function to pigs' brains that were clinically dead. At least, what we once thought of as dead.
What's dead may never die, it seems
The researchers did not hail from House Greyjoy — "What is dead may never die" — but came largely from the Yale School of Medicine. They connected 32 pig brains to a system called BrainEx. BrainEx is an artificial perfusion system — that is, a system that takes over the functions normally regulated by the organ. The pigs had been killed four hours earlier at a U.S. Department of Agriculture slaughterhouse, and their brains had been completely removed from their skulls.
BrainEx pumped an experimental solution into each brain that essentially mimicked blood flow. It brought oxygen and nutrients to the tissues, giving brain cells the resources to resume many normal functions. The cells began consuming and metabolizing sugars. The brains' immune systems kicked in. Neuron samples could carry an electrical signal. Some brain cells even responded to drugs.
The researchers have managed to keep some brains alive for up to 36 hours, and do not yet know whether BrainEx could have sustained the brains longer. "It is conceivable we are just preventing the inevitable, and the brain won't be able to recover," said Nenad Sestan, Yale neuroscientist and the lead researcher.
As a control, other brains received either a fake solution or no solution at all. None regained brain activity, and all deteriorated as normal.
The researchers hope the technology can enhance our ability to study the brain and its cellular functions. One of the main avenues of such studies would be brain disorders and diseases. This could point the way to new treatments for the likes of brain injuries, Alzheimer's, Huntington's, and other neurodegenerative conditions.
"This is an extraordinary and very promising breakthrough for neuroscience. It immediately offers a much better model for studying the human brain, which is extraordinarily important, given the vast amount of human suffering from diseases of the mind [and] brain," Nita Farahany, the bioethicist at the Duke University School of Law who wrote the study's commentary, told National Geographic.
An ethical gray matter
Before anyone gets an Island of Dr. Moreau vibe, it's worth noting that the brains' neural activity never came anywhere near consciousness.
The BrainEx solution contained chemicals that prevented neurons from firing. To be extra cautious, the researchers also monitored the brains for any such activity and were prepared to administer an anesthetic should they have seen signs of consciousness.
Even so, the research signals a massive debate to come regarding medical ethics and our definition of death.
Most countries define death, clinically speaking, as the irreversible loss of brain or circulatory function. This definition was already at odds with some folk- and value-centric understandings, but where do we go if it becomes possible to reverse clinical death with artificial perfusion?
"This is wild," Jonathan Moreno, a bioethicist at the University of Pennsylvania, told the New York Times. "If ever there was an issue that merited big public deliberation on the ethics of science and medicine, this is one."
One possible consequence involves organ donations. Some European countries require emergency responders to use a process that preserves organs when they cannot resuscitate a person. They continue to pump blood throughout the body, but use a "thoracic aortic occlusion balloon" to prevent that blood from reaching the brain.
The system is already controversial because it raises concerns about what caused the patient's death. But what happens when brain death becomes readily reversible? Stuart Younger, a bioethicist at Case Western Reserve University, told Nature that if BrainEx were to become widely available, it could shrink the pool of eligible donors.
"There's a potential conflict here between the interests of potential donors — who might not even be donors — and people who are waiting for organs," he said.
It will be a while before such experiments go anywhere near human subjects. A more immediate ethical question relates to how such experiments harm animal subjects.
Ethical review boards evaluate research protocols and can reject any that cause undue pain, suffering, or distress. Since dead animals feel no pain and suffer no trauma, they are typically approved as subjects. But how do such boards make a judgment regarding the suffering of a "cellularly active" brain? The distress of a partially alive brain?
The dilemma is unprecedented.
Setting new boundaries
Another science fiction story that comes to mind when discussing this story is, of course, Frankenstein. As Farahany told National Geographic: "It is definitely has [sic] a good science-fiction element to it, and it is restoring cellular function where we previously thought impossible. But to have Frankenstein, you need some degree of consciousness, some 'there' there. [The researchers] did not recover any form of consciousness in this study, and it is still unclear if we ever could. But we are one step closer to that possibility."
She's right. The researchers undertook their research for the betterment of humanity, and we may one day reap some unimaginable medical benefits from it. The ethical questions, however, remain as unsettling as the stories they remind us of.