Your Evolved Brain Is at the Mercy of Your Reptilian Impulses—and Vice Versa
You have three types of brain inside your brain. And they're all fighting for dominance.
Robert M. Sapolsky holds degrees from Harvard and Rockefeller Universities and is currently a Professor of Biology and Neurology at Stanford University and a Research Associate with the Institute of Primate Research, National Museums of Kenya. His most recent book is Behave: The Biology of Humans at Our Best and Worst.
ROBERT SAPOLSKY: What’s the best way to think about the brain? It’s insanely complicated. Everything connects to everything. A gazillion little subregions.
Amid all that complexity there’s a broadly simplifying way to think about aspects of brain function when it comes to behavior. This was an idea put forth by Paul MacLean, a grand poohbah in the field: conceptually, think of the brain as coming in three functional layers.
The triune brain—and again, this is highly schematic; the brain really doesn’t come in three layers—but one could think of the first, the bottom-most, the most ancient layer as what’s often termed the “reptilian brain.” There, we’ve basically got the same wiring as a lizard, as any ancient creature. It’s been there forever—ancient, ancient wiring at the base of the brain, innermost. And what does that region do? All the regulatory stuff. Your body temperature changes, it senses that and causes you to sweat or shiver. It’s monitoring your blood glucose levels. It’s releasing hormones essential to everyday housekeeping. It’s just keeping regulatory stuff in balance.
Sitting on top of that is what could conceptually be termed the limbic system, the emotional part of the brain. This is very much a mammalian specialty—lizards are not well known for their emotional lives. It’s the part of the brain having to do with fear, arousal, anxiety, sexual longing, all those sorts of things. Very mammalian. You’re off there in the grasslands butting heads with somebody else with antlers, and it’s your limbic system that’s heavily involved in that.
Then sitting on top is layer three, the cortex—the spanking-new, most recently evolved part of the brain. Everybody’s got a little bit of cortex, but it’s not until you get to primates that you’ve got tons, and then apes, and then us. So functionally it’s very easy to think of a simplistic flow of commands. Layer two, the limbic system, can make layer one, the reptilian brain, activate. When is that? Your heart beats faster not because of some regulatory reptilian thing—not because you’ve been caught in something painful—but because of an emotional state. You’re a wildebeest, and there’s some scary, menacing wildebeest threatening you; that emotional state causes your limbic system to activate the reptilian brain, and your heart beats faster. You have a stress response—not because a regulatory change happened in your body, but for an emotional reason.
Then it’s very easy to think of, layered on top, the cortical area commanding your second layer, the limbic system, to have an emotional response to something that isn’t a literal threat. Rather than a threatening beast right in front of you, you see a movie that’s emotionally upsetting. A movie—these are not real characters, they’re pixels—and it’s your cortex that’s turning that abstract cognitive state into an emotional response.
Likewise your cortex, layer three, can influence events down in layer one. You think about mortality—“One of these days my heart is going to stop beating”—and your heart beats faster. You’re not bleeding and hypotensive; nothing in the reptilian brain alone could make sense of this. A purely cognitive state—“Ooh, on the other side of the planet there are people undergoing some traumatic event, and I feel upset about it”—and your reptilian brain responds.
So it’s very easy given that to think of a “three talks to two talks to one” sort of scenario.
Just as readily though, one talks to two talks to three. What would be a case of that?
What’s a case of your reptilian brain talking to your cortex? A remarkable finding: when we’re hungry, we make harsher moral judgments about people’s transgressions. We’re less charitable. We cheat more in economic games. Our cortex handles those assessments of prosociality, antisociality, and altruism—and part of what it’s doing in deciding how it feels about somebody else’s plight is checking whether your stomach is gurgling, whether you’re hungry, whether you’re in pain. That affects very cortical, judgment-type areas.
Layer one, this ancient reptilian brain that should have nothing to do with how your cortex works, has tons to do with it. Or layer two influencing layer three: your limbic system, your emotional state, influencing your abstract cognitive processes. What’s the most obvious example? When we’re under stress, when we’re in an emotionally aroused state, we make stupid, impulsive decisions that seem brilliant at the time. Affective, emotional, limbic layer two influencing how your cortex goes about its abstractions. It’s not that abstract—it’s embedded in the biology of all these layers.
So this interaction between the layers seems like a very mechanical process, potentially even an unconscious one. How do we consciously have, say, our cortex regulate an emotion, a layer-two limbic response? Simple. Think about the most arousing, wonderful thing that ever happened to you umpteen decades ago, and your cortex is evoking a memory that’s got your limbic system humming along in some excited state. Or think cortically—pull out the memory of some traumatic event—and your limbic system responds.
How about the reptilian level? It’s easy to make it get into an agitated state. Sit there and think about something incredibly upsetting. Think about some memory that was truly disturbing. Think about mortality. Think about global warming. And your heart speeds up. Layer three, in a very conscious way, has mobilized layer one. A lot harder is the inverse. You suffer from high blood pressure, and either they can marinate you in antihypertensive drugs for the rest of your life or, as an alternative, there’s the biofeedback approach: sit there and think about the happiest day of your life. Think about being in a beautiful open field. Think about your favorite vacation. And if it’s the right “thinking about,” suddenly your heart slows down; suddenly your blood pressure goes down. The core of biofeedback is figuring out what sort of conscious states you can evoke that will affect your reptilian brain in a direction that’s good for your health. All you do then is learn to get better and better at it: in some stressed, hypertensive circumstance, what conscious act of thinking can I mobilize that will cause changes in, say, the blood flow in my big toe? In a case like that, that is very conscious regulation of the more autonomic, more ancient parts of your brain.
You have three brains in one—the reptilian, the limbic, and the cortex—and they're all fighting for dominance as you go about your life. The so-called lizard brain (the reptilian layer) is the one we tend to think of as instinctual; it handles our most basic drives, like staying alive or not touching fire. The limbic brain governs our emotions, like fear and desire, while the cortex gives us the knowledge that makes us human. The three brains talk to one another and vie for rank in certain situations... it's sort of like Three's Company, except with brain systems. For instance: your cortex reminds you of something sad and triggers your limbic system, or you get cut off in traffic and your lizard brain triggers the cortex and the limbic system. It's a fascinating subject, and Robert Sapolsky waxes poetic about the three distinct "characters" living inside your head.
Robert Sapolsky's most recent book is Behave: The Biology of Humans at Our Best and Worst.
The team caught a glimpse of a process that takes 18,000,000,000,000,000,000,000 years.
- In Italy, a team of scientists is using a highly sophisticated detector to hunt for dark matter.
- The team observed an ultra-rare particle interaction that reveals the half-life of a xenon-124 atom to be 18 sextillion years.
- The half-life of a process is how long it takes for half of the radioactive nuclei present in a sample to decay.
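The half-life definition above translates into simple arithmetic: after each half-life, the surviving fraction of nuclei halves again. A minimal sketch (the function name is mine, not from the study):

```python
def remaining_fraction(t, half_life):
    """Fraction of radioactive nuclei left after time t,
    given exponential decay with the stated half-life."""
    return 0.5 ** (t / half_life)

# Xenon-124's measured half-life: roughly 1.8e22 (18 sextillion) years.
xe124_half_life = 1.8e22

print(remaining_fraction(1.8e22, xe124_half_life))  # 0.5  (one half-life)
print(remaining_fraction(3.6e22, xe124_half_life))  # 0.25 (two half-lives)
```

The takeaway is scale: even after one full half-life — about 1.3 trillion times the current age of the universe — half the xenon-124 in a sample would still be intact, which is why catching a single decay is such a feat.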
Researchers hope the technology will further our understanding of the brain, but lawmakers may not be ready for the ethical challenges.
- Researchers at the Yale School of Medicine successfully restored some functions to pig brains that had been dead for hours.
- They hope the technology will advance our understanding of the brain, potentially developing new treatments for debilitating diseases and disorders.
- The research raises many ethical questions and puts to the test our current understanding of death.
The image of an undead brain coming back to life is the stuff of science fiction. Not just any science fiction—specifically, B-grade sci-fi. What instantly springs to mind are the black-and-white horrors of films like Fiend Without a Face. Bad acting. Plastic monstrosities. Visible strings. And a spinal cord that, for some reason, is also a tentacle?
But like any good science fiction, it's only a matter of time before some manner of it seeps into our reality. This week's Nature published the findings of researchers who managed to restore function to pigs' brains that were clinically dead. At least, what we once thought of as dead.
What's dead may never die, it seems
The researchers did not hail from House Greyjoy — "What is dead may never die" — but came largely from the Yale School of Medicine. They connected 32 pig brains to a system called BrainEx, an artificial perfusion system — that is, a system that takes over the functions normally regulated by the organ. The pigs had been killed four hours earlier at a U.S. Department of Agriculture slaughterhouse, their brains completely removed from their skulls.
BrainEx pumped an experimental solution into the brain that essentially mimicked blood flow. It brought oxygen and nutrients to the tissues, giving brain cells the resources to resume many normal functions. The cells began consuming and metabolizing sugars. The brains' immune systems kicked in. Neuron samples could carry an electrical signal. Some brain cells even responded to drugs.
The researchers managed to keep some brains alive for up to 36 hours, and they do not yet know whether BrainEx could have sustained the brains longer. "It is conceivable we are just preventing the inevitable, and the brain won't be able to recover," said Nenad Sestan, Yale neuroscientist and the lead researcher.
As a control, other brains received either a fake solution or no solution at all. None showed revived brain activity, and all deteriorated as normal.
The researchers hope the technology can enhance our ability to study the brain and its cellular functions. One of the main avenues for such studies would be brain disorders and diseases. This could point the way to developing new treatments for the likes of brain injuries, Alzheimer's, Huntington's, and other neurodegenerative conditions.
"This is an extraordinary and very promising breakthrough for neuroscience. It immediately offers a much better model for studying the human brain, which is extraordinarily important, given the vast amount of human suffering from diseases of the mind [and] brain," Nita Farahany, a bioethicist at the Duke University School of Law who wrote the study's commentary, told National Geographic.
An ethical gray matter
Before anyone gets an Island of Dr. Moreau vibe, it's worth noting that the brains did not approach neural activity anywhere near consciousness.
The BrainEx solution contained chemicals that prevented neurons from firing. To be extra cautious, the researchers also monitored the brains for any such activity and were prepared to administer an anesthetic should they have seen signs of consciousness.
Even so, the research signals a massive debate to come regarding medical ethics and our definition of death.
Most countries define death, clinically speaking, as the irreversible loss of brain or circulatory function. This definition was already at odds with some folk- and value-centric understandings, but where do we go if it becomes possible to reverse clinical death with artificial perfusion?
"This is wild," Jonathan Moreno, a bioethicist at the University of Pennsylvania, told the New York Times. "If ever there was an issue that merited big public deliberation on the ethics of science and medicine, this is one."
One possible consequence involves organ donations. Some European countries require emergency responders to use a process that preserves organs when they cannot resuscitate a person. They continue to pump blood throughout the body, but use a "thoracic aortic occlusion balloon" to prevent that blood from reaching the brain.
The system is already controversial because it raises concerns about what caused the patient's death. But what happens when brain death becomes readily reversible? Stuart Younger, a bioethicist at Case Western Reserve University, told Nature that if BrainEx were to become widely available, it could shrink the pool of eligible donors.
"There's a potential conflict here between the interests of potential donors — who might not even be donors — and people who are waiting for organs," he said.
It will be a while before such experiments go anywhere near human subjects. A more immediate ethical question relates to how such experiments harm animal subjects.
Ethical review boards evaluate research protocols and can reject any that cause undue pain, suffering, or distress. Since dead animals feel no pain and suffer no trauma, they are typically approved as subjects. But how do such boards make a judgment regarding the suffering of a "cellularly active" brain? The distress of a partially alive brain?
The dilemma is unprecedented.
Setting new boundaries
Another science fiction story that comes to mind here is, of course, Frankenstein. As Farahany told National Geographic: "It is definitely has [sic] a good science-fiction element to it, and it is restoring cellular function where we previously thought impossible. But to have Frankenstein, you need some degree of consciousness, some 'there' there. [The researchers] did not recover any form of consciousness in this study, and it is still unclear if we ever could. But we are one step closer to that possibility."
She's right. The researchers undertook their research for the betterment of humanity, and we may one day reap some unimaginable medical benefits from it. The ethical questions, however, remain as unsettling as the stories they remind us of.
One victim can break our hearts. Remember the image of the young Syrian boy discovered dead on a beach in Turkey in 2015? Donations to relief agencies soared after that image went viral. However, we feel less compassion as the number of victims grows. Are we incapable of feeling compassion for large groups of people who suffer a tragedy, such as an earthquake or the recent Sri Lanka Easter bombings? Of course not, but the truth is we aren't as compassionate as we'd like to believe, because of a paradox of large numbers. Why is this?
Compassion is a product of our sociality as primates. In his book, The Expanding Circle: Ethics, Evolution, and Moral Progress, Peter Singer states, "Human beings are social animals. We were social before we were human." Mr. Singer goes on to say, "We can be sure that we restrained our behavior toward our fellows before we were rational human beings. Social life requires some degree of restraint. A social grouping cannot stay together if its members make frequent and unrestrained attacks on one another."
Attacks on ingroups can come from forces of nature as well. In this light, compassion is a form of expressed empathy to demonstrate camaraderie.
Yet even after hundreds of centuries of evolution, when tragedy strikes beyond our community, our compassion wanes as the number of displaced, injured, and dead mounts.
The drop-off in commiseration has been termed the collapse of compassion. The term has also been defined in The Oxford Handbook of Compassion Science: ". . . people tend to feel and act less compassionately for multiple suffering victims than for a single suffering victim."
That the drop-off happens has been widely documented, but at what point this phenomenon happens remains unclear. One paper, written by Paul Slovic and Daniel Västfjäll, sets out a simple formula, ". . . where the emotion or affective feeling is greatest at N =1 but begins to fade at N = 2 and collapses at some higher value of N that becomes simply 'a statistic.'"
The ambiguity of "some higher value" is curious. That value may relate to Dunbar's Number, a concept developed by British anthropologist Robin Dunbar. His research centers on communal groups of primates that evolved to support and care for larger and larger groups as their brains (our brains) expanded in capacity. Dunbar's Number is the number of people with whom we can maintain a stable relationship — approximately 150.
Some back story
Professor Robin Dunbar of the University of Oxford has published considerable research in anthropology and evolutionary psychology. His work is informed by anthropology, sociology, and psychology. Dunbar's Number is a cognitive boundary, one we are likely incapable of breaching. The number rests on two notions: that brain size in primates correlates with the size of the social groups they live among, and that these groups in human primates are relative to communal numbers set deep in our evolutionary past. In simpler terms, 150 is about the maximum number of people with whom we can identify, interact, care about, and work to protect. Dunbar's Number falls along a logarithmic continuum, beginning with the smallest, most emotionally connected group of five, then expanding outward in multiples of roughly three: 5, 15, 50, 150. The numbers in these concentric circles are affected by multiple variables, including the closeness and size of immediate and extended families, along with the greater cognitive capacity of some individuals to maintain stable relationships with larger-than-normal group sizes. In other words, folks with more cerebral candlepower can engage with larger groups; those with lesser cognitive powers, smaller groups.
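The "multiples of roughly three" claim is easy to check against the canonical layer sizes themselves — a quick arithmetic sketch, using the published values from Dunbar's work:

```python
# Canonical Dunbar layers: support clique, sympathy group,
# band, and the ~150-person "Dunbar's Number" community.
layers = [5, 15, 50, 150]

# Ratio of each layer to the one inside it.
ratios = [outer / inner for inner, outer in zip(layers, layers[1:])]
print(ratios)  # [3.0, 3.3333333333333335, 3.0]
```

Each circle is about three times the size of the one it encloses, which is what makes the sequence logarithmic: equal steps outward multiply, rather than add, the number of relationships.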
The number that triggers "compassion collapse" might be different for individuals, but I think it may begin to unravel along the continuum of Dunbar's relatable 150. We can commiserate with 5 to 15 to 150 people because upon those numbers, we can overlay names and faces of people we know: our families, friends and coworkers, the members of our clan. In addition, from an evolutionary perspective, that number is important. We needed to care if bands of our clan were being harmed by raids, disaster, or disease, because our survival depended on the group staying intact. Our brains developed the capacity to care for the entirety of the group but not beyond it. Beyond our ingroup was an outgroup that may have competed with us for food and safety and it served us no practical purpose to feel sad that something awful had happened to them, only to learn the lessons so as to apply them for our own survival, e.g., don't swim with hippos.
Imagine losing 10 family members in a house fire. Now instead, lose 10 neighbors, 10 from a nearby town, 10 from Belgium, 10 from Vietnam 10 years ago. One could almost feel the emotion ebbing as the sentence drew to a close.
There are two other important factors that contribute to the softening of our compassion: proximity and time. While enjoying lunch in Santa Fe, we can discuss the death toll of the French Revolution with no emotional response, but we might be nauseated discussing three children lost in a recent car crash around the corner. Conflict journalists attempt to bridge these geotemporal lapses but have long struggled to ignite compassion in their home audiences for far-flung tragedies. Being a witness to carnage is an immense stressor, but the impact diminishes across the airwaves as the kilometers pile up.
A Dunbar Correlation
Where is the inflection point at which people become statistics? Can we find that number? In what way might that inflection point be influenced by the Dunbar 150?
"Yes, the Dunbar number seems relevant here," said Gad Saad, PhD, an evolutionary behavioral scientist at the John Molson School of Business at Concordia University, Montreal, in an email correspondence. Saad also recommended Singer's work.
I also went to the wellspring. I asked Professor Dunbar by email whether he thought 150 was a reasonable inflection point for moving from compassion into statistics. He graciously responded; his reply is lightly edited for space.
Professor Dunbar's response:
"The short answer is that I have no idea, but what you suggest is perfect sense. . . . One-hundred and fifty is the inflection point between the individuals we can empathize with because we have personal relationships with them and those with whom we don't have personalized relationships. There is, however, also another inflection point at 1,500 (the typical size of tribes in hunter-gatherer societies) which defines the limit set by the number of faces we can put names to. After 1,500, they are all completely anonymous."
I asked Dunbar if he knows of or suspects a neurophysiological aspect to the point where we simply lose the capacity to manage our compassion:
"These limits are underpinned by the size of key bits of the brain (mainly the frontal lobes, but not wholly). There are a number of studies showing this, both across primate species and within humans."
In his literature, Professor Dunbar presents two reasons why his number stands at 150, despite the ubiquity of social networking: the first is time — investing our time in a relationship is limited by the number of hours we have available to us in a given week. The second is our brain capacity measured in primates by our brain volume.
Friendship, kinship and limitations
"We devote around 40 percent of our available social time to our 5 most intimate friends and relations," Dunbar has written, "(the subset of individuals on whom we rely the most) and the remaining 60 percent in progressively decreasing amounts to the other 145."
These brain functions are costly, in terms of time, energy and emotion. Dunbar states, "There is extensive evidence, for example, to suggest that network size has significant effects on health and well-being, including morbidity and mortality, recovery from illness, cognitive function, and even willingness to adopt healthy lifestyles." This suggests that we devote so much energy to our own network that caring about a larger number may be too demanding.
"These differences in functionality may well reflect the role of mentalizing competencies. The optimal group size for a task may depend on the extent to which the group members have to be able to empathize with the beliefs and intentions of other members so as to coordinate closely…" This neocortical-to-community model carries over to compassion for others, whether in or out of our social network. Time constrains all human activity, including time to feel.
As Dunbar writes in The Anatomy of Friendship, "Friendship is the single most important factor influencing our health, well-being, and happiness. Creating and maintaining friendships is, however, extremely costly, in terms of both the time that has to be invested and the cognitive mechanisms that underpin them. Nonetheless, personal social networks exhibit many constancies, notably in their size and their hierarchical structuring." Our mental capacity may be the primary reason we feel less empathy and compassion for larger groups; we simply don't have the cerebral apparatus to manage their plights. "Part of friendship is the act of mentalizing, or mentally envisioning the landscape of another's mind. Cognitively, this process is extraordinarily taxing, and as such, intimate conversations seem to be capped at about four people before they break down and form smaller conversational groups. If the conversation involves speculating about an absent person's mental state (e.g., gossiping), then the cap is three — which is also a number that Shakespeare's plays respect."
We cannot mentalize what is going on in the minds of people in our groups much beyond our inner circle, so it stands to reason we cannot do it for large groups separated from us by geotemporal lapses.
In a paper, C. Daryl Cameron and Keith B. Payne state, "Some researchers have suggested that [compassion collapse] happens because emotions are not triggered by aggregates. We provide evidence for an alternative account. People expect the needs of large groups to be potentially overwhelming, and, as a result, they engage in emotion regulation to prevent themselves from experiencing overwhelming levels of emotion. Because groups are more likely than individuals to elicit emotion regulation, people feel less for groups than for individuals."
This argument seems to imply that we have more control over diminishing compassion than not. To say "people expect the needs of large groups to be potentially overwhelming" suggests that we consciously consider what such caring could entail and back away from it, or that we become aware we are reaching an endpoint of compassion and begin to purposely shift our framing of the incident from personal to statistical. The authors offer an alternative to the notion that emotions are not triggered by aggregates, attempting to show that we regulate our emotional response as the number of victims comes to feel overwhelming. In the real world, however, large death tolls are not brought to us one victim at a time. We are told about a devastating event, then react viscerally.
If we don't regulate our emotions consciously, then the process must be subconscious — and the threshold could have evolved to the point where it is now innate.
Gray matter matters
One of Dunbar's most salient points is that brain capacity influences social networks. In his paper, The Social Brain, he writes: "Path analysis suggests that there is a specific causal relationship in which the volume of a key prefrontal cortex subregion (or subregions) determines an individual's mentalizing skills, and these skills in turn determine the size of his or her social network."
It's not only the size of the brain; mentalizing also recruits different regions for ingroup versus outgroup empathy. The Stanford Center for Compassion and Altruism Research and Education published a study of the brain regions activated when showing empathy for strangers, in which the authors stated, "Interestingly, in brain imaging studies of mentalizing, participants recruit more dorsal portions of the medial prefrontal cortex (dMPFC; BA 8/9) when mentalizing about strangers, whereas they recruit more ventral regions of the medial prefrontal cortex (BA 10), similar to the MPFC activation reported in the current study, when mentalizing about close others with whom participants experience self-other overlap."
It's possible the region of the brain that activates to help an ingroup member evolved for good reason, survival of the group. Other regions may have begun to expand as those smaller tribal groups expanded into larger societies.
There is an eclectic list of reasons why compassion may collapse, irrespective of sheer numbers:
(1) Manner: How the news is presented affects viewer framing. In her book, European Foreign Conflict Reporting: A Comparative Analysis of Public News, Emma Heywood explores how tragedies and war are offered to the viewers, which can elicit greater or lesser compassionate responses. "Techniques, which could raise compassion amongst the viewers, and which prevail on New at Ten, are disregarded, allowing the victims to remain unfamiliar and dissociated from the viewer. This approach does not encourage viewers to engage with the sufferers, rather releases them from any responsibility to participate emotionally. Instead compassion values are sidelined and potential opportunities to dwell on victim coverage are replaced by images of fighting and violence."
(2) Ethnicity. How relatable are the victims? Although it can be argued that people in Western countries would feel a lesser degree of compassion for victims of a bombing in Karachi, that doesn't mean people in countries near Pakistan wouldn't feel compassion for the Karachi victims at a level comparable to what Westerners might feel about a bombing in Toronto. Distance plays as great a role in this dynamic as the sound evolutionary data demonstrating our need to recognize and empathize with people who look like our communal entity. It's not racism; it's tribalism. We simply did not evolve in massive heterogeneous cultures. As evolving humans, we're still working it all out — a survival mechanism developed over millennia that we now struggle with as we fine-tune our trust in others.
In the end
Think of compassion collapse on a grid, with compassion on the Y axis and the number of victims running along the X. As the number of victims increases beyond one, our level of compassion is expected to rise. Setting aside other variables that may raise compassion (proximity, familiarity, etc.), the level continues to rise until, for some reason, it begins to fall precipitously.
Is it because we've become aware of being overwhelmed or because we have reached max-capacity neuron load? Dunbar's Number seems a reasonable place to look for a tipping point.
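The rise-then-collapse curve described above can be made concrete with a toy model. To be clear, this is purely illustrative — the function, the logarithmic growth, and the choice of 150 as the tipping point are my assumptions for the sketch, not measured psychology:

```python
import math

def toy_compassion(n, threshold=150):
    """Hypothetical compassion-vs-victims curve: felt compassion
    grows roughly logarithmically with the number of victims n,
    then collapses once n passes a Dunbar-like threshold.
    Illustrative only; not fitted to any data."""
    if n < 1:
        return 0.0
    growth = math.log(min(n, threshold))
    if n <= threshold:
        return growth
    # Past the threshold, victims fade toward "a statistic."
    return growth * (threshold / n)

# Compassion climbs from 1 victim up to the threshold...
assert toy_compassion(150) > toy_compassion(10) > toy_compassion(1)
# ...but a million victims evoke less feeling than ten do.
assert toy_compassion(1_000_000) < toy_compassion(10)
```

Plotting such a function reproduces the shape the grid metaphor describes: a slow climb, a peak near the relatable-group limit, and a long slide toward statistical numbness.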
Professor Dunbar has referred to the limits of friendship as a "budgeting problem." We simply don't have the time to manage a bigger group of friends. Our compassion for the plight of strangers may drop off at a number equivalent to the number of people with whom we can be friends — a number to which we unconsciously relate. Whether or not we solve this intellectual question, it remains a curious fact that the larger a tragedy is, the more likely human faces are to become faceless numbers.
SMARTER FASTER trademarks owned by The Big Think, Inc. All rights reserved.