Climate change and the power of story
To spur action on climate change, we need a story of mythical proportions.
How do we make sense of the world? How do we understand ourselves, and our place in the cosmos?
Is it through laws and rules imposed by society? Is it through science and its data-driven, numerical relations explicating nature's behavior, and therefore our behavior too?
While both the formalisms of societal law and the numerically based relationships uncovered by science have their obvious power, deep down I believe the way we truly make sense—create meaning about ourselves and our place in the world—is through story. On the grandest cultural scales that emphasis on story points to what we call Myth. And, I would argue, it's exactly our failure to understand the power of Myth that lies at the root of our decades-long inaction on climate change.
When you hear the word "myth" you're likely to think "a false story," like the urban legends about old ladies microwaving their dogs to dry them off after a bath. But taking the long view of human evolution, myths are about more than true or false. They have never really been about facts, and in that way, they've always played an essential role for us.
Every society, in every time and place, has had a system of myths, a constellation of stories that gives its people a basic sense of context. Some myths were intimate and spoke to our internal lives as we make our transitions to adulthood, to parenthood, and to old age. But there was also another class of myth that told the "big stories": creation stories through which people came to understand how the universe was born, how the Earth was formed, and how people were made.
As I argued in my first book The Constant Fire: Beyond the Science vs. Religion Debate, in the modern age, the creation of these big stories falls to science. Instead of gods and spirits, we now have the Big Bang and evolution by natural selection. Science gave us a way to begin a dialogue with the world based on experiment and evidence. But the power of its stories as stories never went away.
There is a reason why the best science documentaries, whether about the Big Bang or the origin of life, include gorgeous images and sweeping orchestral music. The filmmakers could have just presented the data in graphs and tables, the way scientists often talk among themselves. But such a presentation wouldn't capture our imaginations. It wouldn't let our feelings rise to the necessary level of awe and wonder.
By triggering those kinds of feelings, a story about cosmic evolution gets raised above a mere presentation of evidence. The images and music aren't just fluff; they're a translation. They turn the science into a story by allowing data to become a narrative we both feel and feel part of. They let the science become meaningful for us.
A mythic-level narrative
Now let's think about climate change. For years, scientists like myself have been showing people data in the form of graphs of temperature vs. time, maps of sea level rise, and movies of sea-ice changes. You'd think this would be enough, but it hasn't gotten us anywhere near the level of urgency needed to generate quick action.
The failure to generate action on climate change has a number of causes, including well-organized science denial. But I believe its root cause is that we haven't yet translated that data into a "big story": a new mythic-level narrative about human beings and our place in the 4-billion-year history of life and planet co-evolving on Earth.
The only stories that come close are either the "it's not happening" of science denial or the "we suck" narrative that casts us as a plague on the planet. The first is just a form of sticking our heads in the sand. The second carries the aroma of ancient flood myths, in which humanity is punished by the gods for its sinful behavior. But we now know enough about planets and life to see that this morality tale is both unhelpful and deeply wrong.
In my new book Light of the Stars: Alien Worlds and the Fate of the Earth, I try to outline a new kind of myth, one that reintegrates us into the biosphere as one in a long line of its evolutionary experiments. I'll be writing more about this in future posts, but today I wanted only to focus on the fact that stories matter.
It has been said that stories were the first form of human technology. I believe that's true. Recognizing that fact allows us to begin reframing how we think about science and its place in the quest for not just knowledge, but also for meaning. For without meaning, we are doomed to remain in the dark—no matter how many volumes of facts surround us.
A Harvard professor's study discovers the worst year to be alive.
- Harvard professor Michael McCormick argues the worst year to be alive was 536 AD.
- The year was terrible due to cataclysmic eruptions that blocked out the sun and the spread of the plague.
- 536 ushered in the coldest decade in thousands of years and started a century of economic devastation.
The past year has been nothing short of the worst in the lives of many people around the globe: a rampaging pandemic, dangerous political instability, weather catastrophes, and a profound change in lifestyle that most had never experienced or imagined.
But was it the worst year ever?
Nope. Not even close. In the eyes of the historian and archaeologist Michael McCormick, the absolute "worst year to be alive" was 536.
Why was 536 so bad? You could certainly argue that 1918, the last year of World War I, when the Spanish Flu killed up to 100 million people around the world, was a terrible year by all accounts. 1349 could also make this morbid list, as the year the Black Death wiped out half of Europe, with up to 20 million dead from the plague. Most of the years of World War II could probably lay claim to the "worst year" title as well. But 536 was in a category of its own, argues the historian.
It all began with an eruption...
According to McCormick, Professor of Medieval History at Harvard University, 536 was the precursor to one of the worst periods in human history. Early that year, a volcanic eruption took place in Iceland, as established by a study of a Swiss glacier carried out by McCormick and the glaciologist Paul Mayewski of the Climate Change Institute at the University of Maine (UM) in Orono.
The ash spewed out by the volcano likely led to a fog that brought an 18-month stretch of daytime darkness across Europe, the Middle East, and portions of Asia. As the Byzantine historian Procopius wrote, "For the sun gave forth its light without brightness, like the moon, during the whole year." He also recounted that the sun seemed to be perpetually in eclipse.
Cassiodorus, a Roman politician of that era, wrote that the sun had a "bluish" color, the moon had no luster, and the "seasons seem to be all jumbled up together." Even creepier, he wrote: "We marvel to see no shadows of our bodies at noon."
...that led to famine...
The dark days also brought a period of coldness, with summer temperatures falling by 1.5 to 2.5 °C. This started the coldest decade in the past 2,300 years, reports Science, leading to the devastation of crops and worldwide hunger.
...and the fall of an empire
In 541, the bubonic plague added considerably to the world's misery. Spreading from the Roman port of Pelusium in Egypt, the so-called Plague of Justinian caused the deaths of up to one half of the population of the eastern Roman Empire. This, in turn, sped up its eventual collapse, writes McCormick.
Between the environmental cataclysms, with further massive volcanic eruptions in 540 and 547, and the devastation brought on by the plague, Europe was locked in an economic downturn for nearly all of the next century, until around 640, when silver mining gave it a boost.
Was that the worst time in history?
Of course, the absolute worst time in history depends on who you were and where you lived.
Native Americans can easily point to 1520, when smallpox, brought over by the Spanish, killed millions of indigenous people. By 1600, up to 90 percent of the population of the Americas (about 55 million people) was wiped out by various European pathogens.
Like all things, the grisly title of "worst year ever" comes down to historical perspective.
Quantum theory has weird implications. Trying to explain them just makes things weirder.
- The weirdness of quantum theory flies in the face of what we experience in our everyday lives.
- Quantum weirdness quickly split the physics community, each side championed by a giant: Albert Einstein on one side, Niels Bohr on the other.
- As two recent books espousing opposing views show, the debate still rages nearly a century later. Each "resolution" comes with a high price tag.
Albert Einstein and Niels Bohr, two giants of 20th century science, espoused very different worldviews.
To Einstein, the world was ultimately rational. Things had to make sense. They should be quantifiable and expressible through a logical chain of cause-and-effect interactions, from what we experience in our everyday lives all the way down to the depths of reality. To Bohr, we had no right to expect any such order or rationality. Nature, at its deepest level, need not follow any of our expectations of well-behaved determinism. Things could be weird and non-deterministic, so long as they become more like what we expect as we travel from the world of atoms to our world of trees, frogs, and cars. Bohr divided the world into two realms: the familiar classical world and the unfamiliar quantum world, complementary to one another but with very different properties.
The two scientists spent decades arguing about the impact of quantum physics on the nature of reality. Each had a group of physicists as followers, all of them giants in their own right. Einstein's group of quantum weirdness deniers included quantum physics pioneers Max Planck, Louis de Broglie, and Erwin Schrödinger, while Bohr's group had Werner Heisenberg (of uncertainty principle fame), Max Born, Wolfgang Pauli, and Paul Dirac.
Almost a century later, the debate rages on.
Einstein vs. Bohr, Redux
Two books — one authored by Sean Carroll and published last fall and another published very recently and authored by Carlo Rovelli — perfectly illustrate how current leading physicists still cannot come to terms with the nature of quantum reality. The opposing positions still echo, albeit with many modern twists and experimental updates, the original Einstein-Bohr debate.
I summarized the ongoing dispute in my book The Island of Knowledge: Are the equations of quantum physics a computational tool that we use to make sense of the results of experiments (Bohr), or are they supposed to be a realistic representation of quantum reality (Einstein)? In other words, are the equations of quantum theory the way things really are or just a useful map?
Einstein believed that quantum theory, as it stood in the 1930s and 1940s, was an incomplete description of the world of the very small. There had to be an underlying level of reality, still unknown to us, that made sense of all its weirdness. De Broglie and, later, David Bohm, proposed an extension of the quantum theory known as hidden variable theory that tried to fill in the gap. It was a brilliant attempt to appease the urge Einstein and his followers had for an orderly natural world, predictable and reasonable. The price — and every attempt to deal with the problem of figuring out quantum theory has a price tag — was that the entire universe had to participate in determining the behavior of every single electron and all other quantum particles, implying the existence of a strange cosmic order.
Later, in the 1960s, physicist John Bell proved a theorem that put such ideas to the test. A series of remarkable experiments starting in the 1970s and still ongoing have essentially disproved the de Broglie-Bohm hypothesis, at least if we restrict their ideas to what one would call "reasonable," that is, theories that have local interactions and causes. Omnipresence — what physicists call nonlocality — is a hard pill to swallow in physics.
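For the record, the kind of inequality those experiments test can be written in one line. In the CHSH form of Bell's theorem (a standard textbook variant, not one spelled out in the books discussed here), one combines the correlations E measured between two detectors at setting pairs:

$$ S = E(a,b) - E(a,b') + E(a',b) + E(a',b') $$

Any local hidden-variable theory demands $|S| \leq 2$, while quantum mechanics allows values up to $2\sqrt{2} \approx 2.83$. The experiments keep finding the quantum value, which is why locality is the casualty.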
Yet, the quantum phenomenon of superposition insists on keeping things weird. Here's one way to picture quantum superposition. In a kind of psychedelic dream state, imagine that you had a magical walk-in closet filled with identical shirts, the only difference between them being their color. What's magical about this closet? Well, as you enter this closet, you split into identical copies of yourself, each wearing a shirt of a different color. There is a you wearing a blue shirt, another a red, another a white, etc., all happily coexisting. But as soon as you step out of the closet or someone or something opens the door, only one you emerges, wearing a single shirt. Inside the closet, you are in a superposition state with your other selves. But in the "real" world, the one where others see you, only one copy of you exists, wearing a single shirt. The question is whether the inside superposition of the many yous is as real as the one you that emerges outside.
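In standard quantum notation (my gloss, not something either book requires), the closet state would be written as a weighted sum over shirt colors:

$$ |\text{you}\rangle = c_{\text{blue}}\,|\text{blue}\rangle + c_{\text{red}}\,|\text{red}\rangle + c_{\text{white}}\,|\text{white}\rangle + \cdots $$

Opening the door is a measurement: you are found wearing color $i$ with probability $|c_i|^2$ (the Born rule), and from then on the state is simply $|i\rangle$.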
The (modern version of the) Einstein team would say yes. The equations of quantum physics must be taken as the real description of what's going on, and if they predict superposition, so be it. The so-called wave function that describes this superposition is an essential part of physical reality. This point is most dramatically exposed by the many-worlds interpretation of quantum physics, espoused in Carroll's book. For this interpretation, reality is even weirder: the closet has many doors, each leading to a different universe. Once you step out, all of your copies step out together, each into a parallel universe. So, if I happen to see you wearing a blue shirt in this universe, in another, I'll see you wearing a red one. The price tag for the many-worlds interpretation is accepting the existence of an uncountable number of non-communicating parallel universes that enact all possibilities from a superposition state. In a parallel universe, there was no COVID-19 pandemic. Not too comforting.
Bohr's team would say take things as they are. If you stepped out of the closet and someone saw you wearing a shirt of a given color, then that's the one. Period. The weirdness of your many superposing selves remains hidden in the quantum closet. Rovelli defends his version of this worldview, called the relational interpretation, in which events are defined by the interactions between the objects involved, be they observers or not. In this example, the color of your shirt is the property at stake, and when I see it, I am entangled with this specific shirt of yours. It could have been another color, but it wasn't. As Rovelli puts it, "Entanglement… is the manifestation of one object to another, in the course of an interaction, in which the properties of the objects become actual." The price to pay here is giving up the hope of ever truly understanding what goes on in the quantum world. What we measure is what we get, and it is all we can say about it.
What should we believe?
Both Carroll and Rovelli are master expositors of science to the general public, with Rovelli being the more lyrical of the pair.
There is no resolution to be expected, of course. I, for one, am more inclined to Bohr's worldview and thus to Rovelli's, although the interpretation I am most sympathetic to, called QBism, is not properly explained in either book. It is much closer in spirit to Rovelli's, in that relations are essential, but it places the observer on center stage, given that information is what matters in the end. (Although, as Rovelli acknowledges, information is a loaded word.)
We create theories as maps that we human observers use to make sense of reality. But in the excitement of research, we tend to forget the simple fact that theories and models are not nature but our representations of nature. Unless we nurture the hope that our theories capture how the world really is (the Einstein camp) rather than how we humans describe it (the Bohr camp), why should we expect much more than this?
Maybe eyes really are windows into the soul — or at least into the brain, as a new study finds.
- Researchers find a correlation between pupil size and differences in cognitive ability.
- The larger the pupil, the higher the intelligence.
- The explanation for why this happens lies within the brain, but more research is needed.
What can you tell by looking into someone's eyes? You can spot a glint of humor, signs of tiredness, or maybe that they don't like something or someone.
But outside of assessing an emotional state, a person's eyes may also provide clues about their intelligence, suggests new research. A study carried out at the Georgia Institute of Technology shows that pupil size is "closely related" to differences in intelligence between individuals.
The scientists found that larger pupils may be connected to higher intelligence, as demonstrated by tests that gauged reasoning skills, memory, and attention. In fact, the researchers claim that the relationship between intelligence and pupil size is so pronounced that it showed up in their two previous studies as well and can be spotted with the naked eye, without any additional scientific instruments. You should be able to tell who scored the highest or the lowest on the cognitive tests just by looking at them, say the researchers.
The pupil-IQ link
The connection was first noticed across memory tasks, looking at pupil dilations as signs of mental effort. The studies involved more than 500 people aged 18 to 35 from the Atlanta area. The subjects' pupil sizes were measured by eye trackers, which use a camera and a computer to capture light reflecting off the pupil and cornea. As the scientists explained in Scientific American, pupil diameters range from two to eight millimeters. To determine average pupil size, they took measurements of the pupils at rest when the participants were staring at a blank screen for a few minutes.
Another part of the experiment involved having the subjects take a series of cognitive tests that evaluated "fluid intelligence" (the ability to reason when confronted with new problems), "working memory capacity" (how well people could remember information over time), and "attention control" (the ability to keep focusing attention even while being distracted). An example of the latter involves a test that attempts to divert a person's focus from a disappearing letter by showing a flickering asterisk on another part of the screen. If a person pays too much attention to the asterisk, they might miss the letter.
The research concluded that having a larger baseline pupil size was related to greater fluid intelligence, better attention control, and, to a lesser extent, greater working memory capacity. In an email exchange with Big Think, author Jason Tsukahara pointed out, "It is important to consider that what we find is a correlation — which should not be confused with causation."
The researchers also found that pupil size seemed to decrease with age. Older people had more constricted pupils, but when the scientists standardized for age, the pupil-size-to-intelligence connection still remained.
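To make that "standardized for age" step concrete, here is a minimal sketch in Python of how such a check can work. Everything below is invented for illustration: the variable names, effect sizes, and data are synthetic, and the study's actual analysis was more involved.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in data (NOT the study's data). We invent an age effect
# (pupils constrict with age) plus a pupil-ability link, then check whether
# the link survives controlling for age, mirroring the paper's logic.
n = 500
age = rng.uniform(18, 35, n)
pupil_mm = 6.0 - 0.08 * age + rng.normal(0, 0.5, n)   # baseline pupil diameter
score = 0.6 * pupil_mm + rng.normal(0, 0.8, n)        # cognitive test score

def pearson(x, y):
    return np.corrcoef(x, y)[0, 1]

def partial_corr(x, y, z):
    # Correlate the residuals of x and y after regressing each on z,
    # removing any shared linear dependence on z.
    resid_x = x - np.polyval(np.polyfit(z, x, 1), z)
    resid_y = y - np.polyval(np.polyfit(z, y, 1), z)
    return pearson(resid_x, resid_y)

print(f"raw pupil-score correlation: {pearson(pupil_mm, score):+.2f}")
print(f"after controlling for age:   {partial_corr(pupil_mm, score, age):+.2f}")
```

The point of the residual trick is that whatever correlation is left over cannot be explained by a shared dependence on age, which is the sense in which the reported connection "still remained."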
Why are pupils linked to intelligence?
The connection between pupil size and IQ likely resides within the brain. Pupil size has been previously connected to the locus coeruleus, a part of the brain that's responsible for synthesizing the hormone and neurotransmitter norepinephrine (noradrenaline), which mobilizes the brain and body for action. Activity in the locus coeruleus affects our perception, attention, memory, and learning processes.
As the authors explain, this region of the brain "also helps maintain a healthy organization of brain activity so that distant brain regions can work together to accomplish challenging tasks and goals." Because it is so important, loss of function in the locus coeruleus has been linked to conditions like Alzheimer's disease, Parkinson's, clinical depression, and attention deficit hyperactivity disorder (ADHD).
The researchers hypothesize that people who have larger pupils while in a restful state, like staring at a blank computer screen, have "greater regulation of activity by the locus coeruleus." This leads to better cognitive performance. More research is necessary, however, to truly understand why having larger pupils is related to higher intelligence.
In an email to Big Think, Tsukahara shared, "If I had to speculate, I would say that it is people with greater fluid intelligence that develop larger pupils, but again at this point we only have correlational data."
Do other scientists believe this?
As the scientists point out at the beginning of their paper, their conclusions are controversial and, so far, other researchers haven't been able to replicate their results. The team addresses this criticism by explaining that those earlier studies had methodological issues and examined only memory capacity, not the fluid intelligence that their own work measured.