Who's More Likely to Be Right: A Century of Economics or a Billion Years of Evolution?
Advocates of nuclear power say the rational choice is to keep licensing those reactors, despite the ongoing crisis in Japan. But a healthy fear of nukes might just be evolutionarily motivated.
Advocates of nuclear power have been busy this week, casting choices about reactors as a battle of head versus heart: Emotionally, we're scared and impressed by the ongoing nuclear crisis in Japan, they say, but the rational choice for the future is to keep licensing those reactors.
As I mentioned the other day, behavioral economists, in describing human biases and blind spots, aren't rebelling against this style of argument. In fact, they often implicitly endorse it. The point of Dan Ariely's Predictably Irrational, like that of Nudge by Cass Sunstein and Richard Thaler, is that we make mistakes and fail to do the right thing—the rational thing—because of the innate biases of the mind.
The Japanese crisis has got me wondering about this rhetoric. Here's why: When we say that people make "mistakes" and "fail to do the right thing" in their choices, we're saying, in essence, that they're failing to think like economists. And economists have been honing their disciplined and logical methods for more than a century, so they deserve respect. However, the sources of our "biases" and "errors" are strategies for dealing with the world that evolved over more than a billion years. That too deserves respect.
Take a form of illogic in choice that is easy to demonstrate in people: Imagine that you have a stark choice tonight about dinner. You can eat a fabulous and nutritionally virtuous meal in a loud, rather frightening dive of a restaurant. Or you can have a merely OK dinner in a much less stressful place down the road. To many, it feels like a toss-up.
However, if there is a third option that's even less appealing, with so-so ambiance and really lousy food, people's decisions fall into a different pattern. With a worse alternative available, the merely OK option looks better, and most people will choose it. This is not rational, because the objective value to you of the first two choices has not changed. But absolute value isn't part of our normal decision algorithm. Instead, we rate each option by its value relative to the others.
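This pattern, often called the decoy or asymmetric-dominance effect, can be sketched in a few lines of code. The attribute scores and the comparison rule below are illustrative assumptions of mine, not a model from the research; the point is only that a chooser who counts relative wins, rather than weighing absolute value, flips its preference when a dominated third option appears.

```python
def relative_score(option, others):
    """Count how many attribute-by-attribute comparisons this option wins."""
    return sum(
        sum(option[attr] > other[attr] for attr in option)
        for other in others
    )

def choose(options):
    """Pick the option with the most pairwise attribute wins."""
    return max(
        options,
        key=lambda name: relative_score(
            options[name], [v for k, v in options.items() if k != name]
        ),
    )

# Two options: each wins exactly one comparison, so it feels like a toss-up.
great_food_bad_vibe = {"food": 9, "ambiance": 2}
ok_food_ok_vibe = {"food": 5, "ambiance": 6}

# Add a decoy that the merely OK option beats on both attributes.
decoy = {"food": 3, "ambiance": 5}

menu = {"A": great_food_bad_vibe, "B": ok_food_ok_vibe, "C": decoy}
print(choose(menu))  # B -- the decoy makes the middling option look best
```

Nothing about options A and B changed when C arrived, yet B now racks up the most relative wins, which is exactly the "irrational" shift the experiments find.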
Humans share this decision-making bias with insects, birds and monkeys, hinting that it arose in a common ancestor and served well enough to survive eons of natural selection. Last summer, in fact, Tanya Latty and Madeleine Beekman showed that even slime molds have this tendency to see value in comparative, not absolute, terms. (In their experiment, the richest food was bathed in bright light, which is dangerous to the species, Physarum polycephalum, while a less concentrated dollop of oatmeal, in a dark, mold-friendly place, was Option 2. With only two choices, the slime molds didn't show a strong preference for either. But when a third, nutritionally poor choice was added, they greatly preferred Option 2.)
The ever-handy Timetree website tells me that the ancestors of Physarum and humans diverged 1.2 billion years ago. So if you argue that the "relativity heuristic" causes people to make errors, you're arguing that the last two centuries of economic theory are a better guide than the last billion or so years of evolution. And I think that argument is worth hearing. But I don't see why I should assume it's true. Isn't it possible that sometimes our evolved heuristics are smarter than economists?
The other day I mentioned a frequently cited example of successful nudging, based on a post-rationalist understanding of the mind: Workers are more likely to participate in a 401(k) program if they're automatically enrolled and have to opt out than if they have to opt in. So switching from opt-in to opt-out 401(k) plans seems like a worthy and sensible policy, and Congress changed the law to encourage this in 2006. Let's help those irrational workers overcome their natural tendency to error, right?
Since 2006, though, stocks have plummeted and many companies have stopped matching employee contributions to these retirement plans. As David K. Randall explains here, in recent years many workers who went with their irrational biases may have ended up better off.
So I wonder, now, if people with an irrational fear of nukes—a fear that can't be assuaged by the confident predictions of experts—might not be making a better choice than people who deliberately, maturely and rationally reason their way to accepting the absolute value of nuclear power. The rational argument for nuclear power is that it is hands-down the least destructive way we can generate energy in the amounts we demand. The irrational fears about it are based on the fact that something just went badly wrong with it; and that accidents, while rare, do a lot of damage; and that people, we know, tend to lie, cover up and slip up in their imperfect real lives. I think it's worth considering whether those fears might not be a better guide.
Post-rationalist researchers are sometimes accused of devaluing reason, but the ones I have read do the opposite: They (ahem) irrationally overestimate reason's powers. They think it can correct the mind's tendency to "mistakes." But reasoning doesn't always lead us right.
The problem isn't that logic is flawed. It's that we easily attribute the soundness of logic to the assumptions on which that logic rests. And that's a mistake.
We can reason our way out of that error with difficulty. Or we can listen to the "biases" evolution has bequeathed us. Biases that tell us to be very impressed by recent, rare, frightening events, whatever the credentialed experts say. Both paths may lead to the same end. But the latter path is faster and more convincing.
Maybe the goal of a post-rational model of the mind should not be to "nudge" ourselves into being more rational, but rather to find a better balance between the reasoning and unreasoning parts of the mind. If reason is good for correcting the errors of our innate heuristics, it may also be true that those innate biases may be good for correcting the errors of reason.
Latty, T., & Beekman, M. (2010). Irrational decision-making in an amoeboid organism: Transitivity and context-dependent preferences. Proceedings of the Royal Society B: Biological Sciences, 278(1703), 307–312. DOI: 10.1098/rspb.2010.1045
While legalization has benefits, a new study suggests it may have one big drawback.
- A new study finds that rates of marijuana use and addiction have gone up in states that have recently legalized the drug.
- The problem was most severe for those over the age of 26, with cases of addiction rising by a third.
- The findings complicate the debate around legalization.
Cannabis use disorder: Is that when you get so high you can't figure out how to smoke anymore?
Cannabis use disorder, also known as CUD or marijuana addiction, is a psychological disorder described in the DSM-5 as "the continued use of cannabis despite clinically significant impairment." This includes people who are unable to cut down on their usage despite wanting to, those who keep using even though it severely impairs their ability to function, and those who put themselves in danger to secure access to the drug.
While an understanding that marijuana can be addictive has existed for some time, and the image of the pothead who smokes so much they can hardly function is prevalent in our society, the effects of legalization on addiction rates have somehow gone understudied until now. Importantly, previous studies had failed to consider usage rates amongst populations over the age of 25.
The new study, published in JAMA Psychiatry, focused on self-reported data on monthly drug use in four states where marijuana is now legal: Colorado, Washington, Alaska, and Oregon. The researchers gathered responses from both before and after the drug was legalized in each state and compared them to responses from states that have not yet legalized.
The data gave insights into the drug use habits of the respondents: whether they had smoked at all in the last month, how frequently they used the drug, and whether they had ever had issues with how much they were using. The researchers ultimately considered the responses of 505,796 individuals.
The increase in cannabis usage they found was considerable. The number of respondents over the age of 26 who claimed to have used the drug in the last month went up by 23% compared with their counterparts in states that have yet to legalize. Abuse of the drug by this group rose by 37%.
Teen usage rose by 25%, and addiction rates rose as well, though that increase was small, and the authors have suggested it may be due to an unknown factor. Rates of usage and abuse for respondents between the ages of 18 and 25 did not increase at all.
When the researchers broke the results down by demographics, the primary finding held: adults over the age of 26 use marijuana more often when it is legalized, and more of them are starting to use it too much.
The grain of salt
As in any study where findings are self-reported, the exact numbers here should be taken with a grain of salt; they could be somewhat higher or lower. Because the study relies on people self-reporting their use of a drug that is still illegal in many places, it is quite possible that part of the apparent spike in addiction rates reflects more honest reporting: people who live where pot is still illegal may be less likely to admit to smoking it every day.
And it should be repeated a thousand times over that correlation and causation are not the same thing. There could be some unknown factor causing these increases in each case.
Despite these qualifications, the study is still useful in giving us a general sense of what may happen in states that have yet to legalize.
What does this mean for society and drug users?
While claims of "reefer madness" are greatly exaggerated, marijuana has several well established and thoroughly studied side effects. While occasional use isn't terribly harmful, addiction can be. Lead author Magdalena Cerdá of New York University explains in the study that heavy marijuana use is associated with "psychological and physical health concerns, lower educational attainment, decline in social class, unemployment, and motor vehicle crashes."
A substantial increase in the number of people who are addicted to the stuff will incur costs to society down the line.
Of course, a 37% increase in problematic usage means that the percentage of surveyed adults smoking too much went from 0.9% to 1.23%. That makes it far less prevalent than issues with alcohol, which affected around 6% of all Americans in 2018.
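The arithmetic behind those figures is easy to verify: a 37% relative rise on a 0.9% baseline lands at about 1.23% of respondents, still well below the roughly 6% affected by alcohol problems.

```python
# Checking the percentages quoted above.
baseline = 0.9           # percent of surveyed adults with problematic use
relative_increase = 0.37  # the reported 37% rise

after = baseline * (1 + relative_increase)
print(round(after, 2))   # 1.23 -- small in absolute terms, large relatively
```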
Recently, Big Think's Philip Perry wrote a piece about how legalization could improve the health of millions by allowing the government to regulate the purity of commercially sold marijuana. This remains true. However, it must be weighed against the findings of this study, which suggests that at least some of these health gains will be wiped out by increased addiction rates.
What does this mean for legalization efforts?
The legalization steamroller will undoubtedly keep rolling along. While health concerns are one factor in the debate over marijuana, it is only one of many. In Illinois, where I live, weed will become legal on January 1st of 2020. The legalization campaign and legislation were more concerned with issues of social justice, the failures of prohibition, and finding a new source of tax revenue (since we're half broke) than with matters of potential addiction.
As Vox reports, the authors of the study aren't suggesting that legalization shouldn't take place; that is another, broader debate. They merely wish to present the fact that legalization has a particular side effect that we should be aware of.
While this study is unlikely to change anybody's stance on whether weed should be legalized, it does show us a critical element to consider when discussing drug policy. No drug is perfectly safe, and we have reason to believe that legalizing marijuana will mean more people have a hard time with it. Let's hope that legalization proponents keep that in mind as they rack up their victories.
For some reason, the bodies of deceased monks stay "fresh" for a long time.
It's definitely happening, and it's definitely weird. After the apparent death of some monks, their bodies remain in a meditating position without decaying for an extraordinary length of time, often as long as two or three weeks.
Tibetan Buddhists, who view death as a process rather than an event, might assert that the spirit has not yet finished with the physical body. For them, thukdam begins with a "clear light" meditation that allows the mind to gradually unspool, eventually dissipating into a state of universal consciousness no longer attached to the body. Only at that time is the body free to die.
Whether you believe this or not, it is a fascinating phenomenon: the fact remains that their bodies don't decompose like other bodies. (There have been a handful of other unexplained instances of delayed decomposition elsewhere in the world.)
The scientific inquiry into just what is going on with thukdam has attracted the attention and support of the Dalai Lama, the highest monk in Tibetan Buddhism. He has reportedly been looking for scientists to solve the riddle for about 20 years. He is a supporter of science, writing, "Buddhism and science are not conflicting perspectives on the world, but rather differing approaches to the same end: seeking the truth."
The most serious study of the phenomenon so far is being undertaken by The Thukdam Project of the University of Wisconsin-Madison's Center for Healthy Minds. Neuroscientist Richard Davidson is one of the founders of the center and has published hundreds of articles about mindfulness.
Davidson first encountered thukdam after his Tibetan monk friend Geshe Lhundub Sopa died, officially on August 28, 2014. Davidson last saw him five days later: "There was absolutely no change. It was really quite remarkable."
The science so far
The Thukdam Project published its first annual report this winter. It discussed a recent study in which electroencephalograms failed to detect any brain activity in 13 monks who had practiced thukdam and had been dead for at least 26 hours. Davidson was senior author of the study.
While some might be inclined to say, well, that's that, Davidson sees the research as just a first step on a longer road. Philosopher Evan Thompson, who is not involved in The Thukdam Project, tells Tricycle, "If the thinking was that thukdam is something we can measure in the brain, this study suggests that's not the right place to look."
In any event, the question remains: why are these apparently deceased monks so slow to begin decomposition? While environmental factors can slow or speed up the process a bit, usually decomposition begins about four minutes after death and becomes quite obvious over the course of the next day or so.
As the Dalai Lama said:
"What science finds to be nonexistent we should all accept as nonexistent, but what science merely does not find is a completely different matter. An example is consciousness itself. Although sentient beings, including humans, have experienced consciousness for centuries, we still do not know what consciousness actually is: its complete nature and how it functions."
As thukdam researchers continue to seek a signal of post-mortem consciousness of some sort, it's fair to ask what — and where — consciousness is in the first place. It is a question with which Big Think readers are familiar. We write about new theories all the time: consciousness happens on a quantum level; consciousness is everywhere.
So far, though, says Tibetan medical doctor Tawni Tidwell, also a Thukdam Project member, searches beyond the brain for signs of consciousness have gone nowhere. She is encouraged, however, that a number of Tibetan monks have come to the U.S. for medical knowledge that they can take home. When they arrive back in Tibet, she says, "It's not the Westerners who are doing the measuring and poking and prodding. It's the monastics who trained at Emory."
When Olympic athletes perform dazzling feats of athletic prowess, they are using the same principles of physics that gave birth to stars and planets.
- Much of the beauty of gymnastics comes from the physics principle called the conservation of angular momentum.
- Conservation of angular momentum tells us that when a spinning object changes how its matter is distributed, it changes its rate of spin.
- Conservation of angular momentum links the formation of planets in star-forming clouds to the beauty of a gymnast's spinning dismount from the uneven bars.
It is that time again when we watch in awe as Olympic athletes perform dazzling feats of athletic prowess. But as we stare in rapt attention at the speed, grace, and strength they exhibit, it is also a good time to pay attention to how they embody, literally, fundamental principles that shape the entire universe. Yes, I'm talking about physics. On our screens, these athletes are giving us lessons in the principles that giants like Isaac Newton struggled mightily to articulate.
Naturally, there are many Olympic events from which we could learn some basic principles of physics. Swimming shows us hydrodynamic drag. Boxing teaches us about force and impulse. (Ouch!) But today, we will focus on gymnastics and the cosmic importance of the conservation of angular momentum.
The conservation of angular momentum
Much of the beauty of gymnastics comes from the spins and flips athletes perform as they launch themselves into the air from the vault or uneven bars. These are all examples of rotations — and so much of the structure and history of the universe, from planets to galaxies, comes down to the physics of rotating objects. And so much of the physics of rotating objects comes down to the conservation of angular momentum.
Let's start with the conservation of regular or "linear" momentum. Momentum is the product of mass and velocity. Way back in the age of Galileo and Newton, physicists came to understand that in the interactions between bodies, the sum of their momenta had to be conserved (which really means "does not change"). This is a familiar idea to anyone who has played billiards: when a moving pool ball strikes a stationary one head-on, the first ball stops while the second scoots away. The total momentum of the system (the mass times velocity of both balls taken together) is conserved: the originally moving ball ends up at rest, and the originally stationary ball carries off all of the system's momentum.
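The billiards case can be written out with the standard one-dimensional elastic-collision formulas. The ball mass and speed below are illustrative numbers of my choosing; the physics is the textbook result that equal masses in a head-on elastic collision simply swap velocities, leaving total momentum unchanged.

```python
def elastic_collision_1d(m1, v1, m2, v2):
    """Final velocities for a 1-D elastic collision (standard formulas)."""
    u1 = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    u2 = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return u1, u2

m = 0.17            # kg, roughly a pool ball (assumed)
v1, v2 = 2.0, 0.0   # m/s: cue ball moving, object ball at rest

u1, u2 = elastic_collision_1d(m, v1, m, v2)
print(u1, u2)  # 0.0 2.0 -- the balls swap speeds
print(m * v1 + m * v2, m * u1 + m * u2)  # same total momentum before and after
```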
Rotating objects also obey a conservation law, but now it is not just the mass of an object that matters. The distribution of mass — that is, where the mass is located relative to the center of the rotation — is also a factor. Conservation of angular momentum tells us that if a spinning object is not subject to any external twisting forces (torques), then any change in how its matter is distributed must lead to a change in its rate of spin. In the analogy with the conservation of linear momentum, the distribution of mass (what physicists call the moment of inertia) plays the role of mass, and the rate of spin plays the role of velocity.
There are many places in cosmic physics where this conservation of angular momentum is key. My favorite example is the formation of stars. Every star begins its life as a giant cloud of slowly spinning interstellar gas. The clouds are usually supported against their own gravitational weight by gas pressure, but sometimes a small nudge from, say, a passing supernova blast wave will force the cloud to begin gravitational collapse. As the cloud begins to shrink, the conservation of angular momentum forces the spin rate of material in the cloud to speed up. As material is falling inward, it also rotates around the cloud's center at ever higher rates. Eventually, some of that gas is going so fast that a balance between the gravity of the newly forming star and what is called centrifugal force is achieved. That stuff then stops moving inward and goes into orbit around the young star, forming a disk, some material of which eventually becomes planets. So, the conservation of angular momentum is, literally, why we have planets in the universe!
Gymnastics, a cosmic sport
How does this appear in gymnastics? When athletes hurl themselves into the air to perform a flip, the only force acting on them is gravity. But since gravity effectively acts at their center of mass, it exerts no torque and so cannot change the athlete's angular momentum. The gymnasts, however, can change their rate of spin themselves by using the conservation of angular momentum.
By changing how their mass is arranged, gymnasts can change how fast they spin. You can see this in the dismount phase of the uneven bar competitions. When a gymnast comes off the bars and performs a flip by tucking their legs inward, they can quickly increase their rotation rate in midair. The sudden dramatic increase in the speed of their flip is what makes us gasp in astonishment. It is both scary and a beautiful testament to the athletes' ability to intuitively control the physics of their bodies. And it is also the exact same physics that controls the birth of planets.
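The tuck can be sketched with the conservation law L = I * omega: since L stays fixed, shrinking the moment of inertia I must raise the spin rate omega. The moment-of-inertia values below are illustrative assumptions, not measured gymnast figures.

```python
def new_spin_rate(I_before, omega_before, I_after):
    """omega_after from I_before * omega_before = I_after * omega_after."""
    return I_before * omega_before / I_after

I_layout = 12.0      # kg*m^2, body fully extended (assumed value)
I_tuck = 4.0         # kg*m^2, legs tucked in tight (assumed value)
omega_layout = 1.0   # revolutions per second leaving the bar

omega_tuck = new_spin_rate(I_layout, omega_layout, I_tuck)
print(omega_tuck)  # 3.0 -- cutting the moment of inertia to a third triples the spin
```

The same function, fed the much larger numbers of a collapsing interstellar cloud, describes the spin-up that ends with gas settling into a planet-forming disk.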
"As above so below," goes the old saying. You should keep that in mind as you watch the glory that is the Olympics. That is because it is not just athletes that have this intuitive understanding of physics. We all have it, and we use it every day, from walking down the stairs to swinging a hammer. So, it is no exaggeration to claim that the first place we came to understand the deepest principles of physics was not in contemplating the heavens but moving through the world in our own earthbound flesh.