How we make moral decisions
Asking "what if everyone did that?" is a common strategy for judging whether an action is right or wrong.
Imagine that one day you hop the turnstile and ride the train without paying the fare. That single act probably won't have a big impact on the financial well-being of your local transportation system. But now ask yourself, "What if everyone did that?" The outcome is much different: the system would likely go bankrupt, and no one would be able to ride the train anymore.
Moral philosophers have long believed this type of reasoning, known as universalization, is the best way to make moral decisions. But do ordinary people spontaneously use this kind of moral judgment in their everyday lives?
In a study of several hundred people, MIT and Harvard University researchers have confirmed that people do use this strategy in particular situations called "threshold problems." These are social dilemmas in which harm can occur if everyone, or a large number of people, performs a certain action. The authors devised a mathematical model that quantitatively predicts the judgments people are likely to make. They also showed, for the first time, that children as young as 4 years old can use this type of reasoning to judge right and wrong.
"This mechanism seems to be a way that we spontaneously can figure out what are the kinds of actions that I can do that are sustainable in my community," says Sydney Levine, a postdoc at MIT and Harvard and the lead author of the study.
Other authors of the study are Max Kleiman-Weiner, a postdoc at MIT and Harvard; Laura Schulz, an MIT professor of cognitive science; Joshua Tenenbaum, a professor of computational cognitive science at MIT and a member of MIT's Center for Brains, Minds, and Machines and Computer Science and Artificial Intelligence Laboratory (CSAIL); and Fiery Cushman, an assistant professor of psychology at Harvard. The paper is appearing this week in the Proceedings of the National Academy of Sciences.
The concept of universalization has been included in philosophical theories since at least the 1700s. Universalization is one of several strategies that philosophers believe people use to make moral judgments, along with outcome-based reasoning and rule-based reasoning. However, there have been few psychological studies of universalization, and many questions remain regarding how often this strategy is used, and under what circumstances.
To explore those questions, the MIT/Harvard team asked participants in their study to evaluate the morality of actions taken in situations where harm could occur if too many people perform the action. In one hypothetical scenario, John, a fisherman, is trying to decide whether to start using a new, more efficient fishing hook that will allow him to catch more fish. However, if every fisherman in his village decided to use the new hook, there would soon be no fish left in the lake.
The researchers found that many subjects did use universalization to evaluate John's actions, and that their judgments depended on a variety of factors, including the number of people who were interested in using the new hook and the number of people using it that would trigger a harmful outcome.
To tease out the impact of those factors, the researchers created several versions of the scenario. In one, no one else in the village was interested in using the new hook, and in that scenario, most participants deemed it acceptable for John to use it. However, if others in the village were interested but chose not to use it, then John's decision to use it was judged to be morally wrong.
The researchers also found that they could use their data to build a mathematical model that explains how people take these factors into account: the number of people who want to perform the action and the number of actors at which harm would occur. The model accurately predicts how people's judgments change when these factors change.
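To make the logic concrete, here is a minimal sketch in Python of how a threshold-based universalization judgment might be computed. It illustrates the general idea only; it is not the authors' published model, and the function name and the specific numbers are assumptions chosen to fit the fishing-hook scenario.

```python
# Minimal sketch of threshold-based universalization (illustrative only,
# not the authors' published model): imagine that everyone who wants to
# perform the action does so, and ask whether that crosses the harm threshold.

def universalize(num_interested: int, harm_threshold: int) -> str:
    """Judge an action by imagining full uptake among interested parties.

    num_interested: how many people want to perform the action
    harm_threshold: number of actors at which collective harm occurs
    (both values below are hypothetical, for illustration)
    """
    if num_interested < harm_threshold:
        return "permissible"  # even universal uptake stays below the tipping point
    return "wrong"            # if everyone did it, harm would result

# John's new fishing hook: acceptable when he alone wants it, judged
# wrong when many villagers want it and the lake would be overfished.
print(universalize(num_interested=1, harm_threshold=10))   # permissible
print(universalize(num_interested=25, harm_threshold=10))  # wrong
```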
In their last set of studies, the researchers created scenarios that they used to test judgments made by children between the ages of 4 and 11. One story featured a child who wanted to take a rock from a path in a park for his rock collection. Children were asked to judge if that was OK, under two different circumstances: In one, only one child wanted a rock, and in the other, many other children also wanted to take rocks for their collections.
The researchers found that most of the children deemed it wrong to take a rock if everyone wanted to, but permissible if there was only one child who wanted to do it. However, the children were not able to specifically explain why they had made those judgments.
"What's interesting about this is we discovered that if you set up this carefully controlled contrast, the kids seem to be using this computation, even though they can't articulate it," Levine says. "They can't introspect on their cognition and know what they're doing and why, but they seem to be deploying the mechanism anyway."
In future studies, the researchers hope to explore how and when the ability to use this type of reasoning develops in children.
In the real world, there are many instances where universalization could be a good strategy for making decisions, but it's not necessary because rules are already in place governing those situations.
"There are a lot of collective action problems in our world that can be solved with universalization, but they're already solved with governmental regulation," Levine says. "We don't rely on people to have to do that kind of reasoning, we just make it illegal to ride the bus without paying."
However, universalization can still be useful in situations that arise suddenly, before any government regulations or guidelines have been put in place. For example, at the beginning of the Covid-19 pandemic, before many local governments began requiring masks in public places, people contemplating wearing masks might have asked themselves what would happen if everyone decided not to wear one.
The researchers now hope to explore the reasons why people sometimes don't seem to use universalization in cases where it could be applicable, such as combating climate change. One possible explanation is that people don't have enough information about the potential harm that can result from certain actions, Levine says.
The research was funded by the John Templeton Foundation, the Templeton World Charity Foundation, and the Center for Brains, Minds, and Machines.
A Harvard professor's study identifies the worst year to be alive.
- Harvard professor Michael McCormick argues the worst year to be alive was 536 AD.
- The year was terrible due to cataclysmic eruptions that blocked out the sun and the spread of the plague.
- 536 ushered in the coldest decade in thousands of years and started a century of economic devastation.
The past year has been among the worst in the lives of many people around the globe: a rampaging pandemic, dangerous political instability, weather catastrophes, and a profound change in lifestyle that most had never experienced or imagined.
But was it the worst year ever?
Nope. Not even close. In the eyes of the historian and archaeologist Michael McCormick, the absolute "worst year to be alive" was 536.
Why was 536 so bad? You could certainly argue that 1918, the final year of World War I, when the Spanish Flu killed up to 100 million people around the world, was a terrible year by all accounts. 1349 could also claim a place on this morbid list as the year the Black Death wiped out half of Europe, with up to 20 million dead from the plague. Most of the years of World War II could probably lay claim to the "worst year" title as well. But 536 was in a category of its own, argues the historian.
It all began with an eruption...
According to McCormick, Professor of Medieval History at Harvard University, 536 was the precursor to one of the worst periods in human history. Early that year, a volcanic eruption took place in Iceland, as established by a study of a Swiss glacier carried out by McCormick and the glaciologist Paul Mayewski of the Climate Change Institute at the University of Maine (UM) in Orono.
The ash spewed out by the volcano likely led to a fog that brought an 18-month stretch of daytime darkness across Europe, the Middle East, and portions of Asia. As the Byzantine historian Procopius wrote, "For the sun gave forth its light without brightness, like the moon, during the whole year." He also recounted that the sun seemed to be perpetually in eclipse.
Cassiodorus, a Roman politician of the time, wrote that the sun had a "bluish" color, the moon had no luster, and "seasons seem to be all jumbled up together." Creepier still, he observed: "We marvel to see no shadows of our bodies at noon."
...that led to famine...
The dark days also brought a period of cold, with summer temperatures falling by 1.5 to 2.5 degrees Celsius. This started the coldest decade in the past 2,300 years, reports Science, leading to the devastation of crops and worldwide hunger.
...and the fall of an empire
In 541, the bubonic plague added considerably to the world's misery. Spreading from the Roman port of Pelusium in Egypt, the so-called Plague of Justinian caused the deaths of up to one half of the population of the eastern Roman Empire. This, in turn, sped up its eventual collapse, writes McCormick.
Between the environmental cataclysms, with massive volcanic eruptions also in 540 and 547, and the devastation brought on by the plague, Europe endured an economic downturn for nearly all of the next century, until 640, when silver mining gave it a boost.
Was that the worst time in history?
Of course, the absolute worst time in history depends on who you were and where you lived.
Native Americans can easily point to 1520, when smallpox, brought over by the Spanish, killed millions of indigenous people. By 1600, up to 90 percent of the population of the Americas (about 55 million people) was wiped out by various European pathogens.
Like all things, the grisly title of "worst year ever" comes down to historical perspective.
A machine learning system lets visitors at a Kandinsky exhibition hear the artwork.
Have you ever heard colors?
As part of a new exhibition, the worlds of culture and technology collide, bringing sound to the colors of abstract art pioneer Wassily Kandinsky.
Kandinsky had synesthesia, a condition in which looking at colors and shapes causes some people to hear associated sounds. With the help of machine learning, virtual visitors to the Sounds Like Kandinsky exhibition, a partnership between the Centre Pompidou in Paris and Google Arts & Culture, can have an aural experience of his art.
An eye for music
Kandinsky's synesthesia is thought to have heavily influenced his painting. According to the exhibition notes, seeing yellow summoned up trumpets, evoking emotions like cheekiness; reds produced violins, portraying restlessness; and blues he associated with organs, representing heavenliness.
Virtual visitors are invited to take part in an experiment called Play a Kandinsky, which allows them to see and hear the world through the artist's eyes.
Kandinsky's synesthesia is thought to have heavily influenced his 1925 painting "Yellow, Red, Blue." (Image: Guillaume Piolle/Wikimedia Commons)
In 1925, the artist's masterpiece, "Yellow, Red, Blue", broke new ground in the world of abstract art, guiding the viewer from left to right with shifting shapes and shades. Almost a century after it was painted, Google's interactive tool lets visitors click different parts of the artwork to journey through the artist's description of the colors, associated sounds and moods that inspired the work.
But Google's new toy is not the only tool developed to enhance the artistic experience.
Artist Neil Harbisson has developed an artificial way to emulate Kandinsky by turning colors into sounds. He has a rare form of color blindness and sees the world in greyscale. But a smart antenna attached to his head translates dominant colors into musical notes, creating a real-world soundtrack of what's in front of him. The invention could open up a new world for people who are color blind.
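For readers curious how such a color-to-sound translation might work in code, here is a toy sketch. The linear hue-to-frequency mapping and the chosen frequency range are assumptions for illustration; this is not how Harbisson's antenna or Google's tool actually works.

```python
# Toy color sonification sketch (illustrative assumption, not Harbisson's
# actual device or Google's tool): map a color's hue onto an audible pitch.
import colorsys

def hue_to_frequency(r: int, g: int, b: int,
                     f_min: float = 220.0, f_max: float = 880.0) -> float:
    """Map an RGB color's hue (0-1) linearly onto a frequency range in Hz."""
    hue, _, _ = colorsys.rgb_to_hsv(r / 255, g / 255, b / 255)
    return f_min + hue * (f_max - f_min)

# A saturated red (hue 0) lands at the bottom of the range,
# while a pure blue lands two-thirds of the way up.
print(round(hue_to_frequency(255, 0, 0)))  # 220 Hz
print(round(hue_to_frequency(0, 0, 255)))  # 660 Hz
```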
A new study suggests that private prisons hold prisoners longer, eroding much of the cost savings that private prisons are supposed to provide over public ones.
- Private prisons in Mississippi tend to hold prisoners 90 days longer than public ones.
- The extra days eat up half of the expected cost savings of a private prison.
- The study leaves several open questions, such as what effect these extra days have on recidivism rates.
The United States of America, land of the free, is home to 5 percent of the world's population but 25 percent of its prisoners. The cost of keeping so many people in the penal system adds up to $80 billion per year, more than three times the budget of NASA. This massive system exploded in size relatively recently, with the prison population increasing sixfold in the last four decades.
Ten percent of these prisoners are kept in private prisons, which are owned and operated for profit by contractors. In theory, these operations cost less than public prisons and jails, and states can save money by contracting them to incarcerate people. They have a long history in the United States and are used in many other countries as well.
However, despite the pervasiveness of private contractors in the American prison system, there is not much research into how well they live up to their promise to provide similar services at a lower cost to the state. The little research that is available often encounters difficulties in trying to compare the costs and benefits of facilities with vastly different operations and occasionally produces results suggesting there are few benefits to privatization.
A new study by Dr. Anita Mukherjee and published in the American Economic Journal: Economic Policy joins the debate with a robust consideration of the costs and benefits of private prisons. Its findings suggest that some private prisons keep people incarcerated longer and save less money than advertised.
The study focuses on prisons in Mississippi. Despite its comparatively high rate of incarceration, Mississippi's prison system is very similar to that of other states that also use private prisons. Demographically, its system is representative of the rest of the U.S. prison system, and its inmates are sentenced for similar amounts of time.
The state attempts to get the most out of its privatization efforts, as a 1994 law requires all contracts for private prisons in Mississippi to provide at least a 10 percent cost savings over public prisons while delivering similar services. As a result, the state seeks to maximize its savings by sending prisoners to private institutions first if space is available.
While public and private prisons in Mississippi are quite similar, there are a few differences that allow for the possibility of cost savings by private operators — not the least of which is that the guards are paid 30 percent less and have fewer benefits than their publicly employed counterparts.
The results of privatization
The graph depicts the likelihood of release for public (dotted line) vs. private (solid line) prison inmates. At every level of time served, public prisoners were more likely to be released than private prisoners. (Credit: Dr. Anita Mukherjee)
The study relied on administrative records of the Mississippi prison system between 1996 and 2013. The data included information on prisoner demographics, the crimes committed, sentence lengths, time served, infractions while incarcerated, and prisoner relocation within the system, including between public and private facilities. For this study, the sample was limited to prisoners serving sentences of one to six years who had served at least a quarter of their sentence. This created a primary sample of 26,563 bookings.
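As a rough illustration, the sample restriction described above might look like the following, assuming a hypothetical table of bookings; the file name and column names are invented for the sketch, as the study's records are not public in this form.

```python
# Hedged sketch of the sample restriction (hypothetical data and columns).
import pandas as pd

bookings = pd.read_csv("ms_prison_bookings.csv")  # hypothetical file

sample = bookings[
    # serving between one and six years...
    bookings["sentence_days"].between(365, 6 * 365)
    # ...and having served at least a quarter of the sentence
    & (bookings["time_served_days"] >= 0.25 * bookings["sentence_days"])
]
print(len(sample))  # the paper reports 26,563 bookings after this filter
```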
Analysis revealed that prisoners in private prisons were behind bars four to seven percent longer than those in public prisons, which translates to roughly 85 to 90 extra days per prisoner. This is, in part, because those in private prisons serve a greater portion of their sentences (73 percent) than those in public institutions (70 percent).
This in turn might be due to the much higher infraction rate in private prisons compared to public ones. While only 18 percent of prisoners in a public prison commit an infraction, such as disobeying a guard or possessing contraband, the number jumps to 46 percent in a private prison. Infractions can reduce the probability of early release or cause time to be added to a sentence.
It's unclear why there are so many more infractions in private prisons. Dr. Mukherjee suggests it could be the result of "harsher prison conditions in private prisons," better monitoring techniques, incentives to report more of them to the state before contract renewals, or even a lackadaisical attitude on the part of public prison employees.
What does all this cost Mississippi?
The extra time served eats 48 percent of the cost savings of keeping prisoners in a private facility. For example, it costs about $135,000 to house a prisoner in a private prison for three years and $150,000 in the public system. But longer stays in private prisons reduce the savings from $15,000 to only $7,800.
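Spelled out, the arithmetic behind that example, using only the figures quoted above:

```python
# Worked numbers from the three-year example above.
private_cost = 135_000  # three years per prisoner, private prison (USD)
public_cost = 150_000   # three years per prisoner, public prison (USD)

nominal_savings = public_cost - private_cost  # $15,000 on paper
eroded = 0.48 * nominal_savings               # 48% eaten by extra days served
remaining_savings = nominal_savings - eroded  # $7,800 actually saved

print(nominal_savings, round(eroded), round(remaining_savings))
# 15000 7200 7800
```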
As Dr. Mukherjee remarks, this figure captures only the financial cost. Some things are harder to measure:
"There are, of course, other costs that are difficult to quantify — e.g., the cost of injustice to society (if private prison inmates systematically serve more time), the inmate's individual value of freedom, and impacts of the additional incarceration on future employment. Abrams and Rohlfs (2011) estimates a prisoner's value of freedom for 90 days at about $1,100 using experimental variation in bail setting. Mueller-Smith (2017) estimates that 90 days of marginal incarceration costs about $15,000 in reduced wages and increased reliance on welfare. If these social costs were to exceed $7,800 in the example stated, private prisons would no longer offer a bargain in terms of welfare-adjusted cost savings."
It is possible that the extra time in prison provides benefits that counter these costs, such as a reduced recidivism rate, but this proved difficult to determine. Though the effect was not statistically significant, there was some evidence that the added time actually increased the rate of recidivism. If that's true, then private prisons could be counterproductive.