How morally developed are you?
- Lawrence Kohlberg presented children of various ages with a series of moral dilemmas to test how their responses differed.
- He identified three separate stages of moral development from the egoist to the principled person.
- Some people do not progress through all the stages of moral development, which means they will remain "morally undeveloped."
Has your sense of right and wrong changed over the years? Are there things that you see as acceptable today that you'd never dream of doing when you were younger? If you spend time around children, do you notice how starkly different their sense of morality is? How black and white, or egocentric, or oddly rational it can be?
These were questions that Lawrence Kohlberg asked, and his "stages of moral development" framework dominates much of moral psychology today.
The Heinz Dilemma
Kohlberg was curious to see how and why children differed in their ethical judgments, so he gave roughly 60 children, across a variety of ages, a series of moral dilemmas. All were asked open-ended questions to explain their answers, in order to minimize the risk of leading them toward a particular response.
For instance, one of the better-known dilemmas involved an old man called Heinz who needed an expensive drug for his dying wife. Heinz managed to raise only half the required money, which the pharmacist wouldn't accept. Unable to afford the drug, he has only three options. What should he do?
(a) Not steal it because it's breaking the law.
(b) Steal it, and go to jail for breaking the law.
(c) Steal it, but be let off a prison sentence.
What option would you choose?
Stages of Moral Development
From the answers he received, Kohlberg identified three distinct levels, or stages, of moral development.
Pre-conventional stage. This is characterized by an ego-centric attitude that seeks pleasure and to prevent pain. The primary motivation is to avoid punishment or claim a reward. In this stage of moral development, "good" is defined as whatever is beneficial to oneself. "Bad" is the opposite. For instance, a young child might share their food with a younger sibling not from kindness or some altruistic impulse but because they know that they'll be praised by their parents (or, perhaps, have their food taken away from them).
In the pre-conventional stage, there is no inherent sense of right and wrong, per se, but rather "good" is associated with reward and "bad" is associated with punishment. At this stage, children are sort of like puppies.
Conventional stage. This stage reflects a growing sense of social belonging and hence a higher regard for others. Approval and praise are seen as rewards, and behavior is calibrated to please others, obey the law, and promote the good of the family/tribe/nation. In the conventional stage, a person comes to see themselves as part of a community and that their actions have consequences.
Consequently, this stage is much more rule-focused and comes along with a desire to be seen as good. Image, reputation, and prestige matter the most in motivating good behavior — we want to fit into our community.
Post-conventional stage. In this final stage, there is much more self-reflection and moral reasoning, which gives people the capacity to challenge authority. Committing to principles is considered more important than blindly obeying fixed laws. Importantly, a person comes to understand the difference between what is "legal" and what is "right." Ideas such as justice and fairness start to mature. Laws or rules are no longer equated to morality but might be seen as imperfect manifestations of larger principles.
A lot of moral philosophy is only possible in the post-conventional stage. Theories like utilitarianism or Immanuel Kant's duty-focused ethics ask us to consider what's right or wrong in itself, not just because we get a reward or look good to others. Aristotle perhaps sums it up best when he wrote, "I have gained this from philosophy: that I do without being commanded what others do only from fear of the law."
How morally developed are you?
Kohlberg identified these stages as a developmental progression from early infancy all the way to adulthood, and they map almost perfectly onto Jean Piaget's psychology of child development. For instance, the pre-conventional stage usually lasts from birth to roughly nine years old, the conventional occurs mainly during adolescence, and the post-conventional goes into adulthood.
What's important to note, though, is that this is not a fixed timetable to which all humans adhere. Kohlberg thought, for instance, that some people never progress or mature. It's quite possible for someone to have no real moral compass at all (a trait sometimes associated with psychopathy).
More commonly, though, we all know people who are resolutely bound to the conventional stage, where they care only for their image or others' judgment. Those who do not develop beyond this stage are usually stubbornly, even aggressively, strict in following the rules or the law. Prepubescent children can be positively authoritarian when it comes to obeying the rules of a board game, for instance.
So, what's your answer to the Heinz dilemma? Where do you fall on Kohlberg's moral development scale? Is he right to view it as a progressive, hierarchical maturing, with "better" and "worse" stages? Or could it be that as we grow older, we grow more immoral?
Ever since we've had the technology, we've looked to the stars in search of alien life. It's assumed that we're looking because we want to find other life in the universe, but what if we're looking to make sure there isn't any?
Here's an equation, and a rather distressing one at that: N = R* × fp × ne × fl × fi × fc × L. It's the Drake equation, and it describes the number of alien civilizations in our galaxy with whom we might be able to communicate. Its terms correspond to values such as the fraction of stars with planets, the fraction of planets on which life could emerge, the fraction of planets that could support intelligent life, and so on. Using conservative estimates, the minimum result of this equation is 20. There ought to be 20 intelligent alien civilizations in the Milky Way that we can contact and who can contact us. But there aren't any.
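Because the Drake equation is just a product of factors, it is easy to experiment with. Below is a minimal Python sketch; the numerical inputs are purely hypothetical placeholders chosen for illustration, not the conservative estimates the article refers to.

```python
# Drake equation: N = R* * fp * ne * fl * fi * fc * L
# Each successive factor whittles down the number of star systems
# that could host a civilization we can communicate with.

def drake(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Return N, the expected number of contactable civilizations.

    r_star   -- average rate of star formation in the galaxy (stars/year)
    f_p      -- fraction of stars with planets
    n_e      -- planets per star that could support life
    f_l      -- fraction of those planets on which life actually appears
    f_i      -- fraction of life-bearing planets that evolve intelligence
    f_c      -- fraction of intelligent species that emit detectable signals
    lifetime -- years such a civilization keeps transmitting
    """
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

# Hypothetical, purely illustrative inputs:
n = drake(r_star=1.0, f_p=0.5, n_e=2, f_l=1.0, f_i=0.1, f_c=0.1, lifetime=2000)
print(f"Expected communicating civilizations: {n:.0f}")
```

Every factor past the fraction of stars with planets is deeply uncertain, so small changes in the assumptions swing N across orders of magnitude — which is part of why the paradox the next paragraph describes is so hotly contested.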
The Drake equation is an example of a broader issue in the scientific community: considering the sheer size of the universe and our knowledge that intelligent life has evolved at least once, there should be evidence of alien life. This is generally referred to as the Fermi paradox, after the physicist Enrico Fermi, who first examined the contradiction between the high probability of alien civilizations and their apparent absence. Fermi summed this up rather succinctly when he asked, "Where is everybody?"
But maybe this was the wrong question. A better question, albeit a more troubling one, might be "What happened to everybody?" Unlike asking where life exists in the universe, this question has a clearer potential answer: the Great Filter.
Why the universe is empty
Alien life is likely, but there is none that we can see. Therefore, it could be the case that somewhere along the trajectory of life's development, there is a massive and common challenge that ends alien life before it becomes intelligent enough and widespread enough for us to see—a great filter.
This filter could take many forms. It could be that having a planet in the Goldilocks zone (the narrow band around a star where it is neither too hot nor too cold for life to exist) and having that planet contain organic molecules capable of accumulating into life is extremely unlikely. We've observed plenty of planets in the Goldilocks zones of other stars (an estimated 40 billion in the Milky Way alone), but maybe the conditions there still aren't right for life to exist.
The Great Filter could occur at the very earliest stages of life. If you took high school biology, you might have had the refrain "mitochondria are the powerhouse of the cell" drilled into your head. I certainly did. However, mitochondria were at one point separate bacteria living their own existence. At some point on Earth, a single-celled organism tried to eat one of these bacteria, but instead of being digested, the bacterium teamed up with the cell, producing extra energy that enabled the cell to develop in ways that led to higher forms of life. An event like this might be so unlikely that it has happened only once in the Milky Way.
Or, the filter could be the development of large brains, as we have. After all, we live on a planet full of many creatures, and the kind of intelligence humans have has only occurred once. It may be overwhelmingly likely that living creatures on other planets simply don't need to evolve the energy-demanding neural structures necessary for intelligence.
What if the filter is ahead of us?
These possibilities assume that the Great Filter is behind us—that humanity is a lucky species that overcame a hurdle almost all other life fails to pass. This might not be the case, however; life might evolve to our level all the time but get wiped out by some unknowable catastrophe. Discovering nuclear power is a likely event for any advanced society, but it also has the potential to destroy such a society. Utilizing a planet's resources to build an advanced civilization also destroys the planet: the current process of climate change serves as an example. Or, it could be something entirely unknown, a major threat that we can't see and won't see until it's too late.
The bleak, counterintuitive suggestion of the Great Filter is that it would be a bad sign for humanity to find alien life, especially alien life with a degree of technological advancement similar to our own. If our galaxy is truly empty and dead, it becomes more likely that we've already passed through the Great Filter. The galaxy could be empty because all other life failed some challenge that humanity passed.
If we find another alien civilization, but not a cosmos teeming with a variety of alien civilizations, the implication is that the Great Filter lies ahead of us. The galaxy should be full of life, but it is not; one other instance of life would suggest that the many other civilizations that should be there were wiped out by some catastrophe that we and our alien counterparts have yet to face.
Fortunately, we haven't found any life. Although it might be lonely, it means humanity's chances at long-term survival are a bit higher than otherwise.
Cross-disciplinary cooperation is needed to save civilization.
- There is a great disconnect between the sciences and the humanities.
- Solutions to most of our real-world problems need both ways of knowing.
- Moving beyond the two-culture divide is an essential step to ensure our project of civilization.
For the past five years, I have run the Institute for Cross-Disciplinary Engagement at Dartmouth, an initiative sponsored by the John Templeton Foundation. Our mission has been to find ways to bring scientists and humanists together, often in public venues or, after Covid-19, online, to discuss questions that transcend the narrow confines of a single discipline.
It turns out that these questions are at the very center of the much-needed and urgent conversation about our collective future. While the complexity of the problems we face calls for an integration of different ways of knowing, the tools at hand are scarce and mostly ineffective. We need to rethink how to collaborate productively across disciplinary cultures.
The danger of hyper-specialization
The explosive expansion of knowledge that started in the mid-1800s led to hyper-specialization inside and outside academia. Even within a single discipline, say philosophy or physics, professionals often don't understand one another. As I wrote here before, "This fragmentation of knowledge inside and outside of academia is the hallmark of our times, an amplification of the clash of the Two Cultures that physicist and novelist C.P. Snow admonished his Cambridge colleagues in 1959." The loss is palpable, intellectually and socially. Knowledge does not yield to reductionism. Sure, a specialist will make progress in her chosen field, but the tunnel vision of hyper-specialization creates a loss of context: you do the work not knowing how it fits into the bigger picture or, more alarmingly, how it may impact society.
Many of the existential risks we face today — AI and its impact on the workforce, the dangerous loss of privacy due to data mining and sharing, the threat of cyberwarfare, the threat of biowarfare, the threat of global warming, the threat of nuclear terrorism, the threat to our humanity by the development of genetic engineering — are consequences of the growing ease of access to cutting-edge technologies and the irreversible dependence we all have on our gadgets. Technological innovation is seductive: we want to have the latest "smart" phone, 5k TV, and VR goggles because they are objects of desire and social placement.
Are we ready for the genetic revolution?
When the time comes, and experts believe it is coming sooner than we expect or are prepared for, genetic meddling with the human genome may drive social inequality to an unprecedented level with not just differences in wealth distribution but in what kind of being you become and who retains power. This is the kind of nightmare that Nobel Prize-winning geneticist Jennifer Doudna talked about in a recent Big Think video.
At the heart of these advances is the dual-use nature of science, its light and shadow selves. Most technological developments are perceived and sold as spectacular advances that will either alleviate human suffering or bring increasing levels of comfort and accessibility to a growing number of people. Curing diseases is what motivated Doudna and other scientists involved with CRISPR research. But with that also came the potential for altering the genetic makeup of humanity in ways that, again, can be used for good or evil purposes.
This is not a sci-fi movie plot. The main difference between biohacking and nuclear hacking is one of scale. Nuclear technologies require industrial-level infrastructure, which is very costly and demanding. This is why nuclear research and its technological implementation have been mostly relegated to governments. Biohacking can be done in someone's backyard garage with equipment that is not very costly. The Netflix documentary series Unnatural Selection brings this point home in terrifying ways. The essential problem is this: once the genie is out of the bottle, it is virtually impossible to enforce any kind of control. The genie will not be pushed back in.
Cross-disciplinary cooperation is needed to save civilization
What, then, can be done? Such technological challenges go beyond the reach of any single discipline. CRISPR, for example, may be an invention within genetics, but its impact is vast, demanding oversight and ethical safeguards that are far from our current reality. The same goes for global warming, rampant environmental destruction, and the growing air pollution and greenhouse gas emissions that are fast returning as we crawl into a post-pandemic era. Instead of learning the lessons of our 18 months of seclusion (that we are fragile before nature's powers, that we are co-dependent and globally linked in irreversible ways, that our individual choices affect many more than ourselves), we seem bent on decompressing our accumulated urges with impunity.
The experience from our experiment with the Institute for Cross-Disciplinary Engagement has taught us a few lessons that we hope can be extrapolated to the rest of society: (1) that there is huge public interest in this kind of cross-disciplinary conversation between the sciences and the humanities; (2) that there is growing consensus in academia that this conversation is needed and urgent, as similar institutes emerge in other schools; (3) that in order for an open cross-disciplinary exchange to be successful, a common language needs to be established with people talking to each other and not past each other; (4) that university and high school curricula should strive to create more courses where this sort of cross-disciplinary exchange is the norm and not the exception; (5) that this conversation needs to be taken to all sectors of society and not kept within isolated silos of intellectualism.
Moving beyond the two-culture divide is not simply an interesting intellectual exercise; it is, as humanity wrestles with its own indecisions and uncertainties, an essential step to ensure our project of civilization.
New study analyzes gravitational waves to confirm the late Stephen Hawking's black hole area theorem.
- A new paper confirms Stephen Hawking's black hole area theorem.
- The researchers used gravitational wave data to prove the theorem.
- The data came from Caltech and MIT's Advanced Laser Interferometer Gravitational-Wave Observatory.
The late Stephen Hawking's black hole area theorem is correct, a new study shows. Scientists used gravitational waves to prove the famous British physicist's idea, which may lead to uncovering more underlying laws of the universe.
The theorem, elaborated by Hawking in 1971, uses Einstein's theory of general relativity as a springboard to conclude that it is not possible for the surface area of a black hole to become smaller over time. The theorem parallels the second law of thermodynamics that says the entropy (disorder) of a closed system can't decrease over time. Since the entropy of a black hole is proportional to its surface area, both must continue to increase.
As a black hole gobbles up more matter, its mass and surface area grow. But as it grows, it also spins faster, which decreases its surface area. Hawking's theorem maintains that the increase in surface area that comes from the added mass would always be larger than the decrease in surface area because of the added spin.
Will Farr, one of the co-authors of the study that was published in Physical Review Letters, said their finding demonstrates that "black hole areas are something fundamental and important." His colleague Maximiliano Isi agreed in an interview with Live Science: "Black holes have an entropy, and it's proportional to their area. It's not just a funny coincidence, it's a deep fact about the world that they reveal."
What are gravitational waves?
Gravitational waves are "ripples" in spacetime, predicted by Albert Einstein in 1916, that are created by very violent processes happening in space. Einstein showed that very massive, accelerating space objects like neutron stars or black holes that orbit each other could cause disturbances in spacetime. Like the ripples produced by tossing a rock into a lake, they would bring about "waves" of spacetime that would spread in all directions.
As LIGO shared, "These cosmic ripples would travel at the speed of light, carrying with them information about their origins, as well as clues to the nature of gravity itself."
The gravitational waves detected by LIGO, whose 4-kilometer laser interferometer arms can sense the smallest distortions in spacetime, were generated 1.3 billion years ago by two giant black holes that were quickly spiraling toward each other.
Confirming Hawking's black hole area theorem
The researchers separated the signal into two parts, depending on whether it was from before or after the black holes merged. This allowed them to figure out the mass and spin of the original black holes as well as the mass and spin of the merged black hole. With this information, they calculated the surface areas of the black holes before and after the merger.
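Once masses and spins are in hand, the before-and-after comparison reduces to a short computation: a Kerr black hole of mass M and dimensionless spin chi has horizon area A = 8π(GM/c²)²(1 + √(1 − chi²)). The Python sketch below uses masses and a final spin roughly similar to published values for the first LIGO detection, with initial spins taken as zero for simplicity; the numbers are illustrative, not the study's measured values.

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def horizon_area(mass_msun, chi):
    """Horizon area of a Kerr black hole.

    A = 8 * pi * (G*M/c^2)^2 * (1 + sqrt(1 - chi^2)), where
    mass_msun is the mass in solar masses and chi is the
    dimensionless spin (0 <= chi < 1). Returns area in m^2.
    """
    r_g = mass_msun * M_SUN * G / C**2   # gravitational radius, meters
    return 8 * math.pi * r_g**2 * (1 + math.sqrt(1 - chi**2))

# Illustrative GW150914-like parameters: roughly 36 + 29 solar masses
# merging into a ~62 solar mass remnant with spin ~0.69.
area_before = horizon_area(36, 0.0) + horizon_area(29, 0.0)
area_after = horizon_area(62, 0.69)

print(f"combined area before merger: {area_before:.2e} m^2")
print(f"area after merger:           {area_after:.2e} m^2")
print("area theorem holds:", area_after > area_before)
```

Note that even though roughly three solar masses were radiated away as gravitational waves, the remnant's area still exceeds the sum: area grows with the square of the mass, so the larger final mass outweighs the area-shrinking effect of the final spin.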
"As they spin around each other faster and faster, the gravitational waves increase in amplitude more and more until they eventually plunge into each other — making this big burst of waves," Isi elaborated. "What you're left with is a new black hole that's in this excited state, which you can then study by analyzing how it's vibrating. It's like if you ping a bell, the specific pitches and durations it rings with will tell you the structure of that bell, and also what it's made out of."
The surface area of the resulting black hole was larger than the combined area of the original black holes, confirming Hawking's area law.