- Lawrence Kohlberg's experiments presented children of various ages with a series of moral dilemmas to test how their responses differed.
- He identified three separate stages of moral development, from the egoist to the principled person.
- Some people do not progress through all the stages of moral development, which means they will remain "morally undeveloped."
Has your sense of right and wrong changed over the years? Are there things that you see as acceptable today that you'd never dream of doing when you were younger? If you spend time around children, do you notice how starkly different their sense of morality is? How black and white, or egocentric, or oddly rational it can be?
These were questions that Lawrence Kohlberg asked, and his "stages of moral development" framework dominates much of moral psychology today.
The Heinz Dilemma
Kohlberg was curious to see how and why children differed in their ethical judgments, so he gave roughly 60 children, across a variety of ages, a series of moral dilemmas. The questions were open-ended, asking the children to explain their answers, in order to minimize the risk of leading them toward a particular response.
For instance, one of the better-known dilemmas involves a man named Heinz who needed an expensive drug for his dying wife. Heinz managed to raise only half the required money, which the pharmacist wouldn't accept. Unable to afford the drug, Heinz has only three options. What should he do?
(a) Not steal it because it's breaking the law.
(b) Steal it, and go to jail for breaking the law.
(c) Steal it, but be let off a prison sentence.
What option would you choose?
Stages of Moral Development
From the answers he got, Kohlberg identified three definite levels or stages of our moral development.
Pre-conventional stage. This is characterized by an egocentric attitude that seeks pleasure and avoids pain. The primary motivation is to avoid punishment or claim a reward. In this stage of moral development, "good" is defined as whatever is beneficial to oneself. "Bad" is the opposite. For instance, a young child might share their food with a younger sibling not from kindness or some altruistic impulse but because they know that they'll be praised by their parents (or, perhaps, because they fear having their food taken away if they don't).
In the pre-conventional stage, there is no inherent sense of right and wrong, per se, but rather "good" is associated with reward and "bad" is associated with punishment. At this stage, children are sort of like puppies.
Conventional stage. This stage reflects a growing sense of social belonging and hence a higher regard for others. Approval and praise are seen as rewards, and behavior is calibrated to please others, obey the law, and promote the good of the family/tribe/nation. In the conventional stage, a person comes to see themselves as part of a community and to understand that their actions have consequences for others.
Consequently, this stage is much more rule-focused and comes along with a desire to be seen as good. Image, reputation, and prestige matter the most in motivating good behavior — we want to fit into our community.
Post-conventional stage. In this final stage, there is much more self-reflection and moral reasoning, which gives people the capacity to challenge authority. Committing to principles is considered more important than blindly obeying fixed laws. Importantly, a person comes to understand the difference between what is "legal" and what is "right." Ideas such as justice and fairness start to mature. Laws or rules are no longer equated to morality but might be seen as imperfect manifestations of larger principles.
A lot of moral philosophy is only possible in the post-conventional stage. Theories like utilitarianism or Immanuel Kant's duty-focused ethics ask us to consider what's right or wrong in itself, not just because we get a reward or look good to others. Aristotle perhaps sums it up best when he wrote, "I have gained this from philosophy: that I do without being commanded what others do only from fear of the law."
How morally developed are you?
Kohlberg identified these stages as a developmental progression from early infancy all the way to adulthood, and they map closely onto Jean Piaget's psychology of child development. For instance, the pre-conventional stage usually lasts from birth to roughly nine years old, the conventional occurs mainly during adolescence, and the post-conventional goes into adulthood.
What's important to note, though, is that this is not a fixed timetable to which all humans adhere. Kohlberg thought, for instance, that some people never progress or mature. It's quite possible for someone to have no actual moral compass at all (which is sometimes associated with psychopathy).
More commonly, though, we all know people who are resolutely bound to the conventional stage, where they care only for their image or others' judgment. Those who do not develop beyond this stage are usually stubbornly, even aggressively, strict in following the rules or the law. Prepubescent children can be positively authoritarian when it comes to obeying the rules of a board game, for instance.
So, what's your answer to the Heinz dilemma? Where do you fall on Kohlberg's moral development scale? Is he right to view it as a progressive, hierarchical maturing, where we have "better" and "worse" stages? Or could it be that as we grow older, we grow more immoral?
In Aldous Huxley's novel Brave New World, technicians in charge of the hatcheries manipulate the nutrients they give the fetuses to make the newborns fit the desires of society. Two recent scientific developments suggest that Huxley's imagined world of functionally manufactured people is no longer far-fetched.
On March 17, 2021, an Israeli team announced that it had grown mouse embryos for 11 days – about half of the gestation period – in artificial wombs that were essentially bottles. Until this experiment, no one had grown a mammal embryo outside a womb this far into pregnancy. Then, on April 15, 2021, a U.S. and Chinese team announced that it had successfully grown, for the first time, embryos that included both human and monkey cells in plates to a stage where organs began to form.
As both a philosopher and a biologist, I cannot help but ask how far researchers should take this work. While creating chimeras – the name for creatures that are a mix of organisms – might seem like the more ethically fraught of these two advances, many ethicists think the medical benefits far outweigh the ethical risks. However, ectogenesis – growing a fetus outside the body – could have far-reaching impacts on individuals and society, and the prospect of babies grown in a lab has not been put under nearly the same scrutiny as chimeras.
Mouse embryos were grown in an artificial womb for 11 days, and organs had begun to develop.
Growing in an artificial womb
When in vitro fertilization first emerged in the late 1970s, the press called IVF embryos "test-tube babies," though they are nothing of the sort. These embryos are implanted into the uterus within a day or two after doctors fertilize an egg in a petri dish.
Before the Israeli experiment, researchers had not been able to grow mouse embryos outside the womb for more than four days – providing the embryos with enough oxygen had been too hard. The team spent seven years creating a system of slowly spinning glass bottles and controlled atmospheric pressure that simulates the placenta and provides oxygen.
This development is a major step toward ectogenesis, and scientists expect that it will be possible to extend mouse development further, possibly to full term outside the womb. This will likely require new techniques, but at this point it is a problem of scale – being able to accommodate a larger fetus. This appears to be a simpler challenge to overcome than figuring out something totally new like supporting organ formation.
The Israeli team plans to deploy its techniques on human embryos. Since mice and humans have similar developmental processes, it is likely that the team will succeed in growing human embryos in artificial wombs.
To do so, though, members of the team need permission from their ethics board.
CRISPR – a technology that can cut and paste genes – already allows scientists to manipulate an embryo's genes after fertilization. Once fetuses can be grown outside the womb, as in Huxley's world, researchers will also be able to modify their growing environments to further influence what physical and behavioral qualities these parentless babies exhibit. Science still has a way to go before fetus development and births outside of a uterus become a reality, but researchers are getting closer. The question now is how far humanity should go down this path.
Chimeras evoke images of mythological creatures of multiple species – like this 15th-century drawing of a griffin – but the medical reality is much more sober. (Martin Schongauer/Wikimedia Commons)
Human–monkey hybrids might seem to be a much scarier prospect than babies born from artificial wombs. But in fact, the recent research is more a step toward an important medical development than an ethical minefield.
If scientists can grow human cells in monkeys or other animals, it should be possible to grow human organs too. This could solve the problem of organ shortages around the world for people needing transplants.
But keeping human cells alive in the embryos of other animals for any length of time has proved to be extremely difficult. In the human-monkey chimera experiment, a team of researchers implanted 25 human stem cells into embryos of crab-eating macaques – a type of monkey. The researchers then grew these embryos for 20 days in petri dishes.
After 15 days, the human stem cells had disappeared from most of the embryos. But at the end of the 20-day experiment, three embryos still contained human cells that had grown as part of the region of the embryo where they were embedded. For scientists, the challenge now is to figure out how to maintain human cells in chimeric embryos for longer.
Regulating these technologies
Some ethicists have begun to worry that researchers are rushing into a future of chimeras without adequate preparation. Their main concern is the ethical status of chimeras that contain human and nonhuman cells – especially if the human cells integrate into sensitive regions such as a monkey's brain. What rights would such creatures have?
However, there seems to be an emerging consensus that the potential medical benefits justify a step-by-step extension of this research. Many ethicists are urging public discussion of appropriate regulation to determine how close to viability these embryos should be grown. One proposed solution is to limit growth of these embryos to the first trimester of pregnancy. Given that researchers don't plan to grow these embryos beyond the stage when they can harvest rudimentary organs, I don't believe chimeras are ethically problematic compared with the true test-tube babies of Huxley's world.
Few ethicists have broached the problems posed by the ability to use ectogenesis to engineer human beings to fit societal desires. Researchers have yet to conduct experiments on human ectogenesis, and for now, scientists lack the techniques to bring the embryos to full term. However, without regulation, I believe researchers are likely to try these techniques on human embryos – just as the now-infamous He Jiankui used CRISPR to edit human babies without properly assessing safety and desirability. Technologically, it is a matter of time before mammal embryos can be brought to term outside the body.
While people may be uncomfortable with ectogenesis today, this discomfort could pass into familiarity as happened with IVF. But scientists and regulators would do well to reflect on the wisdom of permitting a process that could allow someone to engineer human beings without parents. As critics have warned in the context of CRISPR-based genetic enhancement, pressure to change future generations to meet societal desires will be unavoidable and dangerous, regardless of whether that pressure comes from an authoritative state or cultural expectations. In Huxley's imagination, hatcheries run by the state grew large numbers of identical individuals as needed. That would be a very different world from today.
Sahotra Sarkar, Professor of Philosophy and Integrative Biology, The University of Texas at Austin College of Liberal Arts
Could a pill make you more moral? Should you take it if it could?
- Moral enhancement is the idea that technology can be used to make us more moral people.
- Proponents argue that we need to be better people in order to solve global problems.
- Ideas on how to use this ethically abound, but no solid consensus exists yet.
People have been artificially enhancing themselves for a long time. Caffeine and other stimulants improve our cognitive performance and might have helped make the Enlightenment possible. More controversially, some athletes use steroids to enhance their athletic performance beyond what would naturally be possible for them.
These aren't the only ways that we can use science and technology to improve our performance, of course. In the last few years, some philosophers have argued that we can, and perhaps should, use these tools to enhance our moral abilities to become a more cooperative, empathetic, or properly motivated species.
Moral enhancement explained
The term "moral enhancement" was first used in a 2008 essay by Tom Douglas. It generally refers to biomedical enhancements but can refer to any technological attempt to make humans more moral. While one could debate what "more moral" means, the literature on the subject focuses on ideas of making people more cooperative, altruistic, and the like.
I reached out to Dr. Joao Fabiano, a Visiting Fellow at Harvard University's Safra Center for Ethics, for more information. He expanded on the idea of moral enhancement and provided the motivation for it.
We all sometimes behave worse than we think we should but have a hard time improving. Moral enhancement would be a technological intervention that helps us behave as we should. There is often a certain pattern to our moral failures shared by most of us. As the neuroscience of morality progresses, we might be able to fix these failures with technology. In fact, we urgently need moral enhancement given the grave social problems these moral failures create and their ingrained biological nature...
...Many of these recurrent moral failures are connected to grave problems in society, such as our inability to tackle global threats (global warming, nuclear proliferation, and pandemics) and grave injustices. Often, these failures can be explained by evolutionary science; they are deep-seated adaptations hardwired in our brains which we can, sometimes, costly and partially control with improved social norms. For instance, many forms of group favoritism and discrimination, such as racism, are to some degree evolved adaptations to an ancestral environment where groups were small and at constant war, and long-distance trade was limited. As neuroscience continues to uncover the biological modulators of our moral behaviour, we might soon be able to reliably influence that behavior with technological interventions.
Ways to make people more moral
Several studies have demonstrated that the moral actions people take can be influenced with biomedical interventions. One found that people will be more aggressive and more likely to violate social norms when their serotonin levels are artificially lowered. Another found that increasing serotonin levels made people more harm-averse and more likely to stick to ideas of fairness. Lowering the amount of tryptophan, a precursor to serotonin and melatonin, that people have in their system makes them less cooperative.
Outside of the laboratory, some commonly used drugs, such as painkillers and antidepressants, are also known to slightly modify moral decision-making. Remember that the next time you try to make a decision after taking acetaminophen: the painkiller, sold as Tylenol, has also been shown to blunt empathy.
Dr. Fabiano points out that the widespread use of these drugs means that "technology is already interfering with our morality, sometimes in undesirable and unpredictable ways." He adds, "We should, at the very least, try to take control of that to produce desirable changes."
He also mentioned, however, that no drug that can reliably enhance moral behavior currently exists. So you shouldn't get the idea that you'll be able to enhance yourself tomorrow.
While philosophers have only been discussing this idea for the last decade or so, plenty of them have argued both for and against moral enhancement.
The basic argument for moral enhancement has been mentioned, namely, that we humans are inclined to certain moral failures, these failures can be corrected, and we have the ability to do so with technological interventions. Some thinkers, such as Julian Savulescu and Ingmar Persson, suggest that we have a moral imperative to do so, as the possibility for even a single person to cause widespread destruction is greater now than it has ever been.
On the other hand, some thinkers, like Allen Buchanan, suggest that while the problems that many proponents of moral enhancement want to solve are real, moral enhancement isn't likely to be a feasible solution to these problems.
Instead, these thinkers propose that non-medical interventions, such as adopting more progressive and accepting attitudes toward out-groups, have proven that our moral natures are not fixed and can be improved without technological intervention — even if the process is a little slow. They additionally have a few doubts about the feasibility or desirability of relying on technology to improve our morals and conclude that focusing on traditional methods is the better bet.
Of course, these are not mutually exclusive options, and it is possible that moral enhancement can be used in tandem with more traditional methods of making people more moral.
The many problems with moral enhancement
The problem of how to actually implement any technological solution remains unsolved. While some philosophers, including Dr. Fabiano, have developed frameworks to guide our use of this technology, there is no real consensus on it. This is a bit of a problem, as simplistic variations of moral enhancement, such as the use of chemical castration as a tool to try to reform sexual offenders, are already in use today in ways that are controversial.
Moral enhancement raises many other ethical questions. Which traits should be enhanced (or suppressed)? What are the side effects of taking a drug that alters your moral behavior? Should such treatments be required for some people, like violent criminals?
Ironically, there is even the chance that improving in-group cooperation, a possible excellent application of moral enhancement, could cause other problems. As Dr. Fabiano explains, "[T]here is a lot of empirical evidence indicating that a drug increasing cooperation between individuals would likely decrease cooperation between groups. Highly cooperative groups tend to be highly discriminatory. Such a drug would create more problems than it would solve."
On the other hand, the possible benefits of moral enhancement are obvious. People could become more cooperative, empathetic, or altruistic without the years of work that our current moral improvement systems require. Problems we currently face could vanish in the face of an enhanced population. As Dr. Savulescu argues, this is enough of a benefit to make moral enhancement a worthwhile consideration.
If offered to you, would you take the pill?
Sometimes, moral lessons can be learned from blowing away zombies.
- Most video games are happily escapist entertainment, but some are much more.
- One of these is The Last of Us Part II (TLOU2), which takes place in a post-apocalyptic pandemic world.
- Through the innovative use of game play technology, TLOU2 radically changes your perspective, elevating this game from entertainment to true art.
There are basically two kinds of people in the world: those who play (or played) video games and those who don't get video games at all.
Okay, I admit this might be an oversimplification. But for a 58-year-old guy who didn't start playing until about ten years ago, this bifurcation explains why so many people miss what is truly revolutionary in these revolutionary technologies. I find myself spending a lot of time explaining to my non-gamer friends (both young and old) that in the midst of all the alien shooters, battle royales, and side-scrolling melee fighters — FYI, these are game genres — there lies a radically potent new method for storytelling. And it's storytelling that provides one path by which a great video game can become great art. To illustrate this point, let me introduce The Last of Us Part II.
Released during the COVID-19 pandemic, The Last of Us Part II (TLOU2) tells a story in a world fallen to a pandemic. The subject matter certainly seems timely, but by itself, that doesn't mean much. Post-apocalyptic pandemic video games are a dime a dozen. There are a zillion titles out there that will let you spend 20 or 30 hours of game time mowing down zombies of one form or another while upgrading your weapons, health, and skills.
The sublime art of TLOU2
Now, don't get me wrong. The mowing down of zombies and the upgrading of skills common to many video games are just fine. Not every game has to be great art, just like not every movie you watch or novel you read has to be great art. There is, most definitely, a place in this world for mindless escape, entertainment, and fun. If you are into it, sneaking around some last-outpost-of-humanity while trying to take out dangerous zombies can be a delicious waste of time at the end of a hard day. But with TLOU2, there is all that and more.
The creators of TLOU2 take players on a difficult, exhausting journey through the consequences of violence.
Given the "Part II" in its title, TLOU2 is obviously the continuation of a story laid down in The Last of Us. That game followed Joel, a survival-hardened middle-aged smuggler who's been tasked with shepherding teenaged Ellie across the country 20 years after the pandemic outbreak. Ellie is immune to the infection that turns people into zombies. Joel is given his mission by a resistance group that hopes to use Ellie to find a final cure. The journey of Ellie and Joel (who lost his own teenaged daughter in the outbreak two decades earlier) is harrowing, and it has made The Last of Us almost universally recognized as one of the greatest video games ever made. I've written before about how TLOU's innovative use of game-playing mechanics redefined what was possible for storytelling. In TLOU2, creator Naughty Dog manages to make lightning strike twice, finding an entirely new path to transformative innovation.
Warning! From here on there are serious spoilers. If you think you want to play these games, STOP.
The Last of Us Part II. (Credit: Naughty Dog)
You've been warned
TLOU2 takes place four years after the end of the original game. The story is set in motion with the brutal murder of Joel as Ellie is forced to watch. It's an act of vengeance, a retribution for Joel's own choices at the end of the first game. So, what does TLOU2 do to make this game rise above a thousand other stories of vengeance and retribution? The answer lies in the most basic mechanics of game play: perspective.
When you play a video game like TLOU2, you take on the role of the character. This means you literally take control of their actions, seeing through their eyes (or over their shoulder) as you navigate them through the world and the story. This is where the digital technologies of video games take storytelling into new domains. In the hands of lesser creators, the possibilities of that power are lost, and you just get another ho-hum shooter with a weak story. That's not what happens in TLOU2.
The first half of the game follows Ellie as she tracks down Joel's killer and seeks her own vengeance. Her quarry is Abby, the daughter of a doctor that Joel killed at the end of the first game. Abby is now part of a paramilitary group in Seattle, and you, playing as Ellie, must work your way through the city to find her over the course of three days. Using stealth and combat, fighting both the infected (really terrifying zombies) and Abby's compatriots, the effort is unnerving and exhausting. Unlike most games, TLOU2 does not let you off the hook in its depiction of violence. The brutality of what you are doing cannot be avoided. Characters struggle for their lives and call to each other by name if you take one down. They are friends, and you are the one ending that friendship forever.
The big plot twist
You are doing all of this because, in a stunning design choice, TLOU2 switches that all-important perspective on you right in the middle of the game. With an impressive narrative mechanism, the clock gets reset to three days earlier, and you are now Abby, greeting one friend after another at the stadium that serves as the paramilitary group's base of operations. You get breakfast at the commissary and chat with folks in the line. You check out gear for the upcoming patrol and take responsibility for a playful guard dog named Alice.
As you move Abby through these often intimate interactions, you come to realize that these are all the people that you just murdered (including the dog) in the first half of the game when you were Ellie. It's a terrible, harrowing shift that colors the rest of the game as it goes on to unpack deeper issues about the strictures of our tribalism, our capacities for choice, and the possibilities of forgiveness. In the end, I was just blown away.
What matters for our discussion today is that the immense power of TLOU2 — namely, its ability to haunt me months after I finished the game — is due to the medium. Yes, a novel or film can force a change in perspective and that can be arresting. But it's the immersion, the agency, and the appearance of choice (even if limited) in video games that radically shifts the experience of perspective in a story. And in that shift comes a transcendence, a reframing, and a learning that are all the reasons why we turn to art. Ultimately, one reason we create art, one reason we participate in art, is an effort to learn something. Through it, we hope to find something deeper, something more about this mystery of being human.
That is what TLOU2 accomplishes. Through the medium of video games, the creators of TLOU2 take players on a difficult, exhausting journey through the consequences of violence. Given that medium's usual careless treatment of violence, making such a journey possible was not a small thing. It was revealing, and that is what we can, and should, ask from true art.
Instead of insisting that we remain "free from" government control, we should view taking vaccines and wearing masks as a "freedom to" be a moral citizen who protects the lives of others.
- Now that vaccines are becoming widely available, why do so many insist on not taking them?
- As different episodes in history have illustrated — including the building of the atomic bomb in the U.S. — true freedom is to choose to place the well-being of your family, community, and country above your own personal values.
- We shouldn't confuse the privilege of choice with a threat to personal freedom. In threatening times, our best defense is to act together to the benefit of all.
Pandemic fatigue is beginning to grind. Amidst yet another pandemic wave cresting in America and in Europe, we have to ask ourselves what's going on, now that vaccines are becoming available. Americans are justly proud of living in a country where personal choices—political, religious, sexual—are supposedly free. I write "supposedly" because clearly there is widespread prejudice and judgement of others and their choices. Acceptance of differences and open-mindedness is still on the to-do list for many. Still, at least we don't have army tanks rolling down the streets when people demonstrate their political or social views. Not usually anyway. For comparison, look at what's happening in Myanmar.
What puzzles me is what could be called the ditching of privilege. I look, for example, at the situation in Brazil, where I was born and grew up. A huge shortage of vaccines and a government that has consistently downplayed the science has resulted in massive fatalities. People are clamoring for help while hospitals are nearing capacity. In the U.S., vaccines are becoming widely available for younger sectors of the population. In two to three months, we could reach herd immunity and life could be close to normal again. Yet, many are choosing not to take the vaccine or to wear masks. "It is my choice and no government should mess with it!" This kind of choice illustrates a confusing conflict between personal freedom and civic duty. When should you sacrifice your personal choices and views for the benefit of your family, community, and ultimately, country?
The choice to get a vaccine and to wear a mask is an expression of your freedom to be a moral citizen and to protect your family, community, and country.
I'm going to take a detour here and go back to another time when a group of individuals had to face a very difficult choice between personal views and civic duty. In 1941, the Japanese bombing of Pearl Harbor prompted the U.S. to join the Allies in the war against Germany and Japan. Two years earlier, on August 2, 1939, Albert Einstein wrote a letter to President Franklin Roosevelt sounding the alarm about a very possible Nazi nuclear bomb. "In view of this situation you may think it desirable to have some permanent contact maintained between the Administration and the group of physicists working on chain reactions in America," Einstein wrote.
Now, Einstein was an outspoken pacifist, as were many of the physicists then working to understand nuclear chain reactions. When the Manhattan Project to build a U.S. atomic bomb started for real in 1942, the main worry and motivation for the group of scientists working in secrecy at Los Alamos was the fear of Hitler with a nuclear bomb in his hands. A split happened within the group. Some scientists pushed the moral worries of building a weapon of mass destruction aside and undertook the formidable technical challenge as another tough scientific problem to figure out. Others, however, had serious moral qualms about participating in the project, knowing very well what the social and political consequences would be. Still, they pushed their personal views aside and worked to build the bomb. The fear of a Nazi threat and the sense of civic duty, the need to protect their country, their community, their families, and their values took center stage, superseding their personal choice.
Choosing to place community and love for the nation over personal gain or values is what German social psychologist and humanistic philosopher Erich Fromm called "freedom to," as opposed to "freedom from." Fromm argued that the course of civilization and industrialization led citizens to an ever-growing process of individuation — the realization of your aloneness as an individual in a large society — where the weight of choosing for oneself became a heavy emotional burden. People who once saw themselves protected by their communities and religious faith were now set adrift by the very progress of democracy and capitalism. Freedom came with a heavy emotional cost. The consequence was the rise of fascist authoritarian governments that effectively chose for the individuals, giving them a sense of relief from the burden of choice.
Most people focus their battles in the "freedom from" category, confusing their individual freedom with their duties to community and country. The scientists who chose to continue working on the bomb against their personal values did so because they were not placing their individual choices above all else. They understood that the damage from the outside threat — a Nazi bomb — would have a devastating effect on their lives, families, communities, and country. So, they chose to work on the bomb to protect their freedom.
Let's apply this lesson to vaccines and mask-wearing. At face value, these seem to be personal choices. And if you see them as personal choices then you conclude that any action against your personal choice is a threat to your freedom from government control. But that's a fundamental mistake. The choice to get a vaccine and to wear a mask is an expression of your freedom to be a moral citizen and to protect your family, community, and country. The virus is the outside threat that has already compromised everyone's way of life, caused immense loss and pain, and wreaked havoc with the economy across the globe. By doing something for your family, community, and country you exercise your freedom to protect what's dear to you. This is what an act of love is.