Seek pleasure and avoid pain. Why make it more complicated?
- The Epicureans were some of the world's first materialists and argued that there is neither God, nor gods, nor spirits, but only atoms and the physical world.
- They believed that life was about finding pleasure and avoiding pain and that both were achieved by minimizing our desires for things.
- The Epicurean Four Step Remedy is advice on how we can face the world, achieve happiness, and not worry as much as we do.
Self-help books are consistently on the best-seller lists across the world. We can't seem to get enough of happiness advice, wellness gurus, and life coaches. But, as the Book of Ecclesiastes says, there is nothing new under the sun. The Ancient Greeks were into the self-help business millennia before the likes of Dale Carnegie and Mark Manson.
Four schools of ancient Greek philosophy
From the 3rd century BCE until the birth of Jesus, Greek philosophy was locked into an ideological war. Four rival schools emerged, each proclaiming loudly that they — alone — had the secret to a happy and fulfilled life. These schools were: Stoicism, Cynicism, Skepticism, and Epicureanism. Each had their advocates and even had a kind of PR battle to get people to sign up to their side. They were trying to sell happiness.
Many of us are familiar with Stoicism, a topic I covered recently, because it forms the foundation of cognitive behavioral therapy. Skepticism and Cynicism have become watered down or warped variations of their original forms. (I will cover these in future articles.) Today, we focus on the most underappreciated of these schools, the Epicureans. In their philosophy, we can find a surprisingly modern and easy-to-follow "Four Part Remedy" to life.
Epicureans: The first atheists
The Epicureans were some of history's first materialists. They believed that the world was made up only of atoms (and void), and that everything is simply a particular composition of these atoms. There were no gods, spirits, or souls (or, at most, they're irrelevant to the world as we encounter it). They thought that there was no afterlife or immortality to be had, either. Death is just a relocation of atoms. This atheism and materialism was what the Christian Church would later come to despise, and after centuries of being villainized by priests, popes, and church doctrine, the Epicureans fell out of fashion.
In the atomistic, worldly philosophy of the Epicureans, all there is to life is to get as much pleasure as you can and avoid pain. This isn't to become some rampant hedonist, staggering from opium dens to brothels, but concerns the higher pleasures of the mind.
Epicurus, himself, believed that pleasure was defined as the satisfying of a desire, such as when we drink a glass of water when we're really thirsty. But, he also argued that desires themselves were painful since they, by definition, meant longing and anguish. Thirst is a desire, and we don't like being thirsty. True contentment, then, could not come from creating and indulging pointless wants but must instead come from minimizing desire altogether. What would be the point of setting ourselves new targets? These are just new desires that we must make efforts to satisfy. Thus, minimizing pain meant minimizing desires, and the bare minimum desires were those required to live.
The Four Part Remedy
Given that Epicureans were determined to maximize pleasure and minimize pain, they developed a series of rituals and routines designed to help. One of the best known (not least because we've lost so much written by the Epicureans) was the so-called "Four Part Remedy." These were four principles they believed we ought to accept so that we might find solace and be rid of existential and spiritual pain:
1. Don't fear God. Remember, everything is just atoms. You won't go to hell, and you won't go to heaven. The "afterlife" will be nothingness, in just the same way as when you had no awareness whatsoever of the dinosaurs or Cleopatra. There was simply nothing before you existed, and death is a great expanse of the same timeless, painless void.
2. Don't worry about death. This is a natural corollary of Step 1. With no body, there is no pain. In death, we lose all of our desires and, along with them, suffering and discontent. It's striking how similar in tone this sounds to a lot of Eastern, especially Buddhist, philosophy at the time.
3. What is good is easy to get. Pleasure comes in satisfying desires, specifically the basic, biological desires required to keep us alive. Anything more complicated than this, or harder to achieve, just creates pain. There's water to be drunk, food to be eaten, and beds to sleep in. That's all you need.
4. What is terrible is easy to endure. Even if it is difficult to satisfy the basic necessities, remember that pain is short-lived. We're rarely hungry for long, and sicknesses most often will be cured easily enough (and this was written 2300 years before antibiotics). All other pains often can be mitigated by pleasures to be had. If basic biological necessities can't be met, then you die — but we already established there is nothing to fear from death.
Epicurus's guide to living is noticeably different from a lot of modern self-help books in just how little day-to-day advice it gives. It doesn't tell us "the five things you need to do before breakfast" or "visit these ten places, and you'll never be sad again." Just like its rival school of Stoicism, Epicureanism is all about a psychological shift of some kind.
Namely, that psychological shift is about recognizing that life doesn't need to be as complicated as we make it. At the end of the day, we're just animals with basic needs. We have the tools necessary to satisfy our desires, but when we don't, we have huge reservoirs of strength and resilience capable of enduring it all. Failing that, we still have nothing to fear because there is nothing to fear about death. When we're alive, death is nowhere near; when we're dead, we won't care.
Practical, modern, and straightforward, Epicurus offers a valuable insight to life. It's existential comfort for the materialists and atheists. It's happiness in four lines.
- The history of AI shows boom periods (AI summers) followed by busts (AI winters).
- The cyclical nature of AI funding is due to hype and promises not fulfilling expectations.
- This time, we might enter something resembling an AI autumn rather than an AI winter, but fundamental questions remain about whether true AI is even possible.
The dream of building a machine that can think like a human stretches back to the origins of electronic computers. But ever since research into artificial intelligence (AI) began in earnest after World War II, the field has gone through a series of boom and bust cycles called "AI summers" and "AI winters."
Each cycle begins with optimistic claims that a fully, generally intelligent machine is just a decade or so away. Funding pours in and progress seems swift. Then, a decade or so later, progress stalls and funding dries up. Over the last ten years, we've clearly been in an AI summer as vast improvements in computing power and new techniques like deep learning have led to remarkable advances. But now, as we enter the third decade of the 21st century, some who follow AI feel the cold winds at their back leading them to ask, "Is Winter Coming?" If so, what went wrong this time?
A brief history of AI
To see if the winds of winter are really coming for AI, it is useful to look at the field's history. The first real summer can be pegged to 1956 and the famous workshop at Dartmouth College where one of the field's pioneers, John McCarthy, coined the term "artificial intelligence." The conference was attended by scientists like Marvin Minsky and H. A. Simon, whose names would go on to become synonymous with the field. For those researchers, the task ahead was clear: capture the processes of human reasoning through the manipulation of symbolic systems (i.e., computer programs).
Throughout the 1960s, progress seemed to come swiftly as researchers developed computer systems that could play chess, deduce mathematical theorems, and even engage in simple discussions with a person. Government funding flowed generously. Optimism was so high that, in 1970, Minsky famously proclaimed, "In three to eight years we will have a machine with the general intelligence of a human being."
By the mid 1970s, however, it was clear that Minsky's optimism was unwarranted. Progress stalled as many of the innovations of the previous decade proved too narrow in their applicability, seeming more like toys than steps toward a general version of artificial intelligence. Funding dried up so completely that researchers soon took pains not to refer to their work as AI, as the term carried a stink that killed proposals.
The cycle repeated itself in the 1980s with the rise of expert systems and the renewed interest in what we now call neural networks (i.e., programs based on connectivity architectures that mimic neurons in the brain). Once again, there was wild optimism and big increases in funding. What was novel in this cycle was the addition of significant private funding as more companies began to rely on computers as essential components of their business. But, once again, the big promises were never realized, and funding dried up again.
AI: Hype vs. reality
The AI summer we're currently experiencing began sometime in the first decade of the new millennium. Vast increases in both computing speed and storage ushered in the era of deep learning and big data. Deep learning methods use stacked layers of neural networks that pass information to each other to solve complex problems like facial recognition. Big data provides these systems with vast oceans of examples (like images of faces) to train on. The applications of this progress are all around us: Google Maps gives you near-perfect directions; you can talk with Siri anytime you want; IBM's Watson beat Jeopardy!'s greatest human champions.
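To make "stacked layers that pass information to each other" a little more concrete, here is a minimal sketch in Python of a forward pass through such a stack. The layer sizes, random weights, and "face vs. not face" framing are all made up for illustration; real deep learning systems train these weights on data and use far larger, more specialized architectures.

```python
import numpy as np

def relu(x):
    # Simple nonlinearity applied between layers
    return np.maximum(0, x)

def forward(x, layers):
    """Pass an input through a stack of layers; each layer's output feeds the next."""
    *hidden, (w_last, b_last) = layers
    for weights, bias in hidden:
        x = relu(x @ weights + bias)
    return x @ w_last + b_last  # final layer: raw scores, one per class

# Three stacked layers with made-up sizes (input: a flattened 8x8 image patch).
rng = np.random.default_rng(0)
layers = [
    (rng.normal(size=(64, 32)), np.zeros(32)),
    (rng.normal(size=(32, 16)), np.zeros(16)),
    (rng.normal(size=(16, 2)), np.zeros(2)),  # 2 outputs, e.g., "face" vs. "not face"
]

patch = rng.normal(size=64)    # stand-in for pixel values
print(forward(patch, layers))  # untrained scores; training would tune the weights
```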
In response, the hype rose again. True AI, we were told, must be just around the corner. In 2015, for example, The Guardian reported that self-driving cars, the killer app of modern AI, were close at hand. Readers were told, "By 2020 you will become a permanent backseat driver." And just two years ago, Elon Musk claimed that by 2020 "we'd have over a million cars with full self-driving software."
By now, it's obvious that a world of fully self-driving cars is still years away. Likewise, in spite of the remarkable progress we've made in machine learning, we're still far from creating systems that possess general intelligence. The emphasis is on the term general because that's what AI really has been promising all these years: a machine that's flexible in dealing with any situation as it comes up. Instead, what researchers have found is that, despite all their remarkable progress, the systems they've built remain brittle, which is a technical term meaning "they do very wrong things when given unexpected inputs." Try asking Siri to find "restaurants that aren't McDonald's." You won't like the results.
Unless we are talking about very specific tasks, any 6-year-old is infinitely more flexible and general in his or her intelligence than the "smartest" Amazon robot.
Even more important is the sense that, as remarkable as they are, none of the systems we've built understand anything about what they are doing. As philosopher Alva Noë said of Watson's famous Jeopardy! victory, "Watson answered no questions. It participated in no competition. It didn't do anything. All the doing was on our side. We played Jeopardy! with Watson." Considering this fact, some researchers claim that the general intelligence — i.e., the understanding — we humans exhibit may be inseparable from our experiencing. If that's true, then our physical embodiment, enmeshed in a context-rich world, may be difficult if not impossible to capture in symbolic processing systems.
Not the (AI) winter of our discontent
Thus, talk of a new AI winter is popping up again. Given the importance of deep learning and big data in technology, it's hard to imagine funding for these domains drying up any time soon. What we may be seeing, however, is a kind of AI autumn in which researchers wisely recalibrate their expectations and perhaps rethink their perspectives.
A new study explores how investors' behavior is affected by participating in online communities, like Reddit's WallStreetBets.
- The study found evidence that "hype" over assets is psychologically contagious among investors in online communities.
- This hype is self-perpetuating: A small group of investors hypes an asset, bringing in new investors, until growth becomes unsteady and a price crash ensues.
- The researchers suggested that these new kinds of self-organized, social media-driven investment behaviors are unlikely to disappear anytime soon.
Social media has reshaped human behavior in ways we're only starting to understand. The proliferation of online communities has helped spawn novel strategies for promoting political causes, conducting business, finding sex and love, and transforming culture.
Could online communities also transform behavior in the financial world?
That's one of the key questions explored in a new study published on the preprint server arXiv. Titled "Reddit's self-organised bull runs: Social contagion and asset prices," the study used discussion data from the subreddit WallStreetBets to analyze relationships between the price of stocks and "hype" among online retail investors.
Hype is nothing new in the investing world. But the researchers noted that there seems to be something novel about the short squeeze of GameStop's stock in January, when the price of the stock rose tenfold, thanks largely to self-organized retail investors from WallStreetBets.
"As academics and regulators alike grapple with the implications, many wonder whether large-scale coordination among retail investors is the new 'modus operandi,' or a one-off fluke," the researchers wrote. "We argue that this is a new manifestation of a well-established global phenomenon."
To better understand how online hype is associated with stock prices, the researchers focused on two social components of hype: contagion and consensus. Contagion refers to investors spreading interest in an asset among each other, while consensus refers to their ability to agree on whether to buy or sell an asset.
The analysis found empirical evidence that both contagion and consensus emerge in online communities like WallStreetBets. In other words, investors spread sentiments about future stock performance to other investors, and then they cohere around investment strategies.
Popularity over fundamentals
The findings suggest that an asset's popularity, not its fundamentals, is paramount to many investors.
"Our results consistently show that investors become interested in discussing an asset, not because of fundamentals, but because other users discuss it," the researchers wrote. "Subsequently, this paper tests whether an individual's sentiment about future asset performance [is] affected by those of others. We find that this is the case: people look to their peers to form an opinion about an asset's potential."
To find evidence for social contagion among online investors, the researchers compiled a large dataset of posts and comments submitted to WallStreetBets. The goal was to analyze whether investors' past comments or posts about a given stock, such as Tesla, had a predictable effect on future discussions of that asset within WallStreetBets.
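The paper's exact model isn't reproduced here, but the flavor of that kind of test is easy to sketch: regress today's mention count for a ticker on yesterday's and check whether the lag coefficient is positive. Everything below (the toy series, the single-lag specification) is a hypothetical illustration, not the study's data or method.

```python
import numpy as np

# Hypothetical daily counts of WallStreetBets posts mentioning one ticker.
rng = np.random.default_rng(42)
mentions = np.abs(np.cumsum(rng.normal(size=200))) + 1  # toy series, not real data

# Does yesterday's discussion predict today's?  mentions[t] ~ a + b * mentions[t-1]
y = mentions[1:]
X = np.column_stack([np.ones(len(y)), mentions[:-1]])
(a, b), *_ = np.linalg.lstsq(X, y, rcond=None)

print(f"intercept = {a:.2f}, lag coefficient = {b:.2f}")
# A clearly positive lag coefficient would be consistent with "discussion feeding discussion."
```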
The regression analysis suggested that hype is socially contagious and cyclical. The cycle usually plays out like this: A small group of investors hypes an asset. This attracts a larger group of investors who join the discussions.
But eventually, too many investors have joined the discussion, and fewer new investors are buying into the hype. As investors lose interest, they spend less time discussing (or "spreading") the asset on the forum, and they turn to new opportunities. The process is similar to a virus: As enough people become infected, they reach herd immunity, and the virus (hype) dies out.
So, does this process affect the stock price, and if so, how? The researchers said it was difficult to establish causality between hype and actual market activity. After all, they didn't have access to the trading records of subscribers to WallStreetBets.
But their model did show that activity on WallStreetBets was able to explain "significant variance" in trading volumes for the most-discussed assets on the forum. This suggests that when social contagion is strong for a given asset, consensus is strong too.
On the stock chart, consensus may start off bullish (or positively): As hype spreads, there's a slow, steady run-up in price. But the growth eventually becomes unstable and is followed by a crash and a period of volatility.
"The price crash stems from panic selling, as investors turn nervous in the face of volatility," the researchers wrote.
Bad news spreads faster than good news
Interestingly, the analysis found that bearish (or negative) sentiments were significantly more contagious on WallStreetBets.
"The data demonstrates that authors who previously commented on a bearish post are 47.7% more likely to express bearish over neutral sentiments, and 18.1% less likely to express bullish sentiments over neutral sentiments. Similarly, but less markedly, authors who previously commented on at least one bullish submission are 9.4% more likely to write a bullish submission, yet 11.3% less likely to write a bearish one."
The researchers said that the changing investing climate and widely available online data offer "promising opportunities for future research."
"As social media galvanizes a larger pool of retail investors with the potential for exciting stock market gambles, it is crucial to understand how social dynamics can impact asset prices," the researchers wrote. "With the first publicly acclaimed victory of Main Street over Wall Street, in the form of the GameStop short squeeze, it is unlikely that socially-driven asset volatility will simply disappear."
A new study used functional near-infrared spectroscopy (fNIRS) to measure brain activity as inexperienced and experienced soccer players took penalty kicks.
- The new study is the first to use in-the-field imaging technology to measure brain activity as people delivered penalty kicks.
- Participants were asked to kick a total of 15 penalty shots under three different scenarios, each designed to be increasingly stressful.
- Kickers who missed shots showed higher activity in brain areas that were irrelevant to kicking a soccer ball, suggesting they were overthinking.
In a 2019 soccer match, Swansea City was down 1-0 against West Brom late in the first half. A penalty was called against West Brom. Swansea midfielder Bersant Celina was preparing to deliver a penalty kick. He scuttled up to the ball, but his foot only made partial contact, lobbing it weakly to the right.
Was it a simple mistake? Maybe. But there might be deeper explanations for why professional athletes choke under high-pressure situations.
A new study published in Frontiers in Computer Science used functional near-infrared spectroscopy (fNIRS) to analyze the brain activity of inexperienced and experienced soccer players as they missed penalty shots. Although past research has explored why soccer players miss penalty shots, the recent study is the first to do so using in-the-field fNIRS measurement.
The results showed that kickers who choked were activating parts of their brain associated with long-term thinking, self-instruction, and self-reflection. The chokers, in other words, were overthinking it.
The psychology of penalty kicks
Penalty shots offer an interesting case study of how mental pressure affects physical performance. After all, there's a lot at stake, not only because the kick can sometimes render a win or loss, but also because there are sometimes millions of people anxiously watching, some of whom might have a financial interest in the outcome.
That pressure is no joke. For example, research on Men's World Cup penalty shoot-outs has shown that when the score is tied and a goal means an immediate win, players score 92 percent of kicks. But when teams are facing elimination in a shootout, and the kick determines an immediate tie or loss, players only score 60 percent of the time.
"How can it be that football players with a near perfect control over the ball (they can very precisely kick a ball over more than 50 meters) fail to score a penalty kick from only 11 meters?" study co-author Max Slutter, of the University of Twente in the Netherlands, said in a press release.
"Obviously, huge psychological pressure plays a role, but why does this pressure cause a missed penalty? We tried to answer this by measuring the brain activity of football players during the physical execution of a penalty kick."
In the new study, the researchers aimed to answer two key questions about choking under pressure among both experienced and inexperienced players: (1) What is the difference in brain activity between success (scoring) and failure (missing) when taking a penalty kick? (2) What brain activity is associated with performing under pressure during a penalty kick situation?
To find out, the researchers asked ten experienced soccer players and twelve inexperienced players to participate in a penalty-kicking task. The task was divided into three rounds, each of which was designed to be increasingly stressful:
- Round 1 had no goalkeeper and was labeled as a practice round.
- Round 2 had a friendly goalkeeper who wasn't allowed to distract the kicker.
- Round 3 had a competitive goalkeeper who was allowed to distract the kicker, and kickers were also competing for a prize.
Participants kicked five shots in each round. They wore a fNIRS-equipped headset during the task that measured activity in various parts of the brain.
All participants performed worse in the second and third rounds and reported experiencing the most pressure in the third round. Inexperienced players performed worse than experienced players, which might suggest that they were less able to deal with the mental stress.
The locations in which experienced and inexperienced players kicked the ball in each round. Red dots represent missed penalties and green dots represent scored penalties. (Slutter et al., Frontiers in Computer Science, 2021)
The neuroscience of choke artists
So, what types of brain activity were associated with missed shots?
The most noticeable result was that kickers missed more shots when they showed higher activity in their prefrontal cortex (PFC), an area of the brain associated with long-term planning. This was especially true among participants who reported higher levels of anxiety. More specifically, experienced soccer players who missed shots showed high activity in the left temporal cortex, which is related to self-instruction and self-reflection.
"By activating the left temporal cortex more, experienced players neglect their automated skills and start to overthink the situation," the researchers wrote. "This increase can be seen as a distracting factor."
Also, when players of all experience levels felt anxious and missed shots, they showed less activity in the motor cortex, which is the brain area most directly associated with kicking a penalty shot.
Don't overthink it
The results suggest that mental pressure can activate parts of the brain that are irrelevant to the task at hand. In general, expert athletes show more efficient brain activity — that is, more activity in relevant areas, and less activity in irrelevant areas — and therefore experience fewer distractions. This is likely one reason why they were more successful at penalties than inexperienced players in high-stress situations.
This principle is described by neural efficiency theory, and it applies not only to athletes but experts in any field. As you gain mastery over something, you can rely more on automatic brain processes rather than deliberate thinking, which can lead to distractions. The authors of the study concluded that their results provide supporting evidence for neural efficiency theory.
Still, as long as our experts are human, it seems that high-pressure situations can turn anyone into a choke artist.
What's the difference between brainwashing and rehabilitation?
- The book and movie A Clockwork Orange powerfully ask us to consider the murky lines between rehabilitation, brainwashing, and dehumanization.
- There are a variety of ways, from hormonal treatment to surgical lobotomies, to force a person to be more law-abiding, calm, or moral.
- Is a world with less free will but also with less suffering one in which we would want to live?
Alex is a criminal. A violent and sadistic criminal. So, we decide to do something about it. We're going to "rehabilitate" him.
Using a new and exciting "Ludovico" technique, we'll change his brain chemistry to make him an upstanding, moral citizen. Alex will be forced to watch violent movies as his body is pumped with nausea-inducing drugs. After a while, he'll come to associate violence with this horrible sickness. And, after a course of Ludovico, Alex can happily return to society, never again doing an immoral or illegal act. He'll no longer be a danger to himself or anyone else.
This is the story of A Clockwork Orange by Anthony Burgess, and it raises important questions about the nature of moral decisions, free will, and the limits of rehabilitation.
Today's Clockwork Orange
This might seem like unbelievable science fiction, but it might be truer — and nearer — than we think. In 2010, Dr. Molly Crockett did a series of experiments on moral decision-making and serotonin levels. Her results showed that people with more serotonin were less aggressive or confrontational and much more easy-going and forgiving. When we're full of serotonin, we let insults pass, are more empathetic, and are less willing to do harm.
The idea that biology affects moral decisions is obvious. Most of us are more likely to be short-tempered and spiteful if we're tired or hungry, for instance. Conversely, we have the patience of a saint if we've just received some good news, had half a bottle of wine, or had sex.
If our decision-making can be manipulated or determined by our biology, should we not try various interventions to prevent the criminally inclined from harming others?
What is the point of prison? This is itself no easy question, and it's one with a rich philosophical debate. Surely one of the biggest reasons is to protect society by preventing criminals from reoffending. This might be achievable by manipulating a felon's serotonin levels, but why not go even further?
Today, we know enough about the brain to have identified a very particular part of the prefrontal cortex responsible for aggressive behavior. We know that certain abnormalities in the amygdala can result in anti-social behavior and rule breaking. If the purpose of the penal system is to rehabilitate, then why not "edit" these parts of the brain in some way? This could be done in a variety of ways.
Electroconvulsive therapy (ECT) is a surprisingly common practice in much of the developed world. Its supporters say that it can help relieve major mental health issues such as depression or bipolar disorder as well as alleviate certain types of seizures. Historically, and controversially, it has been used to "treat" homosexuality and was used to threaten those misbehaving in hospitals in the 1950s (as notoriously depicted in One Flew Over the Cuckoo's Nest). Of course, these early and crude efforts at ECT were damaging, immoral, and often left patients barely able to function as humans. Today, neuroscience and ECT are much more sophisticated. If we could easily "treat" those with aggressive or anti-social behavior, then why not?
Ideally, we might use techniques such as ECT or hormonal supplementation, but failing that, why not go even further? Why not perform a lobotomy? If the purpose of the penal system is to change the felon for the better, we should surely use all the tools at our disposal. With one fairly straightforward surgery to the prefrontal cortex, we could turn a violent, murderous criminal into a docile and law-abiding citizen. Should we do it?
Is free will worth it?
As Burgess, who penned A Clockwork Orange, wrote, "Is a man who chooses to be bad perhaps in some way better than a man who has the good imposed upon him?"
Intuitively, many say yes. Moral decisions must, in some way, be our own. Even if we know that our brains determine our actions, it's still me who controls my brain, no one else. Forcing someone to be good, by molding or changing their brain, is not creating a moral citizen. It's creating a law-abiding automaton. And robots are not humans.
And yet, it raises the question: is "free choice" worth all the evil in the world?
If my being brainwashed or "rehabilitated" means children won't die malnourished or the Holocaust would never happen, then so be it. If lobotomizing or neuro-editing a serial killer will prevent them from killing again, is that not a sacrifice worth making? There's no obvious reason why we should value free will above morality or the right to life. A world without murder and evil — even if it meant a world without free choices for some — might not be such a bad place.
As Fyodor Dostoyevsky wrote in The Brothers Karamazov, if the "entrance fee" for having free will is the horrendous suffering we see all around us, then "I hasten to return my ticket." Free will's not worth it.
Do you think the Ludovico technique from A Clockwork Orange is a great idea? Should we turn people into moral citizens and shape their brains to choose only what is good? Or is free choice more important than all the evil in the world?
- Lawrence Kohlberg's experiments gave children a series of moral dilemmas to test how they differed in their responses across various ages.
- He identified three separate stages of moral development from the egoist to the principled person.
- Some people do not progress through all the stages of moral development, which means they will remain "morally undeveloped."
Has your sense of right and wrong changed over the years? Are there things that you see as acceptable today that you'd never dream of doing when you were younger? If you spend time around children, do you notice how starkly different their sense of morality is? How black and white, or egocentric, or oddly rational it can be?
These were questions that Lawrence Kohlberg asked, and his "stages of moral development" still dominate a lot of moral psychology today.
The Heinz Dilemma
Kohlberg was curious to see how and why children differed in their ethical judgements, and so he gave roughly 60 children, across a variety of ages, a series of moral dilemmas. They were all given open-ended questions to explain their answers in order to minimize the risk of leading them to a certain response.
For instance, one of the better-known dilemmas involves a man called Heinz, who needs an expensive drug for his dying wife. Heinz manages to raise only half the required money, which the pharmacist won't accept. Unable to afford the drug, he has only three options. What should he do?
(a) Not steal it because it's breaking the law.
(b) Steal it, and go to jail for breaking the law.
(c) Steal it, but be let off a prison sentence.
What option would you choose?
Stages of Moral Development
From the answers he got, Kohlberg identified three definite levels or stages of our moral development.
Pre-conventional stage. This is characterized by an ego-centric attitude that seeks pleasure and to prevent pain. The primary motivation is to avoid punishment or claim a reward. In this stage of moral development, "good" is defined as whatever is beneficial to oneself. "Bad" is the opposite. For instance, a young child might share their food with a younger sibling not from kindness or some altruistic impulse but because they know that they'll be praised by their parents (or, perhaps, have their food taken away from them).
In the pre-conventional stage, there is no inherent sense of right and wrong, per se, but rather "good" is associated with reward and "bad" is associated with punishment. At this stage, children are sort of like puppies.
Conventional stage. This stage reflects a growing sense of social belonging and hence a higher regard for others. Approval and praise are seen as rewards, and behavior is calibrated to please others, obey the law, and promote the good of the family/tribe/nation. In the conventional stage, a person comes to see themselves as part of a community and that their actions have consequences.
Consequently, this stage is much more rule-focused and comes along with a desire to be seen as good. Image, reputation, and prestige matter the most in motivating good behavior — we want to fit into our community.
Post-conventional stage. In this final stage, there is much more self-reflection and moral reasoning, which gives people the capacity to challenge authority. Committing to principles is considered more important than blindly obeying fixed laws. Importantly, a person comes to understand the difference between what is "legal" and what is "right." Ideas such as justice and fairness start to mature. Laws or rules are no longer equated to morality but might be seen as imperfect manifestations of larger principles.
A lot of moral philosophy is only possible in the post-conventional stage. Theories like utilitarianism or Immanuel Kant's duty-focused ethics ask us to consider what's right or wrong in itself, not just because we get a reward or look good to others. Aristotle perhaps sums it up best when he wrote, "I have gained this from philosophy: that I do without being commanded what others do only from fear of the law."
How morally developed are you?
Kohlberg identified these stages as a developmental progression from early infancy all the way to adulthood, and they map almost perfectly onto Jean Piaget's psychology of child development. For instance, the pre-conventional stage usually lasts from birth to roughly nine years old, the conventional occurs mainly during adolescence, and the post-conventional goes into adulthood.
What's important to note, though, is that this is not a fatalistic timetable to which all humans adhere. Kohlberg thought, for instance, that some people never progress or mature. It's quite possible, maybe, for someone to have no actual moral compass at all (which is sometimes associated with psychopathy).
More commonly, though, we all know people who are resolutely bound to the conventional stage, where they care only for their image or others' judgment. Those who do not develop beyond this stage are usually stubbornly, even aggressively, strict in following the rules or the law. Prepubescent children can be positively authoritarian when it comes to obeying the rules of a board game, for instance.
So, what's your answer to the Heinz dilemma? Where do you fall on Kohlberg's moral development scale? Is he right to view it as a progressive, hierarchical maturing, where we have "better" and "worse" stages? Or could it be that as we grow older, we grow more immoral?