Is working from home the ultimate liberation or the first step toward an even unhappier "new normal"?
- The Great Resignation is an idea proposed by Professor Anthony Klotz that predicts a large number of people leaving their jobs after the COVID pandemic ends and life returns to "normal."
- French philosopher Michel Foucault argued that by establishing what is and is not "normal," we are exerting a kind of power by making people behave a certain way.
- If working from home becomes the new normal, we must be careful that it doesn't give way to a new lifestyle that we hate even more than the office.
You wake up, you put on your work clothes, and you go to the office. You sit behind a desk, or in some designated space, and you work until the clock says it's over. This is what life is like for the vast majority of people. That is, until COVID came along. Then, everything changed.
Recently, an interesting idea has emerged called the "Great Resignation." This is a phenomenon that Professor Anthony Klotz of Texas A&M University has predicted will happen when people are asked, or told, to return to their offices. Klotz argues that, when we're all forced back into the old reality of the commute, a nine-to-five job, and cubicle life, there will be a "Great Resignation" among the workforce.
The argument is that in times of uncertainty and insecurity — like during a global pandemic — people behave conservatively. They'll stay put. But once things "normalize" again, we ought to expect employees to head for the exits.
But why? What has changed? Why has working from home made us so dissatisfied with our previously normal lives? Other than the comfort and convenience of working from home, one explanation might involve the concept of "normalization," a topic that fascinated French philosopher Michel Foucault.
The power of "normal"
Foucault argued that we often spend an inordinate amount of time trying to be normal. We must dress the same way as everyone else. We must talk about the same things. We must work just like everyone else works. It's hugely important that things are normal. But, behind all of this, is a power dynamic that many of us are simply unaware of — and unconsciously unhappy about.
Someone, somewhere, must define what is "normal." It is then for the rest of us to bend over backward to fit into this narrow mold. To be powerful, then, is to say, "Do this, otherwise everyone will call you weird." Power is to hold the hoops everyone else must jump through. It's what Foucault describes as "normalizing power."
COVID was a wake-up call to the abnormality of modern work
Let's apply Foucault's normalization concept to the modern workplace. Accepted wisdom had it that the best — and really, the only — way to work was in an office, usually downtown, far away from where we live. We were told this is where collaboration and creativity occur. Largely unchallenged, this "normal" functioned for decades, and we all obeyed.
We had to wake up at the crack of dawn to get ready for work. We had to travel in clogged and joyless commutes. We had to eat ready-packaged lunches behind our too-small desks. We had to sit through meetings in "good posture" ergonomic chairs that wouldn't be out of place in the Spanish Inquisition. Then we had to travel back home in yet another clogged and joyless commute. And we did this day after day after day.
Then COVID came along and revealed just how artificial, unnecessary, and abnormal it all is. It's as if someone ripped a blindfold off society. We have laptops, Wi-Fi, and 5G (at least when people aren't burning the towers down). Many of us were just as productive — if not more so — than during the "normal" pre-COVID era. We don't need to be in an office. We don't need to waste countless hours of our lives sitting in traffic.
Even better, people got to spend more time with their families, enjoy long and restful breaks, and have space to pursue their hobbies. In short, people like not going to an office. And, as Klotz argues, when companies see this dissatisfaction — this Great Resignation — they're going to ask some revolutionary questions, like, "Do you want to come back full time? Work remotely? In-office three days a week? Four days? One day?"
The silver lining to the COVID pandemic is that it has made us re-examine what "normal" is.
Beware the new normal
Of course, the idea of a nine-to-five office job was not established by some moustache-twirling villain just to satisfy his sadistic whims. It came about because people thought that was the most effective and productive way to operate.
People do need direct human contact, and it's often easier and more productive to speak to a colleague next to you or walk across an office to ask for some help. Remote-working software like Zoom is indeed convenient, but can a company honestly say that it's as efficient as working in an office?
What's more, there's a particularly pernicious sting in what Foucault argued. It's something that ought to slow any would-be Great Resignation. This is the idea that there likely will always be some kind of normal.
While COVID has revealed the office for the normalized power play that it is, what's to say what the next "normal" will be? Let's say that working from home becomes the new normal. Will we be expected to attend Zoom meetings at any hour of the day or answer text messages at midnight? Might cameras be used to monitor our every movement? Might software check that we're working at the right pace and in the right way?
While the idea of a Great Resignation is quite appealing right now, we should be careful the "new normal" isn't so much worse.
Seek pleasure and avoid pain. Why make it more complicated?
- The Epicureans were some of the world's first materialists and argued that there is neither God, nor gods, nor spirits, but only atoms and the physical world.
- They believed that life was about finding pleasure and avoiding pain and that both were achieved by minimizing our desires for things.
- The Epicurean Four Part Remedy is advice on how we can face the world, achieve happiness, and not worry as much as we do.
Self-help books are consistently on the best-seller lists across the world. We can't seem to get enough of happiness advice, wellness gurus, and life coaches. But, as the Book of Ecclesiastes says, there is nothing new under the sun. The Ancient Greeks were into the self-help business millennia before the likes of Dale Carnegie and Mark Manson.
Four schools of ancient Greek philosophy
From the 3rd century BCE until the birth of Jesus, Greek philosophy was locked into an ideological war. Four rival schools emerged, each proclaiming loudly that they — alone — had the secret to a happy and fulfilled life. These schools were: Stoicism, Cynicism, Skepticism, and Epicureanism. Each had their advocates and even had a kind of PR battle to get people to sign up to their side. They were trying to sell happiness.
Many of us are familiar with Stoicism, a topic I covered recently, because it forms the foundation of cognitive behavioral therapy. Skepticism and Cynicism have become watered down or warped variations of their original forms. (I will cover these in future articles.) Today, we focus on the most underappreciated of these schools, the Epicureans. In their philosophy, we can find a surprisingly modern and easy-to-follow "Four Part Remedy" to life.
Epicureans: The first atheists
The Epicureans were some of history's first materialists. They believed that the world was made up only of atoms (and void), and that everything is simply a particular composition of these atoms. There were no gods, spirits, or souls (or, at most, they're irrelevant to the world as we encounter it). They thought that there was no afterlife or immortality to be had, either. Death is just a relocation of atoms. This atheism and materialism was what the Christian Church would later come to despise, and after centuries of being villainized by priests, popes, and church doctrine, the Epicureans fell out of fashion.
In the atomistic, worldly philosophy of the Epicureans, all there is to life is to get as much pleasure as you can and avoid pain. This isn't to become some rampant hedonist, staggering from opium dens to brothels, but concerns the higher pleasures of the mind.
Epicurus, himself, believed that pleasure was defined as the satisfying of a desire, such as when we drink a glass of water when we're really thirsty. But, he also argued that desires themselves were painful since they, by definition, meant longing and anguish. Thirst is a desire, and we don't like being thirsty. True contentment, then, could not come from creating and indulging pointless wants but must instead come from minimizing desire altogether. What would be the point of setting ourselves new targets? These are just new desires that we must make efforts to satisfy. Thus, minimizing pain meant minimizing desires, and the bare minimum desires were those required to live.
The Four Part Remedy
Given that Epicureans were determined to maximize pleasure and minimize pain, they developed a series of rituals and routines designed to help. One of the best known (not least because we've lost so much written by the Epicureans) was the so-called "Four Part Remedy." These were four principles they believed we ought to accept so that we might find solace and be rid of existential and spiritual pain:
1. Don't fear God. Remember, everything is just atoms. You won't go to hell, and you won't go to heaven. The "afterlife" will be nothingness, in just the same way as when you had no awareness whatsoever of the dinosaurs or Cleopatra. There was simply nothing before you existed, and death is a great expanse of the same timeless, painless void.
2. Don't worry about death. This is a natural corollary of Step 1. With no body, there is no pain. In death, we lose all of our desires and, along with them, suffering and discontent. It's striking how similar in tone this sounds to a lot of Eastern, especially Buddhist, philosophy at the time.
3. What is good is easy to get. Pleasure comes in satisfying desires, specifically the basic, biological desires required to keep us alive. Anything more complicated than this, or harder to achieve, just creates pain. There's water to be drunk, food to be eaten, and beds to sleep in. That's all you need.
4. What is terrible is easy to endure. Even if it is difficult to satisfy the basic necessities, remember that pain is short-lived. We're rarely hungry for long, and sicknesses most often will be cured easily enough (and this was written 2300 years before antibiotics). All other pains often can be mitigated by pleasures to be had. If basic biological necessities can't be met, then you die — but we already established there is nothing to fear from death.
Epicurus's guide to living is noticeably different from a lot of modern self-help books in just how little day-to-day advice it gives. It doesn't tell us "the five things you need to do before breakfast" or "visit these ten places, and you'll never be sad again." Just like its rival school of Stoicism, Epicureanism is all about a psychological shift of some kind.
Namely, that psychological shift is about recognizing that life doesn't need to be as complicated as we make it. At the end of the day, we're just animals with basic needs. We have the tools necessary to satisfy our desires, but when we don't, we have huge reservoirs of strength and resilience capable of enduring it all. Failing that, we still have nothing to fear because there is nothing to fear about death. When we're alive, death is nowhere near; when we're dead, we won't care.
Practical, modern, and straightforward, Epicurus offers a valuable insight to life. It's existential comfort for the materialists and atheists. It's happiness in four lines.
- Lawrence Kohlberg's experiments gave children a series of moral dilemmas to test how they differed in their responses across various ages.
- He identified three separate stages of moral development from the egoist to the principled person.
- Some people do not progress through all the stages of moral development, which means they will remain "morally undeveloped."
Has your sense of right and wrong changed over the years? Are there things that you see as acceptable today that you'd never dream of doing when you were younger? If you spend time around children, do you notice how starkly different their sense of morality is? How black and white, or egocentric, or oddly rational it can be?
These were questions that Lawrence Kohlberg asked, and his theory of the "stages of moral development" dominates a lot of moral psychology today.
The Heinz Dilemma
Kohlberg was curious to see how and why children differed in their ethical judgements, and so he gave roughly 60 children, across a variety of ages, a series of moral dilemmas. They were all given open-ended questions to explain their answers in order to minimize the risk of leading them to a certain response.
For instance, one of the better-known dilemmas involved an old man called Heinz who needed an expensive drug for his dying wife. Heinz only managed to raise half the required money, which the pharmacist wouldn't accept. Unable to afford the drug, Heinz had only three options. What should he do?
(a) Not steal it because it's breaking the law.
(b) Steal it, and go to jail for breaking the law.
(c) Steal it, but be let off a prison sentence.
What option would you choose?
Stages of Moral Development
From the answers he got, Kohlberg identified three definite levels or stages of our moral development.
Pre-conventional stage. This is characterized by an ego-centric attitude that seeks pleasure and to prevent pain. The primary motivation is to avoid punishment or claim a reward. In this stage of moral development, "good" is defined as whatever is beneficial to oneself. "Bad" is the opposite. For instance, a young child might share their food with a younger sibling not from kindness or some altruistic impulse but because they know that they'll be praised by their parents (or, perhaps, have their food taken away from them).
In the pre-conventional stage, there is no inherent sense of right and wrong, per se, but rather "good" is associated with reward and "bad" is associated with punishment. At this stage, children are sort of like puppies.
Conventional stage. This stage reflects a growing sense of social belonging and hence a higher regard for others. Approval and praise are seen as rewards, and behavior is calibrated to please others, obey the law, and promote the good of the family/tribe/nation. In the conventional stage, a person comes to see themselves as part of a community and that their actions have consequences.
Consequently, this stage is much more rule-focused and comes along with a desire to be seen as good. Image, reputation, and prestige matter the most in motivating good behavior — we want to fit into our community.
Post-conventional stage. In this final stage, there is much more self-reflection and moral reasoning, which gives people the capacity to challenge authority. Committing to principles is considered more important than blindly obeying fixed laws. Importantly, a person comes to understand the difference between what is "legal" and what is "right." Ideas such as justice and fairness start to mature. Laws or rules are no longer equated to morality but might be seen as imperfect manifestations of larger principles.
A lot of moral philosophy is only possible in the post-conventional stage. Theories like utilitarianism or Immanuel Kant's duty-focused ethics ask us to consider what's right or wrong in itself, not just because we get a reward or look good to others. Aristotle perhaps sums it up best when he wrote, "I have gained this from philosophy: that I do without being commanded what others do only from fear of the law."
How morally developed are you?
Kohlberg identified these stages as a developmental progression from early infancy all the way to adulthood, and they map almost perfectly onto Jean Piaget's psychology of child development. For instance, the pre-conventional stage usually lasts from birth to roughly nine years old, the conventional occurs mainly during adolescence, and the post-conventional goes into adulthood.
What's important to note, though, is that this is not a fatalistic timetable to which all humans adhere. Kohlberg thought, for instance, that some people never progress or mature. It's quite possible for someone to have no actual moral compass at all (which is sometimes associated with psychopathy).
More commonly, though, we all know people who are resolutely bound to the conventional stage, where they care only for their image or others' judgment. Those who do not develop beyond this stage are usually stubbornly, even aggressively, strict in following the rules or the law. Prepubescent children can be positively authoritarian when it comes to obeying the rules of a board game, for instance.
So, what's your answer to the Heinz dilemma? Where do you fall on Kohlberg's moral development scale? Is he right to view it as a progressive, hierarchical maturing, where we have "better" and "worse" stages? Or could it be that as we grow older, we grow more immoral?
Studies show that religion and spirituality are positively linked to good mental health. Our research aims to figure out how and why.
- Neurotheology is a field that unites brain science and psychology with religious belief and practices.
- There are several indirect and direct mechanisms that link spirituality with improved mental health.
- Compassion and love are positive emotions that will make your brain healthier.
The field of neurotheology continues to expand from its early origins several decades ago to the present day. In its simplest definition, neurotheology refers to the field of scholarship that seeks to understand the relationship between the brain and our religious and spiritual selves. As I always like to say, it is important to consider both sides of neurotheology very broadly. Thus, the "neuro" side includes brain imaging, psychology, neurology, medicine, and even anthropology. And the "theology" side includes theology itself, but also various aspects related to religious beliefs, attitudes, practices, and experiences.
The mental health benefits of spirituality
Neurotheology also ranges from considering very esoteric concepts including questions around free will, consciousness, and the soul, to very practical concepts such as understanding how the brain functions and the relationship between spirituality and physical and mental health. This latter topic might be called "applied neurotheology." Applied neurotheology, therefore, seeks to understand the health-related aspects pertaining to our brain and our spiritual selves. In particular, we can try to understand how being religious or spiritual, or performing various spiritual practices, might be beneficial to our overall health and well-being. In our latest book, entitled Brain Weaver, we consider this important dimension of human brain health.
A growing number of studies have shown how spirituality and mental health are linked. Importantly, studies have shown that those who are religious and spiritual tend to have lower rates of depression, anxiety, and suicide. This is true across the age spectrum with studies of adolescents showing that religious and spiritual pursuits are protective against mental health problems. And many adults cite religious and spiritual beliefs as important for coping with various life stressors.
If there is a relationship between spirituality and positive mental health, we might question what the mechanism of action might be. I have typically divided the mechanisms into indirect and direct ones. The indirect mechanisms have to do with specific aspects of a given tradition that end up having ancillary mental health benefits. For example, going to church or other social events that are part of a religious tradition can be beneficial because social support, in and of itself, is beneficial to our mental health. The more people that we have in our social support network, the better we are at coping with various life stressors including problems with jobs, relationships, or health.
Most religions also teach people to avoid a lot of high-risk behaviors that can be very detrimental to our mental health and well-being. For example, most religions teach us to avoid alcohol and drugs, to not be promiscuous, and to try to be compassionate and charitable to others. By following these teachings, people will naturally avoid mental health problems such as substance abuse and tend toward being more optimistic and less depressed. These effects have nothing to do with being religious per se and everything to do with following a religion's advice.
Another interesting indirect mechanism of action related to religion has to do with diet and nutrition. Diet and nutrition are frequently overlooked when it comes to good mental health, even though research increasingly indicates they are essential. Many traditions ask individuals to follow certain dietary guidelines. For example, Hindus tend to have vegetarian diets, and most research to date shows that eating a more plant-based diet with a lot of low-inflammatory foods is good not only for your body but for your brain as well. In fact, we are currently performing a study with patients who have chronic concussion symptoms to determine the effect of dietary improvements on overall brain function.
The direct mechanisms of action have to do with specific spiritual practices and even a person's personal sense of spirituality. Much of my research over the past 30 years has been to study the brain while people engage in different practices such as meditation or prayer. We have even observed brain changes associated with unique spiritual practices such as speaking in tongues or trance states. The brain effects related to these practices are quite remarkable and diverse. It should come as no surprise since these practices affect people on many different levels, such as the way people think, feel, and experience the world around them. Thus, we should expect to observe physiological differences in the parts of the brain involved with these practices.
Meditation and prayer, for example, activate the frontal lobes as well as the language areas of the brain, and research demonstrates that this occurs not only while the practice is performed but over the long-term as well. Our study of Kirtan Kriya meditation showed improvements of about 10 to 15 percent in cognition as well as reductions in stress, anxiety, and depression. These were associated with baseline changes to the brain's frontal lobe functions, which regulate these cognitive processes and modulate emotional responses.
More recent research has been exploring the effects of these practices on larger brain networks, and perhaps more important, specific neurotransmitter systems. One of our recent studies of a spiritual retreat program showed significant changes to the areas of the brain that release dopamine and serotonin. These are areas known to be involved in both cognition and emotional health. And there are a growing number of clinical studies which have documented the value of various spiritual practices or religiously oriented therapies for helping people manage a variety of mental health conditions including depression, anxiety, and ADHD as well as neurological conditions like Alzheimer's and seizure disorders.
Finally, a personal sense of spirituality may be protective in and of itself. When people feel connected to all of humanity, a higher power, or the entire universe, that experience gives people a sense of meaning and purpose in life and an optimistic perspective on what the future holds. A number of research studies have shown that having such faith can be beneficial to your overall physical and mental health.
Improving brain health with applied neurotheology
Applied neurotheology can teach us the value of exploring our religious and spiritual side as a way of improving our mental health and well-being. Even for those who are not religious, pursuing practices such as meditation and prayer — even when secularized — can be beneficial for reducing stress and anxiety. Connecting with the larger world — by going on a nature walk, socializing with friends and family, or trying to make your neighborhood a better place by helping others — leads to a greater sense of compassion and love, positive emotions that will make your brain healthier.
Dr. Andrew Newberg is a neuroscientist who studies the relationship between brain function and various mental states. He is a pioneer in the neurological study of religious and spiritual experiences, a field known as "neurotheology." His latest book is Brain Weaver.
Many people believe that in the face of profound evil, they would have the courage to speak up. It might be harder than we think.
- After World War II, many psychologists wanted to address the question of how it was that people could go along with the evil deeds of fascist regimes.
- Solomon Asch's experiment alarmingly showed just how easily we conform and how susceptible we are to group influence.
- People often will not only sacrifice truth and reason to conformity but also their own health and sense of right and wrong.
It's the last question of the quiz, and Chloë knows the answer: it's Bolivia. Yes, it's definitely Bolivia. She went there last year, so she ought to know.
But then Shaun says it's Panama, and all the others agree with him. Chloë's sure it's Bolivia, but Shaun's so confident that the others are now nodding furiously along with him.
"What do you think, Chloë?" she's asked. She pauses for a moment.
"Yeah... Shaun's probably right. Put Panama," she mumbles.
The question of conformity
We've all been Chloë. Humans are social animals with families, tribes, and workplaces. So, it's no wonder that we try to fit in or conform. Social rejection is devastating, and we're biologically wired to avoid it. A sense of belonging and cooperation is essential to dealing with the world. Sometimes, though, this instinct can take us to ridiculous or dark places.
In the decades after World War II, politicians and academics were curious to know how it was that a country like Germany — so steeped in tradition, culture, and education — could fall into such a terrible regime within such a short time. Psychologists Stanley Milgram and Philip Zimbardo conducted experiments to answer a question many everyday people were asking: "Could it happen here?"
While the Milgram and Zimbardo experiments are pretty famous, some of the lesser-known experiments were done in the early 1950s by Solomon Asch. They demonstrated just how far humans are willing to go for the sake of "fitting in" and conforming to the group.
The Asch experiment
Asch had his volunteers perform a simple task: they were all given a series of lines drawn on a card and asked to choose which line was longest out of three options. The right answer was laughably obvious; for instance, line A was clearly the longest. When they were alone, people chose correctly nearly every time.
Asch then put his subjects in a group with actors who had been instructed to deliberately choose the wrong answer. Under these conditions, 75 percent of subjects agreed with the group consensus at least once, even though they were blatantly wrong.
What makes us conform?
A little surprised by this, Asch went on to do a series of related experiments and documented the factors that made it more or less likely that people will "conform" with the group consensus. Here are some of them:
The difficulty of the task. When there's a higher degree of ambiguity or uncertainty about the answer (for instance, the lines in the experiment weren't so obviously different), we're more likely to agree with others.
Reliability of the source. If someone within the group seems more reliable or knowledgeable about a topic — like a doctor about a disease — then we are more likely to go along with that person's view.
Publicity. People are much more likely to conform if they have to declare their judgment publicly rather than privately.
Degree of unanimity. The presence of merely one or two dissenting voices in a group of any size greatly increases the chances that others will not conform. Even one rebellious response is enough to make others follow suit.
The implications of conformity
Of course, conformity has implications far beyond quizzes with your friends or measuring lines.
A similar but more alarming study was conducted by John Darley and Bibb Latané in the late 1960s. In this study, they had subjects appear for an apparent "job interview." As the subjects were waiting, smoke was slowly pumped into the room. If people were alone, they would always check to see what was wrong, or they would get up and leave.
But when subjects were in a room with actors pretending as if nothing was wrong, the majority made no move whatsoever. This happened despite people coughing and rubbing their eyes from all the smoke. Amazingly, people were willing to risk their own health rather than break with group behavior. (No wonder many of us are hesitant to interrupt a meeting at work to open a window because it's far too hot in the room.)
What do these experiments suggest about conformity? Well, as Asch said, we learned "that intelligent and well-meaning young people are willing to call white black." He concluded that it was "concerning." Indeed.
Would you tell a laughing group of people that a joke was sexist or racist or bigoted? In your heart of hearts, do you think that you — surely a loving and kind person — would have had the courage to resist Nazism? Psychology experiments strongly suggest you would not.
Undoubtedly, there are huge evolutionary, social, and emotional benefits to conformity. Many times, it has done great good. But equally true is that conformity can also bring out the darkest and worst in us.
Could a pill make you more moral? Should you take it if it could?
- Moral enhancement is the idea that technology can be used to make us more moral people.
- Proponents argue that we need to be better people in order to solve global problems.
- Ideas on how to use this ethically abound, but no solid consensus exists yet.
People have been artificially enhancing themselves for a long time. Caffeine and other stimulants improve our cognitive performance and might have made the Enlightenment possible. More controversially, some athletes use steroids to enhance their athletic performance beyond what would naturally be possible for them.
These aren't the only ways that we can use science and technology to improve our performance, of course. In the last few years, some philosophers have argued that we can, and perhaps should, use these tools to enhance our moral abilities to become a more cooperative, empathetic, or properly motivated species.
Moral enhancement explained
The term "moral enhancement" was first used in a 2008 essay by Tom Douglas. It generally refers to biomedical enhancements but can refer to any technological attempt to make humans more moral. While one could debate what "more moral" means, the literature on the subject focuses on ideas of making people more cooperative, altruistic, and the like.
I reached out to Dr. Joao Fabiano, a Visiting Fellow at Harvard University's Safra Center for Ethics, for more information. He expanded on the idea of moral enhancement and provided the motivation for it.
We all sometimes behave worse than we think we should but have a hard time improving. Moral enhancement would be a technological intervention that helps us behave as we should. There is often a certain pattern to our moral failures shared by most of us. As the neuroscience of morality progresses, we might be able to fix these failures with technology. In fact, we urgently need moral enhancement given the grave social problems these moral failures create and their ingrained biological nature...
...Many of these recurrent moral failures are connected to grave problems in society, such as our inability to tackle global threats (global warming, nuclear proliferation, and pandemics) and grave injustices. Often, these failures can be explained by evolutionary science; they are deep-seated adaptations hardwired in our brains which we can sometimes, at a cost and only partially, control with improved social norms. For instance, many forms of group favoritism and discrimination, such as racism, are to some degree evolved adaptations to an ancestral environment where groups were small and at constant war, and long-distance trade was limited. As neuroscience continues to uncover the biological modulators of our moral behavior, we might soon be able to reliably influence that behavior with technological interventions.
Ways to make people more moral
Several studies have demonstrated that the moral actions people take can be influenced by biomedical interventions. One found that people become more aggressive and more likely to violate social norms when their serotonin levels are artificially lowered. Another found that raising serotonin levels made people more harm-averse and more likely to stick to ideas of fairness. Lowering people's levels of tryptophan, a precursor to serotonin and melatonin, makes them less cooperative.
Outside of the laboratory, some commonly used drugs, such as painkillers and antidepressants, are also known to slightly modify moral decision-making. Acetaminophen, the painkiller sold as Tylenol, has been found to blunt empathy. Remember that next time you try to make a decision after taking one.
Dr. Fabiano points out that the widespread use of these drugs means that "technology is already interfering with our morality, sometimes in undesirable and unpredictable ways." He adds, "We should, at the very least, try to take control of that to produce desirable changes."
He also mentioned, however, that no drug that can reliably enhance moral behavior currently exists. So you shouldn't get the idea that you'll be able to enhance yourself tomorrow.
While philosophers have only been discussing this idea for the last decade or so, plenty of them have argued both for and against moral enhancement.
The basic argument for moral enhancement has already been mentioned: we humans are inclined toward certain moral failures, those failures can be corrected, and technological interventions give us the means to correct them. Some thinkers, such as Julian Savulescu and Ingmar Persson, suggest that we have a moral imperative to do so, as the possibility for even a single person to cause widespread destruction is greater now than it has ever been.
On the other hand, some thinkers, like Allen Buchanan, suggest that while the problems that many proponents of moral enhancement want to solve are real, moral enhancement isn't likely to be a feasible solution to these problems.
Instead, these thinkers propose that non-medical interventions, such as adopting more progressive and accepting attitudes toward out-groups, have proven that our moral natures are not fixed and can be improved without technological intervention — even if the process is a little slow. They additionally have a few doubts about the feasibility or desirability of relying on technology to improve our morals and conclude that focusing on traditional methods is the better bet.
Of course, these are not mutually exclusive options, and it is possible that moral enhancement can be used in tandem with more traditional methods of making people more moral.
The many problems with moral enhancement
The problem of how to actually implement any technological solution remains unsolved. While some philosophers, including Dr. Fabiano, have developed frameworks to guide our use of this technology, there is no real consensus on it. This is a bit of a problem, as simplistic variations of moral enhancement, such as the use of chemical castration as a tool to try to reform sexual offenders, are already in use today in ways that are controversial.
Moral enhancement raises many other ethical questions. Which traits should be enhanced (or suppressed)? What are the side effects of taking a drug that alters your moral behavior? Should such treatments be required for some people, like violent criminals?
Ironically, there is even the chance that improving in-group cooperation, a possible excellent application of moral enhancement, could cause other problems. As Dr. Fabiano explains, "[T]here is a lot of empirical evidence indicating that a drug increasing cooperation between individuals would likely decrease cooperation between groups. Highly cooperative groups tend to be highly discriminatory. Such a drug would create more problems than it would solve."
On the other hand, the possible benefits of moral enhancement are obvious. People could become more cooperative, empathetic, or altruistic without the years of work that our current methods of moral improvement require. Problems we currently face could vanish in the face of an enhanced population. As Dr. Savulescu argues, this is enough of a benefit to make moral enhancement a worthwhile consideration.
If offered to you, would you take the pill?