Studies show that religion and spirituality are positively linked to good mental health. Our research aims to figure out how and why.
- Neurotheology is a field that unites brain science and psychology with religious belief and practices.
- There are several indirect and direct mechanisms that link spirituality with improved mental health.
- Compassion and love are positive emotions that will make your brain healthier.
The field of neurotheology continues to expand from its early origins several decades ago to the present day. In its simplest definition, neurotheology refers to the field of scholarship that seeks to understand the relationship between the brain and our religious and spiritual selves. As I always like to say, it is important to consider both sides of neurotheology very broadly. Thus, the "neuro" side includes brain imaging, psychology, neurology, medicine, and even anthropology. And the "theology" side includes theology itself, but also various aspects related to religious beliefs, attitudes, practices, and experiences.
The mental health benefits of spirituality
Neurotheology also ranges from considering very esoteric concepts including questions around free will, consciousness, and the soul, to very practical concepts such as understanding how the brain functions and the relationship between spirituality and physical and mental health. This latter topic might be called "applied neurotheology." Applied neurotheology, therefore, seeks to understand the health-related aspects pertaining to our brain and our spiritual selves. In particular, we can try to understand how being religious or spiritual, or performing various spiritual practices, might be beneficial to our overall health and well-being. In our latest book, entitled Brain Weaver, we consider this important dimension of human brain health.
A growing number of studies have shown how spirituality and mental health are linked. Importantly, studies have shown that those who are religious and spiritual tend to have lower rates of depression, anxiety, and suicide. This is true across the age spectrum with studies of adolescents showing that religious and spiritual pursuits are protective against mental health problems. And many adults cite religious and spiritual beliefs as important for coping with various life stressors.
If there is a relationship between spirituality and positive mental health, we might question what the mechanism of action might be. I have typically divided the mechanisms into indirect and direct ones. The indirect mechanisms have to do with specific aspects of a given tradition that end up having ancillary mental health benefits. For example, going to church or other social events that are part of a religious tradition can be beneficial because social support, in and of itself, is beneficial to our mental health. The more people that we have in our social support network, the better we are at coping with various life stressors including problems with jobs, relationships, or health.
Most religions also teach people to avoid a lot of high-risk behaviors that can be very detrimental to our mental health and well-being. For example, most religions teach us to avoid alcohol and drugs, to not be promiscuous, and to try to be compassionate and charitable to others. By following these teachings, people will naturally avoid mental health problems such as substance abuse and tend toward being more optimistic and less depressed. These effects have nothing to do with being religious per se and everything to do with following a religion's advice.
Another interesting indirect mechanism of action related to religion has to do with diet and nutrition. Diet and nutrition are frequently overlooked when it comes to good mental health, even though research increasingly indicates they are essential. Many traditions ask individuals to follow certain dietary guidelines. For example, Hindus tend to have vegetarian diets, and most research to date shows that eating a more plant-based diet with a lot of low-inflammatory foods is good not only for your body but for your brain as well. In fact, we are currently performing a study with patients who have chronic concussion symptoms to determine the effect of dietary improvements on overall brain function.
The direct mechanisms of action have to do with specific spiritual practices and even a person's personal sense of spirituality. Much of my research over the past 30 years has been to study the brain while people engage in different practices such as meditation or prayer. We have even observed brain changes associated with unique spiritual practices such as speaking in tongues or trance states. The brain effects related to these practices are quite remarkable and diverse. It should come as no surprise since these practices affect people on many different levels, such as the way people think, feel, and experience the world around them. Thus, we should expect to observe physiological differences in the parts of the brain involved with these practices.
Meditation and prayer, for example, activate the frontal lobes as well as the language areas of the brain, and research demonstrates that this occurs not only while the practice is performed but over the long term as well. Our study of Kirtan Kriya meditation showed improvements of about 10 to 15 percent in cognition as well as reductions in stress, anxiety, and depression. These improvements were associated with changes in the baseline function of the brain's frontal lobes, which regulate these cognitive processes and modulate emotional responses.
More recent research has been exploring the effects of these practices on larger brain networks, and perhaps more important, specific neurotransmitter systems. One of our recent studies of a spiritual retreat program showed significant changes to the areas of the brain that release dopamine and serotonin. These are areas known to be involved in both cognition and emotional health. And there are a growing number of clinical studies which have documented the value of various spiritual practices or religiously oriented therapies for helping people manage a variety of mental health conditions including depression, anxiety, and ADHD as well as neurological conditions like Alzheimer's and seizure disorders.
Finally, a personal sense of spirituality may be protective in and of itself. When people feel connected to all of humanity, a higher power, or the entire universe, that experience gives people a sense of meaning and purpose in life and an optimistic perspective on what the future holds. A number of research studies have shown that having such faith can be beneficial to your overall physical and mental health.
Improving brain health with applied neurotheology
Applied neurotheology can teach us the value of exploring our religious and spiritual side as a way of improving our mental health and well-being. Even for those who are not religious, pursuing practices such as meditation and prayer — even when secularized — can be beneficial for reducing stress and anxiety. Connecting with the larger world — by going on a nature walk, socializing with friends and family, or trying to make your neighborhood a better place by helping others — leads to a greater sense of compassion and love, positive emotions that will make your brain healthier.
Dr. Andrew Newberg is a neuroscientist who studies the relationship between brain function and various mental states. He is a pioneer in the neurological study of religious and spiritual experiences, a field known as "neurotheology." His latest book is Brain Weaver.
Regularities, which we associate with laws of nature, require an explanation.
- The nomological argument for the existence of God takes its name from the Greek nomos, or "law," because it's based on the laws of nature.
- There are pragmatic, aesthetic, and moral reasons for regularities to exist in nature.
- The best explanation may be the existence of a personal God rather than mindless laws or chance.
Here's a new version of an old argument for the existence of God. It's called the "nomological argument," after the Greek nomos or "law," because it's based on laws of nature.
Suppose that you receive five consecutive royal flushes in a game of poker. What explains this? You could have received them by chance, but that seems unlikely. A better explanation is that someone has arranged the decks in your favor.
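To put a rough number on "unlikely," here is a minimal sketch of the odds, assuming a standard 52-card deck and five independently dealt five-card hands (the hand size and independence are our assumptions; the argument only needs the run to be astronomically improbable):

```python
from math import comb

# 4 possible royal flushes (one per suit) out of C(52, 5) equally likely 5-card hands.
p_royal = 4 / comb(52, 5)        # about 1 in 649,740
p_five_in_a_row = p_royal ** 5   # assuming the five deals are independent

print(f"one royal flush: 1 in {1 / p_royal:,.0f}")
print(f"five in a row:   about {p_five_in_a_row:.1e}")  # roughly 8.6e-30
```

Whatever the exact house rules, the point stands: a run like that is far likelier to reflect an arranged deck than blind chance.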
Similarly, we can ask for an explanation of why nature is full of regularities, such as that planets have elliptical orbits and that oppositely charged particles attract. As with your sequence of hands, these regularities could be the result of chance, but that seems unlikely. A better explanation is that something is responsible for them. But what?
To clarify, we're not asking why we have the specific regularities that we do in fact have. Thus, we're not asking why the laws of nature appear to be fine-tuned to support life: for example, that gravity is the correct strength to permit the formation of stars. We think that's an interesting question but not our present topic. (See our "Further Reading" section below if you want to learn more.) Similarly, we're not talking about "intelligent design"; we're not asking why well-adapted species exist today. We think that can be adequately explained by citing regularities of natural selection and genetics. Our question is more general: Why are there any regularities at all, as opposed to irregularities?
Regularities: The nomological argument for the existence of God
According to the nomological argument, the best explanation of regularities involves a supernatural personal being, God. It's not necessary for God to have all the attributes of a theistic or Biblical god — namely, omnipotence, omniscience, and moral perfection — but only that God is an intelligent being with the power to control whether nature exhibits regularities. In other words, this argument holds that regularities in nature are analogous to your winning poker hands.
To begin, why does the best explanation of your sequence of royal flushes involve a person? Well, we can think of pragmatic, aesthetic, and even moral reasons why a person might want to impose order on decks of cards. A pragmatic reason is about self-interest: someone might impose order on the deck of cards because they want you to win some money. An aesthetic reason is about elegance or beauty: royal flushes might just look nice. And maybe a moral reason could be that you deserve to win.
Similarly, we can think of pragmatic, aesthetic, and even moral reasons why God might want to impose regularities on nature: notably, most of the valuable things we know of (such as happiness, love, rationality, knowledge, or meaningfully free choices) cannot be realized in worlds without regularities. And since God is a person, we have reason to think that God might have moral and aesthetic preferences. Indeed, this would be so even if God were evil or had poor taste, since almost any moral and aesthetic states of affairs require some degree of regularity. As a result, if you knew that a personal being was about to create a world, you wouldn't be unreasonable in anticipating regularities, even if you knew nothing else about that being.
Objections and further development
At this point, someone might object as follows: Do we really need to invoke God? Doesn't Occam's Razor say we should prefer a simpler explanation or not posit this extra, unnecessary thing? Well, positing God doesn't really commit us to much more than other explanations of regularity would; they too would posit additional entities.
For example, suppose we try to posit laws of nature to explain regularities instead of God. We all have some idea of what a law of nature is supposed to be: Newton's laws of motion, the law that nothing can travel faster than the speed of light, or the ideal gas laws. Scientists posit laws such as these to explain things all the time. However, scientists typically assume that there are regularities, and they try to determine which ones are the most significant, important, or fundamental. When they've found one, they call it a "law of nature." In their role as scientists, they don't try to explain why there are fundamental laws of nature in the first place. So if we want to explain why there are regularities as opposed to irregularities — indeed, if we want to explain why science is possible at all — we have to do some philosophy. If we were going to explain regularities by positing laws, we'd first have to say what a law is.
There are philosophical accounts of laws that do not involve God, but those that attempt to explain regularities all do so by positing extra entities, too. These involve exotic things such as Platonic universals, Aristotelian natural kind essences, or other sorts of primitive necessities. As far as Occam's Razor is concerned, that's no better than positing God.
Moreover, these competing theories face a different problem. Positing mindless laws of nature with no ultimate explanation just seems to push the problem back. Now we have yet another interesting phenomenon to explain. Why did the laws that just randomly happened to exist generate regularities, which are only a relatively tiny portion of the possible set of events? To return to our analogy, it wouldn't be satisfying to say that you got five royal flushes in a row because some mindless law just happened to guarantee that result. (Why wasn't there a different law, one that generated any one of the octillions of other possible sequences instead? Just a huge coincidence?) In any case, we say a lot more in our journal article about why other explanations, such as alternative philosophical accounts of the nature of laws, don't do a great job of explaining regularities.
One might worry that positing God pushes the problem back in exactly the same way: What explains the existence of God? Well, everyone has to posit something, and we can always ask for an explanation of those things. Because positing God is relatively modest, we think it's more or less on the same footing as positing anything else — maybe no philosophical theory can really explain its fundamental entities. However, positing God answers a difficult question that other accounts don't: namely, why are there regularities as opposed to irregularities? To posit nothing, or pure, random chance, is modest but doesn't do a good job of explaining: random chance doesn't explain the five royal flushes. To posit some mindless explanation that just happened, coincidentally, to give us something as complex and consistent as a regularity does a good job of explaining but isn't really modest: your poker opponent would be very skeptical if you posited something as complex and coincidental as that as an explanation of your five royal flushes. (For those familiar with Bayesian reasoning, we're arguing that "God" strikes the best balance between prior probability of the explanation and likelihood of the phenomenon to be explained.) As a result, it doesn't merely push back the specific problem that concerns us.
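For readers who want the Bayesian gloss spelled out (the notation below is ours, added for illustration, and not part of the original argument): for a candidate explanation $H$ and the observed regularities $R$, Bayes' theorem says

$$P(H \mid R) = \frac{P(R \mid H)\, P(H)}{P(R)},$$

so an explanation is credible only if both its prior probability $P(H)$ (how modest it is) and its likelihood $P(R \mid H)$ (how strongly it predicts regularities) are reasonably high. Pure chance scores well on the prior but terribly on the likelihood; an elaborate mindless mechanism can score well on the likelihood but poorly on the prior; the claim above is that a personal being with the power to impose regularities does acceptably on both.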
Another objection might be that we've just posited a "God of the gaps" — simply positing God ad hoc when there's some gap in our knowledge. However, we haven't argued, "We don't know why laws of nature exist, and therefore, God did it." Instead, we've argued as follows: We know why God would create regularities, but we don't know why random chance or some mindless law would. And recall, the version of God we've described — simply a person with the power to control whether there are regularities — is relatively modest. Therefore, God provides a pretty good explanation of these regularities.
We'll mention one last objection. Proponents of a multiverse might say that regularity isn't surprising, because the probability that at least one universe exhibits regularity is high. Some proponents of a multiverse are motivated by scientific considerations. However, since the relevant scientific theories (inflation, string theory, many-worlds interpretations of quantum mechanics) posit underlying regularities that generate and maintain the multiverse, we can simply ask what explains those regularities. Other proponents of a multiverse are motivated by philosophical considerations — for example, that we should posit a plurality of possible worlds to make sense of our concepts of possibility and necessity. This might be a good reason to posit possible worlds, but it doesn't really explain regularities in our world. After all, you wouldn't find your sequence of royal flushes any less surprising upon learning that poker is a very popular game.
Philosophy is hard
One last disclaimer: Philosophy can be really hard. We don't claim to provide a proof, or even an especially strong argument, for the existence of God. Instead, we merely claim that this appeal to God has some important explanatory virtues and that, as a result, it deserves serious consideration as an explanation of why there are regularities.
Though modest, this conclusion is noteworthy. As we alluded to above, scientific practice requires regularities. By providing a philosophical explanation of regularities, we are trying to explain why science is possible in the first place. Relatedly, many Early Modern philosophers thought that scientific investigation of the natural world allowed us insight into the mind of God. If God's relation to the laws of nature is as we've suggested it might be, theists should have a very positive attitude towards the sciences. Likewise, those who prefer naturalistic or atheistic accounts should at least be open-minded about the relationship between science and religion. This is not a new lesson, but it provides a further illustration of the fact that, while there may be no role for God or other supernatural entities in scientific explanations, this does not mean that science itself is necessarily at odds with religious belief.
Suggestions for further reading
The journal article on which this essay is based is:
Tyler Hildebrand and Thomas Metcalf, "The Nomological Argument for the Existence of God." Noûs. DOI 10.1111/nous.12364 (available on EarlyView)
For a book-length defense of a divine explanation of regularities, see:
John Foster, The Divine Lawmaker. Oxford University Press, 2004
For an introduction to the metaphysics of laws of nature, see:
Tyler Hildebrand, "Non-Humean Theories of Natural Necessity." Philosophy Compass 15, 2020
For more on multiverse-style objections to design arguments, see:
Thomas Metcalf, "On Friederich's New Fine-Tuning Argument," Foundations of Physics 51, 2021
Thomas Metcalf, "Fine-Tuning the Multiverse," Faith and Philosophy 35, 2018
For readers interested in the role of God in philosophical accounts of laws in the Early Modern period, see:
Ott and Patton, Laws of Nature. Oxford University Press, 2018
Ott, Causation and Laws of Nature in Early Modern Philosophy. Oxford University Press, 2009
For introductory essays aimed at relative beginners, see:
Thomas Metcalf, "Design Arguments for the Existence of God," in 1000-Word Philosophy: https://1000wordphilosophy.com/2018/02/28/design-a...
Thomas Metcalf, "Philosophy and its Contrast with Science," in 1000-Word Philosophy: https://1000wordphilosophy.com/2018/02/13/philosop...
Michael Zerella, "Laws of Nature," in 1000-Word Philosophy: https://1000wordphilosophy.com/2014/02/17/laws-of-...
In the near term, gene editing is not likely to be useful. Even in the long term, it may not be very practical.
- Once perfected, gene editing is likely to be useful only under limited conditions.
- Multigenic diseases like schizophrenia and cardiovascular disease are probably too complicated to be fixed by gene editing.
- Embryo screening is a far more effective way to achieve the same objective.
The following is an adapted excerpt from the new book CRISPR People, reprinted with permission of the author.
I see no inherent or unmanageable ethical barriers to human germline genome editing. On the other hand, I see very few good uses for it. That is mainly because other technologies can attain almost all the important hoped-for benefits of human germline genome editing, often with lower risk. Two such technologies are particularly noteworthy: embryo selection and somatic genome editing.
Gene editing vs. embryo selection
The most obvious potential benefit would be to edit embryos, or the eggs and sperm used to make embryos, to avoid the births of children whose genetic variations would give them a certainty or high risk of a specific genetic disease. And here it is time to explain the ways genetic diseases or other traits get inherited. If the disease or trait depends on just one gene, we call it a Mendelian condition or trait, named after Gregor Mendel, the Austrian monk who first discovered this kind of inheritance. If more than one gene is involved, we cleverly call them non-Mendelian conditions or traits. Most of the discussion below is about Mendelian conditions for the simple reason that there is more to say about them.
Mendelian conditions can largely be put into five main categories, depending on where the relevant DNA is found and how many copies of the disease-causing variant are needed to lead to the disease: autosomal dominant, autosomal recessive, X-linked, Y-linked, or mitochondrial. Autosomal dominant diseases require only one copy of the disease-causing genetic variation; autosomal recessive diseases require two copies, one from each parent. X-linked diseases typically require two copies in women (one from each parent) but only one in men (who have only one X chromosome, always inherited from the mother). Y-linked diseases, which are unusual, are found only in men and require only one copy — because only men have a Y chromosome and normally they have only one copy of it. Mitochondrial diseases are inherited only from the mother and any mother with the disease will necessarily pass it on to all her children.
So, if an embryo has 47 CAG repeats in the relevant region of its huntingtin gene, it is doomed (if born) to have autosomal dominant Huntington's disease. One might use germline editing to reduce those 47 repeats to a safe number (under 37) and thus prevent the disease. Or if an embryo has two copies of the genetic variation for the autosomal recessive Tay-Sachs disease, it could be edited so that the embryo had one or no copies and would be safe. The same is true of X-linked, Y-linked, or mitochondrial diseases.
If this is safe and effective, it may make sense. But another technology that has been in clinical practice for about 30 years is known to be (relatively) safe and effective and can do the same thing — PGD [preimplantation genetic diagnosis]. PGD involves taking one or a few cells from an ex vivo embryo, testing the DNA in those cells, and using the results to determine whether or not to transfer that particular embryo to a woman's uterus for possible implantation, pregnancy, and birth. The first PGD baby was born in 1990. In 2016, the last year for which data are available, the U.S. Centers for Disease Control and Prevention (CDC) reported that about 22 percent of the roughly 260,000 IVF cycles performed that year in the United States involved PGD (or a version called preimplantation genetic screening, or PGS). That was up from about 5 percent the year before. Anecdotally, from conversations with people working in IVF clinics, it sounds as though PGD or PGS usage in 2019 may well be above 50 percent, at least in some areas of the United States.
If a couple wants to avoid having a child with a nasty Mendelian genetic disease or condition, they could, in a decade or more, use CRISPR or other gene-editing tools to change an embryo's variants into a safer form or, today, they could use PGD to find out which embryos carry, or do not carry, the dangerous variants. For an autosomal recessive condition, on average 25 percent of the embryos will be affected; for an autosomal dominant one, 50 percent will be. Even for dominant conditions, if one looks at 10 embryos, the chance that all 10 will have the "bad" version is one in 1,024. If you have 20 embryos to examine, it becomes one in 1,048,576.
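As a quick check on those odds, here is a minimal sketch assuming each embryo is an independent draw with the textbook Mendelian probabilities:

```python
# Textbook Mendelian expectations for the fraction of affected embryos:
#   autosomal recessive (both parents carriers): 1 in 4
#   autosomal dominant (one affected parent):    1 in 2
p_dominant = 0.5

# Chance that *every* embryo examined carries the "bad" version,
# treating the embryos as independent draws.
for n in (10, 20):
    p_all_affected = p_dominant ** n
    print(f"{n} embryos, all affected: 1 in {round(1 / p_all_affected):,}")

# 10 embryos, all affected: 1 in 1,024
# 20 embryos, all affected: 1 in 1,048,576
```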
So, why take the new, riskier — and, to many people, disconcerting — path of gene editing rather than just selecting embryos?
Gene editing in somatic cells vs. germline cells
Somatic cell therapy does not change the germline, and it is a technology much closer to being shown safe and effective than human germline genome editing. Arguably, the fact that the change is only being made in one or a few of the many tissues of the body would improve its safety over a change that exists in every cell, including cells where a particular off-target change has harmful effects.
On the other hand, genome editing of an egg, a sperm, or a zygote needs to change only one cell. This might prove more effective than changing, say, 100 million blood-forming stem cells or several billion lung cells. Furthermore, somatic cell editing would not necessarily work for all conditions. For some, too many different cells or tissues may have to be targeted. For others, the damage may begin before birth, or even before the stage of fetal development where in utero somatic editing becomes plausible. For diseases with very early consequential effects, somatic cell therapy may be inferior to embryo editing or embryo selection.
Even when somatic editing is possible, human germline genome editing retains one advantage: the process would not have to be repeated in the next generation. If somatic editing is used, that person would still have eggs or sperm that could pass on the disease. If she or he wanted to avoid a sick child, PGD or somatic cell gene therapy might be necessary. If germline editing is used, that child's children will be free from the risk of inheriting the disease from their edited parents. But is this a bug or a feature? It adds a choice — not a choice for the embryo that is, or isn't, edited but for the parents of that embryo. Somatic cell editing continues the possibility of a disease in the next generation — but allows that generation's parents to make the decision. One might — or might not — see that as a benefit.
Gene editing in multigenic diseases
In non-Mendelian (sometimes called multigenic) diseases, no one variant plays a powerful role in causing the disease. Variations in two, or twenty, or two hundred genes may influence the condition. Collectively, those influences might be 100 percent, though the cases we know now add up to much lower certainties. We do not yet know of many good examples, though at least one paper claims to have found strong evidence that variations of different genes, working together, increase the risk for some cases of autism. And, more generally, we know of many combinations of shared genomic regions that (slightly) increase or lower the risk for various diseases or traits in particular, studied populations. (These have led to the hot area of "polygenic risk scores," whose ultimate significance remains to be seen.)
The biggest problem with human germline genome editing for non-Mendelian conditions is that we do not know nearly enough about the conditions. We believe that many conditions are non-Mendelian, but how many genes are involved? Which genomic variations add or subtract risk? How do the effects of variations from different genes combine to create risks? In a simple world, they would be additive: if having a particular variation of one gene increases a person's risk of a disease by 10 percentage points and having a particular variation of a different gene increases that person's risk by 5 percentage points, then having both would increase the risk by 15 percentage points. But there is no inherent reason nature has to work that way; the combined effects may be greater or less than their sum. It is even conceivable that having two variations that each, individually, raise a person's risk might somehow lower the overall risk. We know almost nothing about the structure of these non-Mendelian, or multigenic, risks.
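To make that point concrete, here is a toy illustration using the hypothetical numbers from the paragraph above; the interaction terms are invented purely for illustration and reflect no real effect sizes:

```python
# Hypothetical effect sizes from the example above, expressed as absolute risk increases.
effect_a = 0.10   # variant A alone: +10 percentage points
effect_b = 0.05   # variant B alone: +5 percentage points

# The "simple world" assumption: effects just add.
additive = effect_a + effect_b            # +15 percentage points

# But nature need not be additive; the variants might interact.
# The interaction terms below are invented purely for illustration.
synergistic = effect_a + effect_b + 0.07  # combined effect greater than the sum
dampened = effect_a + effect_b - 0.12     # combined effect less than the sum

print(f"additive: +{additive * 100:.0f} pts, "
      f"with synergy: +{synergistic * 100:.0f} pts, "
      f"with dampening: +{dampened * 100:.0f} pts")
```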
It is clear, though, that, in general, PGD would be much less useful for non-Mendelian diseases than for Mendelian ones. The chances of finding an embryo with "the right" set of genetic variations at five different spots along the genome will be much smaller than the chance of finding an embryo with just one "right" variation. If the odds for any one variation are 50/50, the overall odds for any five variations in one embryo are one in 32. If gene editing could safely and effectively edit five places in an embryo's genome (or in two gametes' genomes), it could deliver the preferred outcome. On the other hand, if we can use genome editing to do that in an embryo or gamete, we may well be able to do the same in a fetus, a baby, a child, or an adult through somatic cell gene therapy — unless the condition begins to cause harm early in development, or broadly enough in the body that it needs to be delivered to all the body's cells.
Is gene editing practical?
Right now, there is no non-Mendelian condition for which we are confident we know the exact set of genes involved. Neither do we know the negative and positive effects of different combinations of genetic variants. Until these uncertainties are adequately resolved, human germline genome editing, though in theory better than PGD, will not be safe or effective enough for use. Once they are resolved, in many situations it will be no better than somatic cell genome editing, except that it avoids the need to hit targets in multiple tissues or cell types and the need to repeat the editing in the next generation.
Adapted from CRISPR PEOPLE: The Science and Ethics of Editing Humans by Henry Greely. Copyright 2021. Reprinted with permission from the MIT Press.
Counterintuitively, directly combating misinformation online can spread it further. A different approach is needed.
- Misinformation behaves like the coronavirus: engaging with it can inadvertently cause it to spread.
- Social media companies have a business model based on getting users to spend ever more time on their platforms, which is why they are hesitant to remove engaging content.
- The best way to fight online misinformation is to drown it out with the truth.
A year ago, the Center for Countering Digital Hate warned of the parallel pandemics — the biological contagion of COVID-19 and the social contagion of misinformation, aiding the spread of the disease. Since the outbreak of COVID-19, anti-vaccine accounts have gained 10 million new social media followers, while we have witnessed arson attacks against 5G masts, hospital staff abused for treating COVID patients, and conspiracists addressing crowds of thousands.
Many have refused to follow guidance issued to control the spread of the virus, motivated by beliefs in falsehoods about its origins and effects. The reluctance we see in some to get the COVID vaccine is greater amongst those who rely on social media rather than traditional media for their information. In a pandemic, lies cost lives, and it has felt like a new conspiracy theory has sprung up online every day.
How we, as social media users, behave in response to misinformation can either enable or prevent it from being seen and believed by more people.
The rules are different online
If a colleague mentions in the office that Bill Gates planned the pandemic, or a friend at dinner tells the table that the COVID vaccine could make them infertile, the right thing to do is often to challenge their claims. We don't want anyone to be left believing these falsehoods.
But digital is different. The rules of physics online are not the same as they are in the offline world. We need new solutions for the problems we face online.
Now, imagine that in order to reply to your friend, you must first hand him a megaphone so that everyone within a five-block radius can hear what he has to say. It would do more damage than good, but this is essentially what we do when we engage with misinformation online.
Think about misinformation as being like the coronavirus — when we engage with it, we help to spread it to everyone else with whom we come into contact. If a public figure with a large following responds to a post containing misinformation, they ensure the post is seen by hundreds of thousands or even millions of people with one click. Social media algorithms also push content into more users' newsfeeds if it appears to be engaging, so lots of interactions from users with relatively small followings can still have unintended negative consequences.
Additionally, whereas we know our friend from the office or dinner, most of the misinformation we see online comes from strangers. These strangers often fall into one of two groups: true believers, whose minds are made up, and professional propagandists, who profit from building large audiences online and selling them products (including false cures). Both groups use trolling tactics, seeking to trigger people to respond in anger, which helps them reach new audiences and games the algorithm.
On the day the COVID vaccine was approved in the UK, anti-vaccine activists were able to provoke pro-vaccine voices into posting about thalidomide, exposing new audiences to a reason to distrust the medical establishment. Those who spread misinformation understand the rules of the game online; it's time those of us on the side of enlightenment values of truth and science did too.
How to fight online misinformation
Of course, it is much easier for social media companies to take on this issue than for us citizens. Research from the Center for Countering Digital Hate and Anti-Vax Watch last month found that 65% of anti-vaccine content on social media is linked to just twelve individuals and their organizations. Were the platforms to simply remove the accounts of these superspreaders, it would do a huge amount to reduce harmful misinformation.
The problem is that social media platforms are reluctant to do so. These businesses have been built by constantly increasing the amount of time users spend on their platforms. Getting rid of the creators of engaging content that has millions of people hooked is antithetical to the business model. It will require intervention from governments to force tech companies to finally protect their users and society as a whole.
So, what can the rest of us do, while we await state regulation?
Instead of engaging, we should be outweighing the bad with the good. Every time you see a piece of harmful misinformation, share advice or information from a trusted source, like the WHO or BBC, on the same subject. The trend of people celebrating and posting photos of themselves or loved ones receiving the vaccine has been far more effective than any attempt to disprove a baseless claim about Bill Gates or 5G mobile technology. In the attention economy that governs tech platforms, drowning out is a better strategy than rebuttal.
Imran Ahmed is CEO of the Center for Countering Digital Hate.
Because of our ability to think about thinking, "the gap between ape and man is immeasurably greater than the one between amoeba and ape."
- Self-awareness — namely, our capacity to think about our thoughts — is central to how we perceive the world.
- Without self-awareness, education, literature, and other human endeavors would not be possible.
- Striving toward greater self-awareness is the spiritual goal of many religions and philosophies.
The following is an excerpt from Dr. Stephen Fleming's forthcoming book Know Thyself. It is reprinted with permission from the author.
I now run a neuroscience lab dedicated to the study of self-awareness at University College London. My team is one of several working within the Wellcome Centre for Human Neuroimaging, located in an elegant town house in Queen Square in London. The basement of our building houses large machines for brain imaging, and each group in the Centre uses this technology to study how different aspects of the mind and brain work: how we see, hear, remember, speak, make decisions, and so on. The students and postdocs in my lab focus on the brain's capacity for self-awareness. I find it a remarkable fact that something unique about our biology has allowed the human brain to turn its thoughts on itself.
Until quite recently, however, this all seemed like nonsense. As the nineteenth-century French philosopher Auguste Comte put it: "The thinking individual cannot cut himself in two — one of the parts reasoning, while the other is looking on. Since in this case the organ observed and the observing organ are identical, how could any observation be made?" In other words, how can the same brain turn its thoughts upon itself?
Comte's argument chimed with scientific thinking at the time. After the Enlightenment dawned on Europe, an increasingly popular view was that self-awareness was special and not something that could be studied using the tools of science. Western philosophers were instead using self-reflection as a philosophical tool, much as mathematicians use algebra in the pursuit of new mathematical truths. René Descartes relied on self-reflection in this way to reach his famous conclusion, "I think, therefore I am," noting along the way that "I know clearly that there is nothing that can be perceived by me more easily or more clearly than my own mind." Descartes proposed that a central soul was the seat of thought and reason, commanding our bodies to act on our behalf. The soul could not be split in two — it just was. Self-awareness was therefore mysterious and indefinable, and off-limits to science.
We now know that the premise of Comte's worry is false. The human brain is not a single, indivisible organ. Instead, the brain is made up of billions of small components — neurons — that each crackle with electrical activity and participate in a wiring diagram of mind-boggling complexity. Out of the interactions among these cells, our entire mental life — our thoughts and feelings, hopes and dreams — flickers in and out of existence. But rather than being a meaningless tangle of connections with no discernible structure, this wiring diagram also has a broader architecture that divides the brain into distinct regions, each engaged in specialized computations. Just as a map of a city need not include individual houses to be useful, we can obtain a rough overview of how different areas of the human brain are working together at the scale of regions rather than individual brain cells. Some areas of the cortex are closer to the inputs (such as the eyes) and others are further up the processing chain. For instance, some regions are primarily involved in seeing (the visual cortex, at the back of the brain), others in processing sounds (the auditory cortex), while others are involved in storing and retrieving memories (such as the hippocampus).
In a reply to Comte in 1865, the British philosopher John Stuart Mill anticipated the idea that self-awareness might also depend on the interaction of processes operating within a single brain and was thus a legitimate target of scientific study. Now, thanks to the advent of powerful brain imaging technologies such as functional magnetic resonance imaging (fMRI), we know that when we self-reflect, particular brain networks indeed crackle into life and that damage or disease to these same networks can lead to devastating impairments of self-awareness.
I often think that if we were not so thoroughly familiar with our own capacity for self-awareness, we would be gobsmacked that the brain is able to pull off this marvelous conjuring trick. Imagine for a moment that you are a scientist on a mission to study new life-forms found on a distant planet. Biologists back on Earth are clamoring to know what they're made of and what makes them tick. But no one suggests just asking them! And yet a Martian landing on Earth, after learning a bit of English or Spanish or French, could do just that. The Martians might be stunned to find that we can already tell them something about what it is like to remember, dream, laugh, cry, or feel elated or regretful — all by virtue of being self-aware.
But self-awareness did not just evolve to allow us to tell each other (and potential Martian visitors) about our thoughts and feelings. Instead, being self-aware is central to how we experience the world. We not only perceive our surroundings; we can also reflect on the beauty of a sunset, wonder whether our vision is blurred, and ask whether our senses are being fooled by illusions or magic tricks. We not only make decisions about whether to take a new job or whom to marry; we can also reflect on whether we made a good or bad choice. We not only recall childhood memories; we can also question whether these memories might be mistaken.
Self-awareness also enables us to understand that other people have minds like ours. Being self-aware allows me to ask, "How does this seem to me?" and, equally importantly, "How will this seem to someone else?" Literary novels would become meaningless if we lost the ability to think about the minds of others and compare their experiences to our own. Without self-awareness, there would be no organized education. We would not know who needs to learn or whether we have the capacity to teach them. The writer Vladimir Nabokov elegantly captured this idea that self-awareness is a catalyst for human flourishing:
"Being aware of being aware of being. In other words, if I not only know that I am but also know that I know it, then I belong to the human species. All the rest follow s— the glory of thought, poetry, a vision of the universe. In that respect, the gap between ape and man is immeasurably greater than the one between amoeba and ape."
In light of these myriad benefits, it's not surprising that cultivating accurate self-awareness has long been considered a wise and noble goal. In Plato's dialogue Charmides, Socrates has just returned from fighting in the Peloponnesian War. On his way home, he asks a local boy, Charmides, if he has worked out the meaning of sophrosyne — the Greek word for temperance or moderation, and the essence of a life well lived. After a long debate, the boy's cousin Critias suggests that the key to sophrosyne is simple: self-awareness. Socrates sums up his argument: "Then the wise or temperate man, and he only, will know himself, and be able to examine what he knows or does not know…No other person will be able to do this."
Likewise, the ancient Greeks were urged to "know thyself" by a prominent inscription carved into the stone of the Temple of Delphi. For them, self-awareness was a work in progress and something to be striven toward. This view persisted into medieval religious traditions: for instance, the Italian priest and philosopher Saint Thomas Aquinas suggested that while God knows Himself by default, we need to put in time and effort to know our own minds. Aquinas and his monks spent long hours engaged in silent contemplation. They believed that only by participating in concerted self-reflection could they ascend toward the image of God.
A similar notion of striving toward self-awareness is seen in Eastern traditions such as Buddhism. The spiritual goal of enlightenment is to dissolve the ego, allowing more transparent and direct knowledge of our minds acting in the here and now. The founder of Chinese Taoism, Lao Tzu, captured this idea that gaining self-awareness is one of the highest pursuits when he wrote, "To know that one does not know is best; Not to know but to believe that one knows is a disease."
Today, there is a plethora of websites, blogs, and self-help books that encourage us to "find ourselves" and become more self-aware. The sentiment is well meant. But while we are often urged to have better self-awareness, little attention is paid to how self-awareness actually works. I find this odd. It would be strange to encourage people to fix their cars without knowing how the engine worked, or to go to the gym without knowing which muscles to exercise. This book aims to fill this gap. I don't pretend to give pithy advice or quotes to put on a poster. Instead, I aim to provide a guide to the building blocks of self-awareness, drawing on the latest research from psychology, computer science, and neuroscience. By understanding how self-awareness works, I aim to put us in a position to answer the Athenian call to use it better.
Modern science progresses with an intensity and even irrationality that Aristotle could not fathom.
- Modern science requires scrutinizing the tiniest of details and an almost irrational dedication to empirical observation.
- Many scientists believe that theories should be "beautiful," but such argumentation is forbidden in modern science.
- Neglecting beauty would be a step too far for Aristotle.
Modern science has done astounding things: sending probes to Pluto, discerning the nature of light, vaccinating the globe. Its power to plumb the world's inner workings, many scientists and philosophers of science would say, hinges on its exacting attention to empirical evidence. The ethos guiding scientific inquiry might be formulated so: "Credit must be given to theories only if what they affirm agrees with the observed facts."
Those are the words of the Greek philosopher Aristotle, writing in the fourth century BCE. Why, then, was it only during the Scientific Revolution of the 17th century, two thousand years later, that science came into its own? Why wasn't it Aristotle who invented modern science?
The answer is, first, that modern science attends to a different kind of observable fact than the sort that guided Aristotle. Second, modern science attends with an intensity — indeed an unreasonable narrow-mindedness — that Aristotle would have found to be more than a little unhinged. Let's explore those two ideas in turn.
In 1915, Albert Einstein proposed a new theory of gravitation — the general theory of relativity. It told a story radically different from the prevailing Newtonian theory; gravity, according to Einstein, was not a force but rather the manifestation of matter's propensity to travel along the straightest possible path through twisted spacetime. Relativity revised the notion of gravitation on the grandest conceptual scale, but to test it required the scrutiny of minutiae.
Einstein's general relativity predicts gravitational lensing. (Credit: NASA, ESA, and STScI / Public Domain via Wikipedia)
When Arthur Eddington sought experimental evidence for the theory by measuring gravity's propensity to bend starlight, he photographed the same star field in the night sky and then in close proximity to the eclipsed sun, looking for a slight displacement in the positions of the stars that would reveal the degree to which the sun's mass deflected their light. The change in position was on the order of a fraction of a millimeter on his photographic plates. In that minuscule discrepancy lay the reason to accept a wholly new vision of the nature of the forces that shape galaxies.
Aristotle would not have thought to look in these places, at these diminutive magnitudes. Even the pre-scientific thinkers who believed that the behavior of things was determined by their microscopic structure did not believe it was possible for humans to discern that structure. When they sought a match between their ideas and the observed facts, they meant the facts that any person might readily encounter in the world around them: the gross motions of cannonballs and comets; the overall attunement of animals and their environs; the tastes, smells, and sounds that force themselves on our sensibilities without asking our permission. They were looking in the wrong place. The clues to the deepest truths have turned out to be deeply hidden.
Even in those cases where the telling evidence is visible to the unassisted eye, the effort required to gather what's needed can be monumental. Charles Darwin spent nearly five years sailing around the world on a 90-foot-long ship, the Beagle, recording the sights and sounds that would prompt his theory of evolution by natural selection. Following in his famous footsteps, the Princeton biologists Rosemary and Peter Grant have spent nearly 50 years visiting the tiny Galápagos island of Daphne Major every summer to observe the local finch populations. In so doing, they witnessed the creation of a new species.
Similarly excruciating demands are made by many other scientific projects, each consumed with the hunt for subtle detail. The LIGO experiment to measure gravitational waves commenced in the 1970s, was nearly closed down in the 1980s, began operating its detectors only in 2002, and then for well over a decade found nothing. Upgraded machinery revealed the waves at last in 2015. The scientists who had spent their entire careers working on LIGO were by then retired from their long-time university positions.
The "iron rule" of modern science
What pushes scientists to undertake these titanic efforts? That question brings me to the second way in which modern science's attitude to evidence differs from Aristotle's. There is something about the institutions of science, as the philosopher and historian Thomas Kuhn wrote, that "forces scientists to investigate some part of nature in a detail and depth that would otherwise be unimaginable". That something is an "iron rule" to the effect that, when publishing arguments for or against a hypothesis, only empirical evidence counts. That is to say, the only kind of argument that is allowed in science's official organs of communication is one that assesses a theory according to its ability to predict or explain the observable facts.
Aristotle and Alexander the Great. (Credit: Charles Laplante / Public Domain via Wikipedia)
Aristotle said that evidence counts, but he did not say that only evidence counts. To get a feel for the significance of this additional word, one of modern science's most significant ingredients, let me return to Eddington's attempt to test Einstein's theory by photographing stars during a solar eclipse.
Eddington was himself as much a theoretical as an experimental physicist. He was struck by the mathematical beauty of Einstein's theory, which he took as a sign of its superiority to the old, Newtonian physics. He might have devoted himself to promoting relativity theory on these grounds, proselytizing its aesthetic merits with his elegant writing style and his many scientific connections. But in scientific argument, only empirical evidence counts. To appeal to a theory's beauty is to transgress this iron rule.
If Eddington was to advocate for Einstein, he would have to do so with measurements. Consequently, he found himself on a months-long expedition to Africa, where he and his collaborators sweated over their equipment day after day while praying for clear skies. In short, the iron rule forced Eddington to put beauty aside and to get on the boat. That is how scientists are pushed to hunt down the fine-grained, often elusive observations that endow science with its extraordinary power.
Irrational but effective
Though it may be a resounding success, there is something very peculiar about the iron rule. For Eddington and many other physicists, beauty is an important, even a crucial, consideration in determining the truth: "We would not accept any theory as final unless it were beautiful," wrote the Nobelist Steven Weinberg.
At the same time, the iron rule stipulates that beauty may play no part in scientific argument, or at least, in official, written scientific argument. The rule tells scientists, then, to ignore what they take to be an immensely valuable criterion for assessing theories. That seems oddly, even irrationally, narrow-minded. It turns out, then, that science's knowledge-making prowess is owed in great part to a kind of deliberate blindness, an unreasonable insistence that inquirers into nature consider nothing but observed fact.
Michael Strevens writes about science, understanding, complexity, and the nature of thought, and teaches philosophy at New York University. His most recent book, The Knowledge Machine (Liveright, 2020), sets out to explain how science works so well and why it took so long to get it right.