The fatal flaw lurking in American leftist politics
What is liberal America's big, and possibly fatal, mistake? Failing to recognize its own extremists.
Jordan B. Peterson, raised and toughened in the frigid wastelands of Northern Alberta, has flown a hammer-head roll in a carbon-fiber stunt-plane, explored an Arizona meteorite crater with astronauts, and built a Kwagu'l ceremonial bighouse on the upper floor of his Toronto home after being invited into and named by that Canadian First Nation. He's taught mythology to lawyers, doctors and business people, consulted for the UN Secretary General, helped his clinical clients manage depression, obsessive-compulsive disorder, anxiety, and schizophrenia, served as an adviser to senior partners of major Canadian law firms, and lectured extensively in North America and Europe. With his students and colleagues at Harvard and the University of Toronto, Dr. Peterson has published over a hundred scientific papers, transforming the modern understanding of personality, while his book Maps of Meaning: The Architecture of Belief revolutionized the psychology of religion. His latest book is 12 Rules for Life: An Antidote to Chaos.
Jordan Peterson: I would like to talk briefly about depolarization on the Left and the Right, because I think there’s a technical problem that needs to be addressed. So here’s what I’ve been thinking about.
It’s been obvious to me for some time that, for some reason, the fundamental claim of post-modernism is something like an infinite number of interpretations and no canonical overarching narrative. Okay, but the problem with that is: okay, now what?
No narrative, no value structure that is canonically overarching, so what the hell are you going to do with yourself? How are you going to orient yourself in the world? Well, the post-modernists have no answer to that. So what happens is they default—without any real attempt to grapple with the cognitive dissonance—they default to this kind of loose, egalitarian Marxism. And if they were concerned with coherence that would be a problem, but since they’re not concerned with coherence it doesn’t seem to be a problem.
But the force that’s driving the activism is mostly the Marxism rather than the post-modernism. It’s more like an intellectual gloss to hide the fact that a discredited economic theory is being used to fuel an educational movement and to produce activists. But there’s no coherence to it.
It’s not like I’m making this up, you know. Derrida himself—and Foucault as well—they were barely repentant Marxists. They were part of the student revolutions in France in the 1960s, and what happened to them, essentially—and what happened to Jean-Paul Sartre for that matter—was that by the end of the 1960s you couldn’t be conscious and thinking and pro-Marxist. There was so much evidence that had come pouring in from the former Soviet Union, from the Soviet Union at that point, and from Maoist China, of the absolutely devastating consequences of the doctrine that it was impossible to be apologetic for it by that point in time.
So the French intellectuals in particular just pulled off a sleight of hand and transformed Marxism into post-modern identity politics. And we’ve seen the consequence of that. It’s not good. It’s a devolution into a kind of tribalism that will tear us apart on the Left and on the Right.
In my house, I have a very large collection of socialist realist paintings from the former Soviet Union—propaganda pieces, but also kind of harsh impressionist pieces of working-class people and so forth—and I collected them for a variety of reasons. Now you could debate about the propriety of that given the murderousness of those regimes. And fair enough, I have my reasons. But I don’t have paintings from the Nazi era in my house, and I wouldn’t. And that’s been a puzzlement to me because I regard the communists, the totalitarian communist regimes, as just as murderous as the Nazi regimes.
But there’s an evil associated with the Nazi regime that seems more palpable in some sense. So I’ve been thinking about that for a long time. And then I’ve been thinking about a corollary to that, which is part of the problem with our current political debate.
On the Right, I think we’ve identified markers for people who have gone too far in their ideological presuppositions. And it looks to me like the marker we’ve identified is racial superiority. I think we’ve known that probably since the end of World War II, but we saw a pretty good example of it in the 1960s with William Buckley, because when Buckley put out his conservative magazine, the David Duke types kind of attached themselves to it, and he said, “No, here’s the boundary. You guys are on the wrong side of the boundary. I’m not with you.” And Ben Shapiro recently did this as well, for example, in the aftermath of the Charlottesville incident.
So what’s interesting is that on the conservative side of the spectrum we’ve figured out how to box in the radicals and say, “No, you’re outside the domain of acceptable opinion.”
Now here’s the issue: We know that things can go too far on the Right and we know that things can go too far on the Left. But we don’t know what the markers are for going too far on the Left. And I would say that it’s ethically incumbent on those who are liberal or Left-leaning to identify the markers of pathological extremism on the Left and to distinguish themselves from the people who hold those pathological viewpoints. And I don’t see that that’s being done. And I think that’s a colossal ethical failure, and it may doom the liberal-Left project.
The Lefties have their point. They’re driven fundamentally by a horror of inequality and the catastrophes that inequality produces—and fair enough, because inequality is a massive social force and it does produce, it can produce, catastrophic consequences. So to be concerned about that politically is reasonable. But we do know that that concern can go too far. So I’ve suggested that there’s a triumvirate of concepts that, when implemented, have the same potentially catastrophic outcomes as the racial superiority doctrines: diversity, inclusivity, and equity—even though you could have an intelligent conversation about two of those anyway. But I would say that of the three, equity is the most unacceptable. The doctrine of equality of outcome. And it seems to me that that’s where people who are thoughtful on the Left should draw the line, and say, “No. Equality of opportunity? Not only fair enough, but laudable. But equality of outcome…?” It’s like, “No, you’ve crossed the line. We’re not going there with you.”
Now maybe that’s wrong. Maybe it’s not equity. That’s my candidate for it. But it is definitely the case that you can go too far on the Left and it’s definitely the case that we don’t know where to draw the line. And that’s a big problem.
An example of equality of outcome is the attempt now being made to legislate the elimination of the gender pay gap. That’s a good example. I mean you think, “Well no, that’s not—like there’s nothing pathological about that.” It’s like, “Oh yes there is!”
You have to set up a bureaucratic inquisition to ensure that that’s the case. It’s like—it’s not good. And that’s actually a relatively—like, of all the things that you could push for with regards to equality of outcome, that’s rather simple and definable. It’s not even murky. Once it starts to get murky it’s just complex beyond any rectification. You cannot win if you play identity politics. There’s a bunch of reasons like—here’s one: “Let’s push for equality of outcome.” All right, who measures it? That’s a big problem. It’s not a little problem. It’s not like, “We’ll figure that out later.” Oh no, no, no. The measurement problem is paramount. So you don’t solve that, you don’t solve the problem at all. Who measures it? “A bureaucracy.” Okay, which bureaucracy? “Well, a large one that has its fingers everywhere.” Okay, that’s problem number one. And it’s staffed by exactly the sort of people that you don’t want to staff it, by the way.
Next problem. Which identities? That’s the intersectional problem. The radical Leftists have already hit the problem of intersectionality. It’s like, “Well, we’ve got race and gender, let’s say.” Well, okay, what about the intersection between race and gender? That’s a multiplicative intersection, right? So you might start with three racial categories and two gender categories. But you end up with six intersectional categories. And then you’re just getting started. How many genders? Hypothetically there’s an infinite number. What about racial groupings? Are you going to include ethnicity? Do you want to add class to that? Do you want to add socioeconomic class? How about attractiveness?
And every time you add another singular category, you increase the number of intersectional categories multiplicatively. What are you going to do? Are you going to equate across all those categories? Really? And across what dimensions? What are the dimensions of equality that you want to establish? Is it just socioeconomic? Is it just salary? What about all the other ways that people are unequal? Are you just going to stop with economic inequality? Are you? It’s a complete bloody catastrophe. It’s an absolute mess.
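To make the arithmetic behind this point concrete, here is a minimal sketch in Python. The category counts are hypothetical, chosen purely for illustration; the only claim is that the number of intersectional groups is the product, not the sum, of the category counts along each dimension.

```python
from math import prod

# Hypothetical category counts per identity dimension (illustrative only).
dimensions = {"race": 3, "gender": 2}
print(prod(dimensions.values()))  # 3 x 2 = 6 intersectional groups

# Each added dimension multiplies the total instead of adding to it.
dimensions.update({"ethnicity": 4, "class": 5, "attractiveness": 3})
print(prod(dimensions.values()))  # 3 x 2 x 4 x 5 x 3 = 360 intersectional groups
```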
And intersectionality, the discovery of intersectionality on the Left, is actually the radical Left’s discovery of the fundamental flaw in their identity politics ideology. Groups can be multiplied without limit. That’s not just a problem; that’s a fatal flaw. And they’ve already discovered it, they just haven’t figured it out.
The reason that the West privileges the individual is because we figured out 2,000 years ago, 3,000 years ago, that you can fractionate group identity appropriately right down to the level of the individual.
What is political extremism? Professor of psychology Jordan Peterson points out that America knows what right-wing radicalism looks like: The doctrine of racial superiority is where conservatives have drawn the line. "What’s interesting is that on the conservative side of the spectrum we’ve figured out how to box in the radicals and say, 'No, you’re outside the domain of acceptable opinion,'" says Peterson. But where's that line for the Left? There is no universal marker of what extreme liberalism looks like, which is devastating to the ideology itself but also to political discourse as a whole. Fortunately, Peterson is happy to suggest such a marker: "The doctrine of equality of outcome. It seems to me that that’s where people who are thoughtful on the Left should draw the line, and say no. Equality of opportunity? [That's] not only fair enough, but laudable. But equality of outcome…? It’s like: 'No, you’ve crossed the line. We’re not going there with you.'" Peterson argues that it's the ethical responsibility of left-leaning people to identify liberal extremism and distinguish themselves from it the same way conservatives distance themselves from the doctrine of racial superiority. Failing to recognize such extremism may be liberalism's fatal flaw. Jordan Peterson is the author of 12 Rules for Life: An Antidote to Chaos.
Psychopath-ish: How “healthy” brains can look and function like those of psychopaths
A recent study used fMRI to compare the brains of psychopathic criminals with a group of 100 well-functioning individuals, finding striking similarities.
- The study used psychological inventories to assess a group of violent criminals and healthy volunteers for psychopathy, and then examined how their brains responded to watching violent movie scenes.
- The fMRI results showed that the brains of healthy subjects who scored high in psychopathic traits reacted similarly to those of the psychopathic criminal group. Both of these groups also showed atrophy in brain regions involved in regulating emotion.
- The study adds complexity to common conceptions of what differentiates a psychopath from a "healthy" individual.
When considering what precisely makes someone a psychopath, the lines can be blurry.
Psychological research has shown that many people in society have some degree of malevolent personality traits, such as those described by the "dark triad": narcissism (entitled self-importance), Machiavellianism (strategic exploitation and deceit), and psychopathy (callousness and cynicism). But while people who score high in these traits are more likely to end up in prison, most of them are well functioning and don't engage in extreme antisocial behaviors.
Now, a new study published in Cerebral Cortex found that the brains of psychopathic criminals are structurally and functionally similar to those of many well-functioning, non-criminal individuals with psychopathic traits. The results suggest that psychopathy isn't a binary classification, but rather a "constellation" of personality traits that "vary in the non-incarcerated population with normal range of social functioning."
Assessing your inner psychopath
The researchers used functional magnetic resonance imaging (fMRI) to compare the brains of violent psychopathic criminals to those of healthy volunteers. All participants were assessed for psychopathy through commonly used inventories: the Hare Psychopathy Checklist-Revised and the Levenson Self-Report Psychopathy Scale.
Experimental design and sample stimuli. The subjects viewed a compilation of 137 movie clips with variable violent and nonviolent content. (Credit: Nummenmaa et al.)
Both groups watched a 26-minute-long medley of movie scenes that were selected to portray a "large variability of social and emotional content." Some scenes depicted intense violence. As participants watched the medley, fMRI recorded how various regions of their brains responded to the content.
The goal was to see whether the brains of psychopathic criminals looked and reacted similarly to the brains of healthy subjects who scored high in psychopathic traits. The results showed similar reactions: When both groups viewed violent scenes, the fMRI revealed strong reactions in the orbitofrontal cortex and anterior insula, brain regions associated with regulating emotion.
These similarities manifested as a positive association: The more psychopathic traits a healthy subject displayed, the more their brain responses resembled those of the criminal group. What's more, the fMRI revealed a similar association between psychopathic traits and brain structure, with those scoring high in psychopathy showing lower gray matter density in the orbitofrontal cortex and anterior insula.
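To illustrate what a "positive association" of this kind means in practice, here is a minimal sketch with simulated data. The variable names, score ranges, and numbers are invented for illustration and do not reproduce the study's actual analysis pipeline; the point is simply that a correlation coefficient quantifies whether higher trait scores go together with more "criminal-like" neural responses.

```python
# Minimal sketch with made-up numbers (not the study's pipeline): quantifying a
# positive association between psychopathy scores and a neural-similarity index.
import numpy as np

rng = np.random.default_rng(0)
psychopathy_score = rng.uniform(0, 40, size=100)            # e.g., self-report totals
neural_similarity = 0.02 * psychopathy_score + rng.normal(0, 0.2, size=100)

r = np.corrcoef(psychopathy_score, neural_similarity)[0, 1]
print(f"Pearson r = {r:.2f}")  # positive r: higher scores, more criminal-like responses
```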
There were some key differences between the groups, however. The researchers noted that the structural abnormalities in the healthy sample were mainly associated with primary psychopathic traits: an inclination to lie, lack of remorse, and callousness. Meanwhile, the functional responses of the healthy subjects were associated with secondary psychopathic traits: impulsivity, a short temper, and low tolerance for frustration.
Overall, the study further illuminates some of the biological drivers of psychopathy, and it adds nuance to common conceptions of the differences between psychopathy and being "healthy."
Why do some psychopaths become criminals?
The million-dollar question remains unanswered: Why do some psychopaths end up in prison, while others (or rather, people who score high in psychopathic traits) lead well-functioning lives? The researchers couldn't give a definitive answer, but they did note that psychopathic criminals had lower connectivity within "key nodes of the social and emotional brain networks, including amygdala, insula, thalamus, and frontal pole."
"Thus, even though there are parallels in the regional responsiveness of the brain's affective circuit in the convicted psychopaths and well-functioning subjects with psychopathic traits, it is likely that the disrupted functional connectivity of this network is specific to criminal psychopathy."
Fighting online misinformation: We're doing it wrong
Counterintuitively, directly combating misinformation online can spread it further. A different approach is needed.
- Like the coronavirus, engaging with misinformation can inadvertently cause it to spread.
- Social media companies have a business model based on getting users to spend increasing amounts of time on their platforms, which is why they are hesitant to remove engaging content.
- The best way to fight online misinformation is to drown it out with the truth.
A year ago, the Center for Countering Digital Hate warned of parallel pandemics: the biological contagion of COVID-19 and the social contagion of misinformation, which aids the spread of the disease. Since the outbreak of COVID-19, anti-vaccine accounts have gained 10 million new social media followers, while we have witnessed arson attacks against 5G masts, hospital staff abused for treating COVID patients, and conspiracists addressing crowds of thousands.
Many have refused to follow guidance issued to control the spread of the virus, motivated by beliefs in falsehoods about its origins and effects. The reluctance we see in some to get the COVID vaccine is greater amongst those who rely on social media rather than traditional media for their information. In a pandemic, lies cost lives, and it has felt like a new conspiracy theory has sprung up online every day.
How we, as social media users, behave in response to misinformation can either enable or prevent it from being seen and believed by more people.
The rules are different online
If a colleague mentions in the office that Bill Gates planned the pandemic, or a friend at dinner tells the table that the COVID vaccine could make them infertile, the right thing to do is often to challenge their claims. We don't want anyone to be left believing these falsehoods.
But digital is different. The rules of physics online are not the same as they are in the offline world. We need new solutions for the problems we face online.
Now, imagine that in order to reply to your friend, you must first hand him a megaphone so that everyone within a five-block radius can hear what he has to say. It would do more damage than good, but this is essentially what we do when we engage with misinformation online.
Think about misinformation as being like the coronavirus — when we engage with it, we help to spread it to everyone else with whom we come into contact. If a public figure with a large following responds to a post containing misinformation, they ensure the post is seen by hundreds of thousands or even millions of people with one click. Social media algorithms also push content into more users' newsfeeds if it appears to be engaging, so lots of interactions from users with relatively small followings can still have unintended negative consequences.
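As a rough illustration of why engagement-driven ranking rewards replies of any kind, here is a toy sketch. The scoring rule, post texts, and numbers are hypothetical and do not describe any platform's actual algorithm; the sketch only shows that a ranker which counts angry rebuttals as engagement will surface the very post those rebuttals were meant to correct.

```python
# Toy illustration (not any platform's actual ranking algorithm): a feed that
# scores posts by total engagement, counting angry replies the same as likes.
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int = 0
    shares: int = 0
    replies: int = 0  # rebuttals and corrections land here too

    def engagement(self) -> int:
        return self.likes + self.shares + self.replies

posts = [
    Post("Baseless vaccine claim", likes=40, shares=10, replies=200),      # mostly rebuttals
    Post("Accurate public-health update", likes=120, shares=30, replies=5),
]

# Ranking by raw engagement surfaces the misinformation first, even though
# most of its interactions were attempts to correct it.
for post in sorted(posts, key=lambda p: p.engagement(), reverse=True):
    print(post.engagement(), post.text)
```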
Additionally, whereas we know our friend from the office or dinner, most of the misinformation we see online will come from strangers. They will often come from one of two groups — true believers, whose minds are made up, and professional propagandists, who profit from building large audiences online and selling them products (including false cures). Both of these groups use trolling tactics; that is, they seek to trigger people to respond in anger, which helps them reach new audiences and game the algorithm.
On the day the COVID vaccine was approved in the UK, anti-vaccine activists were able to provoke pro-vaccine voices into posting about thalidomide, exposing new audiences to a reason to distrust the medical establishment. Those who spread misinformation understand the rules of the game online; it's time those of us on the side of enlightenment values of truth and science did too.
How to fight online misinformation
Of course, it is much easier for social media companies to take on this issue than for us citizens. Research from the Center for Countering Digital Hate and Anti-Vax Watch last month found that 65% of anti-vaccine content on social media is linked to just twelve individuals and their organizations. Were the platforms to simply remove the accounts of these superspreaders, it would do a huge amount to reduce harmful misinformation.
The problem is that social media platforms are reluctant to do so. These businesses have been built by constantly increasing the amount of time users spend on their platforms. Getting rid of the creators of engaging content that has millions of people hooked is antithetical to the business model. It will require intervention from governments to force tech companies to finally protect their users and society as a whole.
So, what can the rest of us do, while we await state regulation?
Instead of engaging, we should be outweighing the bad with the good. Every time you see a piece of harmful misinformation, share advice or information from a trusted source, like the WHO or BBC, on the same subject. The trend of people celebrating and posting photos of themselves or loved ones receiving the vaccine has been far more effective than any attempt to disprove a baseless claim about Bill Gates or 5G mobile technology. In the attention economy that governs tech platforms, drowning out is a better strategy than rebuttal.
Imran Ahmed is CEO of the Center for Countering Digital Hate.
A historian identifies the worst year in human history
A Harvard professor's study discovers the worst year to be alive.
- Harvard professor Michael McCormick argues the worst year to be alive was 536 AD.
- The year was terrible due to cataclysmic volcanic eruptions that blocked out the sun, as well as the spread of the plague.
- 536 ushered in the coldest decade in thousands of years and started a century of economic devastation.
The past year has been one of the worst in the lives of many people around the globe. A rampaging pandemic, dangerous political instability, weather catastrophes, and a profound change in lifestyle that most have never experienced or imagined.
But was it the worst year ever?
Nope. Not even close. In the eyes of the historian and archaeologist Michael McCormick, the absolute "worst year to be alive" was 536.
Why was 536 so bad? You could certainly argue that 1918, the last year of World War I when the Spanish Flu killed up to 100 million people around the world, was a terrible year by all accounts. 1349 could also be considered on this morbid list as the year when the Black Death wiped out half of Europe, with up to 20 million dead from the plague. Most of the years of World War II could probably lay claim to the "worst year" title as well. But 536 was in a category of its own, argues the historian.
It all began with an eruption...
According to McCormick, Professor of Medieval History at Harvard University, 536 was the precursor year to one of the worst periods of human history. It featured a volcanic eruption in Iceland early in the year, as established by a study of a Swiss glacier carried out by McCormick and the glaciologist Paul Mayewski from the Climate Change Institute of The University of Maine (UM) in Orono.
The ash spewed out by the volcano likely led to a fog that brought an 18-month-long stretch of daytime darkness across Europe, the Middle East, and portions of Asia. As the Byzantine historian Procopius wrote, "For the sun gave forth its light without brightness, like the moon, during the whole year." He also recounted that it looked like the sun was always in eclipse.
Cassiodorus, a Roman politician of that time, wrote that the sun had a "bluish" color, the moon had no luster, and "seasons seem to be all jumbled up together." Even creepier, he wrote: "We marvel to see no shadows of our bodies at noon."
...that led to famine...
The dark days also brought a period of coldness, with summer temperatures falling by 1.5°C to 2.5°C. This started the coldest decade in the past 2,300 years, reports Science, leading to the devastation of crops and worldwide hunger.
...and the fall of an empire
In 541, the bubonic plague added considerably to the world's misery. Spreading from the Roman port of Pelusium in Egypt, the so-called Plague of Justinian caused the deaths of up to one half of the population of the eastern Roman Empire. This, in turn, sped up its eventual collapse, writes McCormick.
Between the environmental cataclysms, with massive volcanic eruptions also in 540 and 547, and the devastation brought on by the plague, Europe was in for an economic downturn for nearly all of the next century, until 640 when silver mining gave it a boost.
Was that the worst time in history?
Of course, the absolute worst time in history depends on who you were and where you lived.
Native Americans can easily point to 1520, when smallpox, brought over by the Spanish, killed millions of indigenous people. By 1600, up to 90 percent of the population of the Americas (about 55 million people) was wiped out by various European pathogens.
Like all things, the grisly title of "worst year ever" comes down to historical perspective.
Self-awareness is what makes us human
Because of our ability to think about thinking, "the gap between ape and man is immeasurably greater than the one between amoeba and ape."
- Self-awareness — namely, our capacity to think about our thoughts — is central to how we perceive the world.
- Without self-awareness, education, literature, and other human endeavors would not be possible.
- Striving toward greater self-awareness is the spiritual goal of many religions and philosophies.
The following is an excerpt from Dr. Stephen Fleming's forthcoming book Know Thyself. It is reprinted with permission from the author.
I now run a neuroscience lab dedicated to the study of self-awareness at University College London. My team is one of several working within the Wellcome Centre for Human Neuroimaging, located in an elegant town house in Queen Square in London. The basement of our building houses large machines for brain imaging, and each group in the Centre uses this technology to study how different aspects of the mind and brain work: how we see, hear, remember, speak, make decisions, and so on. The students and postdocs in my lab focus on the brain's capacity for self-awareness. I find it a remarkable fact that something unique about our biology has allowed the human brain to turn its thoughts on itself.
Until quite recently, however, this all seemed like nonsense. As the nineteenth-century French philosopher Auguste Comte put it: "The thinking individual cannot cut himself in two — one of the parts reasoning, while the other is looking on. Since in this case the organ observed and the observing organ are identical, how could any observation be made?" In other words, how can the same brain turn its thoughts upon itself?
Comte's argument chimed with scientific thinking at the time. After the Enlightenment dawned on Europe, an increasingly popular view was that self-awareness was special and not something that could be studied using the tools of science. Western philosophers were instead using self-reflection as a philosophical tool, much as mathematicians use algebra in the pursuit of new mathematical truths. René Descartes relied on self-reflection in this way to reach his famous conclusion, "I think, therefore I am," noting along the way that "I know clearly that there is nothing that can be perceived by me more easily or more clearly than my own mind." Descartes proposed that a central soul was the seat of thought and reason, commanding our bodies to act on our behalf. The soul could not be split in two — it just was. Self-awareness was therefore mysterious and indefinable, and off-limits to science.
We now know that the premise of Comte's worry is false. The human brain is not a single, indivisible organ. Instead, the brain is made up of billions of small components — neurons — that each crackle with electrical activity and participate in a wiring diagram of mind-boggling complexity. Out of the interactions among these cells, our entire mental life — our thoughts and feelings, hopes and dreams — flickers in and out of existence. But rather than being a meaningless tangle of connections with no discernible structure, this wiring diagram also has a broader architecture that divides the brain into distinct regions, each engaged in specialized computations. Just as a map of a city need not include individual houses to be useful, we can obtain a rough overview of how different areas of the human brain are working together at the scale of regions rather than individual brain cells. Some areas of the cortex are closer to the inputs (such as the eyes) and others are further up the processing chain. For instance, some regions are primarily involved in seeing (the visual cortex, at the back of the brain), others in processing sounds (the auditory cortex), while others are involved in storing and retrieving memories (such as the hippocampus).
In a reply to Comte in 1865, the British philosopher John Stuart Mill anticipated the idea that self-awareness might also depend on the interaction of processes operating within a single brain and was thus a legitimate target of scientific study. Now, thanks to the advent of powerful brain imaging technologies such as functional magnetic resonance imaging (fMRI), we know that when we self-reflect, particular brain networks indeed crackle into life and that damage or disease to these same networks can lead to devastating impairments of self-awareness.
I often think that if we were not so thoroughly familiar with our own capacity for self-awareness, we would be gobsmacked that the brain is able to pull off this marvelous conjuring trick. Imagine for a moment that you are a scientist on a mission to study new life-forms found on a distant planet. Biologists back on Earth are clamoring to know what they're made of and what makes them tick. But no one suggests just asking them! And yet a Martian landing on Earth, after learning a bit of English or Spanish or French, could do just that. The Martians might be stunned to find that we can already tell them something about what it is like to remember, dream, laugh, cry, or feel elated or regretful — all by virtue of being self-aware.
But self-awareness did not just evolve to allow us to tell each other (and potential Martian visitors) about our thoughts and feelings. Instead, being self-aware is central to how we experience the world. We not only perceive our surroundings; we can also reflect on the beauty of a sunset, wonder whether our vision is blurred, and ask whether our senses are being fooled by illusions or magic tricks. We not only make decisions about whether to take a new job or whom to marry; we can also reflect on whether we made a good or bad choice. We not only recall childhood memories; we can also question whether these memories might be mistaken.
Self-awareness also enables us to understand that other people have minds like ours. Being self-aware allows me to ask, "How does this seem to me?" and, equally importantly, "How will this seem to someone else?" Literary novels would become meaningless if we lost the ability to think about the minds of others and compare their experiences to our own. Without self-awareness, there would be no organized education. We would not know who needs to learn or whether we have the capacity to teach them. The writer Vladimir Nabokov elegantly captured this idea that self-awareness is a catalyst for human flourishing:
"Being aware of being aware of being. In other words, if I not only know that I am but also know that I know it, then I belong to the human species. All the rest follow s— the glory of thought, poetry, a vision of the universe. In that respect, the gap between ape and man is immeasurably greater than the one between amoeba and ape."
In light of these myriad benefits, it's not surprising that cultivating accurate self-awareness has long been considered a wise and noble goal. In Plato's dialogue Charmides, Socrates has just returned from fighting in the Peloponnesian War. On his way home, he asks a local boy, Charmides, if he has worked out the meaning of sophrosyne — the Greek word for temperance or moderation, and the essence of a life well lived. After a long debate, the boy's cousin Critias suggests that the key to sophrosyne is simple: self-awareness. Socrates sums up his argument: "Then the wise or temperate man, and he only, will know himself, and be able to examine what he knows or does not know…No other person will be able to do this."
Likewise, the ancient Greeks were urged to "know thyself" by a prominent inscription carved into the stone of the Temple of Delphi. For them, self-awareness was a work in progress and something to be striven toward. This view persisted into medieval religious traditions: for instance, the Italian priest and philosopher Saint Thomas Aquinas suggested that while God knows Himself by default, we need to put in time and effort to know our own minds. Aquinas and his monks spent long hours engaged in silent contemplation. They believed that only by participating in concerted self-reflection could they ascend toward the image of God.
A similar notion of striving toward self-awareness is seen in Eastern traditions such as Buddhism. The spiritual goal of enlightenment is to dissolve the ego, allowing more transparent and direct knowledge of our minds acting in the here and now. The founder of Chinese Taoism, Lao Tzu, captured this idea that gaining self-awareness is one of the highest pursuits when he wrote, "To know that one does not know is best; Not to know but to believe that one knows is a disease."
Today, there is a plethora of websites, blogs, and self-help books that encourage us to "find ourselves" and become more self-aware. The sentiment is well meant. But while we are often urged to have better self-awareness, little attention is paid to how self-awareness actually works. I find this odd. It would be strange to encourage people to fix their cars without knowing how the engine worked, or to go to the gym without knowing which muscles to exercise. This book aims to fill this gap. I don't pretend to give pithy advice or quotes to put on a poster. Instead, I aim to provide a guide to the building blocks of self-awareness, drawing on the latest research from psychology, computer science, and neuroscience. By understanding how self-awareness works, I aim to put us in a position to answer the Athenian call to use it better.
