Drugs: What America gets wrong about addiction and policy
Addiction is not a moral failure. It is a learning disorder, and viewing it otherwise prevents communities and policymakers from pursuing the ultimate goal: harm reduction.
MAIA SZALAVITZ: Addiction is compulsive behavior despite negative consequences, and it's really important to start by defining addiction because, for a long time, we defined it very poorly. We used to think that addiction meant needing a substance to function, and what that resulted in was the conclusion that cocaine was not addictive, because cocaine does not produce noticeable physical withdrawal. You may be cranky and irritable and crave cocaine, but you won't be puking and shaking and having the classic symptoms that you would see with alcohol or heroin withdrawal. So, cocaine wasn't addictive. Then crack came, and we realized that defining addiction in that way not only harms people by telling them that cocaine is not addictive; it also harms pain patients, because people who take opioids daily for pain will develop physical dependence, but they are not addicted unless they show compulsive behavior despite negative consequences.
CARL HART: Addiction. Typically we think of it as people exhibiting certain symptoms: They may exhibit tolerance to a substance. They may go through withdrawal when they don't have the substance. They may spend an increasing amount of time engaged in behavior to obtain or use the substance. They may have had a number of unsuccessful attempts to cut down their use of the substance. They may use despite the fact that they are having psychological or physical problems. These are the hallmarks of addiction.
MAIA SZALAVITZ: Addiction is a learning disorder because it can't occur without learning. You have to learn to associate the drug with some kind of relief or pleasure, and you need to do that repeatedly over time before you can become addicted.
CARL HART: With crack in the mid-1980s, one of the worst myths was that one hit and you are addicted for life. We saw that in the 1980s and we are seeing it again with methamphetamine today: 'One hit and you are addicted.' And it's simply not true. Addiction requires work. Not that people should go out and experiment or try this themselves, but the fact is that's a myth. And the concern is that it's dangerous, because when people perpetuate such myths, and then young people or others actually try methamphetamine or crack cocaine and find that that doesn't happen to them, they disregard everything that comes from these official sources.
MAIA SZALAVITZ: So the learning is involved where you learn that this works to fix a problem, and you basically then fall in love with the substance. And once you've fallen in love with somebody or something, you will persist despite negative consequences in order to sustain that relationship, because your biology is going to tell you that your life depends on this. The drug basically acts in a brain region that is involved in survival and reproduction, and those are the two fundamental purposes of biology. So that creates really, really strong cravings, and it changes your priorities tremendously. This is not a condition that affects just anybody; it affects people who are in some sort of emotional pain. Addiction kicks people who are already down.
In order to overcome addiction you need to figure out what purpose the addiction was serving. In my case I had a lot of depression and a lot of difficulty connecting with people. I was also sort of overwhelmed by my senses and emotions a lot of the time, and opioids turned that down very nicely. So I needed to figure out what was up and deal with those issues in order to be healthy and comfortable in recovery. And that's going to be different for different people, because they are going to have different issues that they are medicating with the drugs.
So, I think that the way to get beyond that and the way to help people with addiction is to understand that people with addiction are not seeking extra pleasure. They are not hedonists who are just out there having so much fun using that you can't stop them unless you put them in jail. People who get addicted, which are only 10 to 20 percent of the people who use drugs like cocaine and heroin and prescription opioids, those people have problems. The drugs seem like a solution to them. Until we recognize that those people are seeking, reasonably rationally, to deal with emotional and psychological problems and sometimes economic problems, we are not going to solve this problem. And one of the things that I think we're actually in denial about with regard to the opioid epidemic is that while big pharma certainly didn't do anything good here, to say that this is caused by Purdue Pharma selling OxyContin is to miss the fact that the people who are overwhelmingly becoming addicted are people who are either falling out of the middle class or never managed to get into it. If you actually look at the economics of this problem, it's not that middle-class people don't get addicted, and it's not that rich people don't get addicted. It's just that if your life is despair and you feel like it will never get any better, which is often the case when you lose the American Dream or you lose the hope for your future, opioids are going to become very attractive. And the idea that we can solve this by taking away the supply is just ridiculous. I mean, as soon as we started cracking down on the pill mills we started seeing a rise in heroin use. This is not an unpredictable outcome.
ETHAN NADELMANN: Why are some drugs legal and others illegal? Why are cigarettes and alcohol legal, pharmaceuticals in the middle, and these other drugs, like marijuana, illegal? Some people sort of inherently assume, 'Well, this must be because there was a thoughtful consideration of the relative risks of drugs.' But then you think, well, that can't be, because we know alcohol is more associated with violence than almost any illegal drug, and cigarettes are more addictive than any of the illegal drugs. Heroin addicts routinely say it's harder to quit cigarettes than it is to quit heroin. So, it's not as if some national academy of sciences decided a hundred years ago that these drugs had to be illegal and those ones legal. And it's not as if this is in the Bible or in the Code of Hammurabi. I mean, nobody was making legal distinctions among many of these drugs until the twentieth century, essentially. So, if you ask how and why this distinction got made, what you realize when you look at the history is it has almost nothing to do with the relative risks of these drugs and almost everything to do with who used and who was perceived to use these drugs.
So, back in the 1870s, the majority of opiate consumers were middle-aged white women throughout the country, using them for their aches and pains and the time of the month and menopause. There was no aspirin, there was no penicillin, there was lots of diarrhea because of bad sanitation, and nothing stops you up like opiates. I mean, a much higher percentage of the population back then used opiates than now, but nobody thought about criminalizing it because nobody wanted to put Auntie or Grandma behind bars. But then the Chinese started coming to the country in large numbers in the 1870s and 1880s, working on the railroads and working in the mines and working in factories, and then going back home at the end of the night to smoke up a little opium the way they did in the old country, the same way white people were having a couple of whiskeys in the evening. That's when you got the first opium prohibition laws, in Nevada and California in the 1870s and 1880s, directed at the Chinese minorities. It was all about the fear: 'What will those Chinamen with their opium do to our precious women, addicting them and seducing them and turning them into sex slaves?' and all this sort of stuff.
The first anti-cocaine laws were in the South in the early part of the twentieth century, directed at black men working on the docks, and the fear of 'What would happen to those black men when they took that white powder up their black noses and forgot their proper place in society?' That was the first time anybody ever said that a .38 would not bring down 'a negro crazed on cocaine,' that cops needed a .45. I mean, The New York Times, the paper of record, was reporting this stuff as fact back in those days. That's when you got the first cocaine prohibition laws.
The first marijuana prohibition laws were in the Midwest and the Southwest, directed at Mexican migrants and Mexican Americans, who were supposedly taking the good jobs from the good white people, going back home to their communities, and smoking a little of that funny reefer cigarette. And once again the fear: What would this minority do to our precious women and children? I mean, it's always been about that. And it wasn't as if white Americans weren't also consuming. It's just that many of them knew that when you criminalize a vice that is engaged in by a huge minority of the population, and you leave it inevitably to the discretion of law enforcement as to how to enforce those laws, those laws are not typically going to be enforced against the whiter and wealthier and more affluent or middle-class members of society. Inevitably those laws will be disproportionately enforced against the poorer and younger and darker-skinned members of society. So, to some very good extent that's really what the war on drugs has been about.
MAIA SZALAVITZ: So, this is where our laws come from and we have to be honest about that and we have to stop pretending that there is some kind of rational basis for the laws that we currently have. The reason that we continue to have these stereotypes about who drug users are is because of the ongoing racism of our society. And until we acknowledge that, like, I am the typical drug user if there is such a thing. I don't look like your stereotype, but that doesn't mean that the stereotype is accurate. So, I think that's a really important thing that people really have to learn because for too long the media has enabled the racist view of addiction and has enabled people to say, oh, I'm not the typical addict. And I used to say that and then I realized wow, that's kind of racist. And it comes from images that we shouldn't have ever had.
What I do think is interesting about the future of drugs is that we can make better drugs. Part of the reason that prohibition is collapsing at the moment is because of what are called new psychoactive substances or legal highs. And basically you can make a new recreational drug by tweaking molecules of the other ones and it will be technically legal because it hasn't been made illegal. And what this reveals is that our system for making drugs illegal is completely irrational and based on nineteenth-century prejudices. It has nothing to do with science.
This idea that we could use a drug that will block the effects of the drug of choice is generally misguided, because the problem isn't the drug of choice. The problem is why you need that drug, why those drugs appeal to you, why you are trying to escape, and what you need in your life in order to feel comfortable and safe and productive.
MAIA SZALAVITZ: I think the most important place to start is that addiction is a learning disorder. It's not a sign that you are a bad person. And if you want to have a safe and addiction-free or at least lower level addiction workplace or school, you want people to feel included and comfortable and safe and you don't want this to be an adversarial thing. The research shows that the best way to get people help is through compassion and empathy and support. And absolutely not tough love. Help them realize that this is not a sin. I am not trying to control you. What I want to do is for you to be at your best—at work, at home. And you're not being at your best right now, so what can we do to help?
And I have to say, it's almost never going to be easy because people whether they have addiction or mental illness or anything else going on with them, often don't want to admit to themselves that there's a problem. In the addictions field, there's been this whole thing, 'We've got to break through denial' and everything like that. Well, people have denial for good reasons. If we didn't have denial everybody would be sitting around obsessing about death—or at least I would be. It's a defense mechanism because we need defending. So, recognizing that can allow you to approach somebody not from an attacking stance; approach somebody from a befriending sort of stance. And that is hard to do and some people are going to get very defensive no matter what you do. And it's not going to be a pleasant conversation most of the time, but you can minimize harm. This whole thing always comes down to reducing harm, making things less unpleasant if you can't make them non-unpleasant.
And I think really important in getting people into any kind of treatment is that—and I always say this to parents or anybody who has an addictive loved one—the first step should always be a complete, thorough psychiatric evaluation by somebody who is not affiliated with any treatment organization. So that you can know going in what the problems may be and what kind of services you should be seeking.
I should say methadone and buprenorphine, the opioid agonists, are the best treatments that we have for opioid addiction, and what they do is two things. The first thing is they cut the death rate by 50 percent, and this happens whether you continue using on top or not. So that's sheer harm reduction and that's wonderful. If we can keep you alive long enough that you stabilize your life, that is a lot better than having you die. The other thing that they do is they allow people who are ready to stabilize their lives to do so. You couldn't tell right now if I was on a maintenance treatment or not, because basically once you get a tolerance to these drugs you are not high or impaired, and you can drive and you can work and you can love and you can do all of these things. What we don't understand is this: we think, oh, you've just substituted one addiction for another. No, what you've done is you've replaced compulsive behavior despite negative consequences with mere physical dependence, and that's not a real problem as long as you have a safe and legal supply.
We also have this idea that you can't provide these medications without also providing counseling, and we don't do that for any other medical service. We don't say, 'Oh, you can only get your insulin if you do X counseling on diet' or whatever. We realize that people need the tools to stay alive regardless of whether they're improving as quickly as we would like, and forced counseling doesn't actually help anyway. So, what we should do is have different thresholds for treatment. With buprenorphine, some people may just want to show up and get a dose and that's it. And that will work as sheer harm reduction. That should be available in emergency rooms. Then what we need to do is realize that you can't make policy based on, 'I think it's bad for you to have unearned pleasure.'
MAIA SZALAVITZ: You have to make policy based on: Does this hurt you, does this hurt other people? And that's where harm reduction comes from. The basic idea of harm reduction is: What policy will most reduce the harm related to drugs? And once you start to focus on harm, you have to look not only at harm associated with drugs, but harm associated with drug policy. And this is why so many harm reduction people rapidly become legalizers: because drug prohibition causes harm of its own and has not produced the results that people would like. It does not stop addiction. It does not prevent kids from using drugs. It puts the kids who do use drugs at higher risk of dying from them. It doesn't save society's productivity by keeping people from taking substances that will make them not work. It just doesn't work, and when you think about it, if addiction is defined as compulsive behavior despite negative consequences, and you're trying to use negative consequences in order to stop it, something is seriously wrong there.
So our drug policy has to acknowledge the reality that punishment doesn't fix addiction, that putting drug users in cages does nothing but worsen the problem, and that it doesn't deter kids. Kids are going to do stupid, risky things. You want to reduce the chances that those things will kill them. The idea that we can prevent adolescents from having sex or from doing some other kind of risky behavior is just absurd. This kind of risk-taking predates even the evolution of humans.
CARL HART: People who are young today won't be the same folks tomorrow. And so as new generations come about they have to find their own way, not only with drugs—they find their own way with fashion, they find their own way with the way they wear their hair. A wide range of domains in which they find their way, and drugs is just one of them. It's not special, it's not unique. You see some generations really being into LSD or into psychedelics in general whereas other generations are really into the stimulants. By that same token you found that some generations were into bell bottoms, other generations were into straight-leg pants. And so I think that as each generation finds their way they will also select their psychoactive intoxicants similarly.
JEFFREY MIRON: By trying to discourage people from using drugs and trying to prevent the genuinely unfortunate circumstances that sometimes happen because of drug use, we incur far worse negative outcomes, far worse costs, than would result simply from the use of drugs in a legal framework. So, what are all these adverse consequences of attempting to prohibit drugs? Well, to begin with, we don't actually eliminate drugs. We drive the market underground. And the underground market for drugs is violent, it's corrupt, it has poor quality control, and in the attempt to enforce prohibition we infringe civil liberties by basically shredding the Fourth Amendment to the Constitution. We reduce the ability of people who are sick to use drugs like marijuana or opiates freely to reduce pain, to relieve nausea from chemotherapy, and to treat a whole range of other symptoms. We interfere in other countries. The violence that we observe in Mexico and the profits underlying the Taliban in Afghanistan both result from the fact that we've driven drug markets underground, and so terrorist groups make a profit by selling their protection services to the drug traffickers. The drug traffickers get protection and the terrorists get profits, so that's another ancillary cost of trying to wage the war on drugs.
So my view is that if we had a fully legal market for all of these substances, we would observe roughly the same set of things we observe now for alcohol, for caffeine, for tobacco, and for other products which can be dangerous. We would see a large fraction of people use them in moderation, reasonably responsibly, with at most mild negatives for themselves or for others. We would see a small fraction who would misuse them in bad ways, but mainly they would adversely affect themselves, not the rest of society. And that's a far better balance. It is in no way, shape, or form a solution in the sense of eliminating all negatives, but it is a far better balance than the current policy of trying to prohibit drugs.
- "Why are some drugs legal and others illegal? ... if you ask how and why this distinction got made, what you realize when you look at the history is it has almost nothing to do with the relative risks of these drugs and almost everything to do with who used and who was perceived to use these drugs," says Ethan Nadelmann.
- In this video, Maia Szalavitz, public policy and addiction journalist; Carl Hart, professor of neuroscience and psychology at Columbia University; Ethan Nadelmann, founder of the Drug Policy Alliance; and Harvard University economist Jeffrey Miron dissect why American society's perceptions of drug addiction and its drug policies are so illogical.
- Drug addiction is not a moral failure and the stereotypes about who gets addicted are not true. Policy that is built to punish drug users for their immorality only increases harm and death rates.
A recent study used fMRI to compare the brains of psychopathic criminals with a group of 100 well-functioning individuals, finding striking similarities.
- The study used psychological inventories to assess a group of violent criminals and healthy volunteers for psychopathy, and then examined how their brains responded to watching violent movie scenes.
- The fMRI results showed that the brains of healthy subjects who scored high in psychopathic traits reacted similarly to those of the psychopathic criminal group. Both of these groups also showed atrophy in brain regions involved in regulating emotion.
- The study adds complexity to common conceptions of what differentiates a psychopath from a "healthy" individual.
When considering what precisely makes someone a psychopath, the lines can be blurry.
Psychological research has shown that many people in society have some degree of malevolent personality traits, such as those described by the "dark triad": narcissism (entitled self-importance), Machiavellianism (strategic exploitation and deceit), and psychopathy (callousness and cynicism). But while people who score high in these traits are more likely to end up in prison, most of them are well functioning and don't engage in extreme antisocial behaviors.
Now, a new study published in Cerebral Cortex found that the brains of psychopathic criminals are structurally and functionally similar to many well-functioning, non-criminal individuals with psychopathic traits. The results suggest that psychopathy isn't a binary classification, but rather a "constellation" of personality traits that "vary in the non-incarcerated population with normal range of social functioning."
Assessing your inner psychopath
The researchers used functional magnetic resonance imaging (fMRI) to compare the brains of violent psychopathic criminals to those of healthy volunteers. All participants were assessed for psychopathy through commonly used inventories: the Hare Psychopathy Checklist-Revised and the Levenson Self-Report Psychopathy Scale.
[Figure: Experimental design and sample stimuli. The subjects viewed a compilation of 137 movie clips with variable violent and nonviolent content. Credit: Nummenmaa et al.]
Both groups watched a 26-minute-long medley of movie scenes that were selected to portray a "large variability of social and emotional content." Some scenes depicted intense violence. As participants watched the medley, fMRI recorded how various regions of their brains responded to the content.
The goal was to see whether the brains of psychopathic criminals looked and reacted similarly to the brains of healthy subjects who scored high in psychopathic traits. The results showed similar reactions: When both groups viewed violent scenes, the fMRI revealed strong reactions in the orbitofrontal cortex and anterior insula, brain regions associated with regulating emotion.
These similarities manifested as a positive association: The more psychopathic traits a healthy subject displayed, the more their brains responded like the criminal group. What's more, the fMRI revealed a similar association between psychopathic traits and brain structure, with those scoring high in psychopathy showing lower gray matter density in the orbitofrontal cortex and anterior insula.
There were some key differences between the groups, however. The researchers noted that the structural abnormalities in the healthy sample were mainly associated with primary psychopathic traits, which are: inclination to lie, lack of remorse, and callousness. Meanwhile, the functional responses of the healthy subjects were associated with secondary psychopathic traits: impulsivity, short temper, and low tolerance for frustration.
Overall, the study further illuminates some of the biological drivers of psychopathy, and it adds nuance to common conceptions of the differences between psychopathy and being "healthy."
Why do some psychopaths become criminals?
The million-dollar question remains unanswered: Why do some psychopaths end up in prison, while others (or rather, people who score high in psychopathic traits) lead well-functioning lives? The researchers couldn't give a definitive answer, but they did note that psychopathic criminals had lower connectivity within "key nodes of the social and emotional brain networks, including amygdala, insula, thalamus, and frontal pole."
"Thus, even though there are parallels in the regional responsiveness of the brain's affective circuit in the convicted psychopaths and well-functioning subjects with psychopathic traits, it is likely that the disrupted functional connectivity of this network is specific to criminal psychopathy."
Counterintuitively, directly combating misinformation online can spread it further. A different approach is needed.
- Like the coronavirus, engaging with misinformation can inadvertently cause it to spread.
- Social media has a business model based on getting users to spend increasing amounts of time on their platforms, which is why they are hesitant to remove engaging content.
- The best way to fight online misinformation is to drown it out with the truth.
A year ago, the Center for Countering Digital Hate warned of the parallel pandemics — the biological contagion of COVID-19 and the social contagion of misinformation, aiding the spread of the disease. Since the outbreak of COVID-19, anti-vaccine accounts have gained 10 million new social media followers, while we have witnessed arson attacks against 5G masts, hospital staff abused for treating COVID patients, and conspiracists addressing crowds of thousands.
Many have refused to follow guidance issued to control the spread of the virus, motivated by beliefs in falsehoods about its origins and effects. The reluctance we see in some to get the COVID vaccine is greater amongst those who rely on social media rather than traditional media for their information. In a pandemic, lies cost lives, and it has felt like a new conspiracy theory has sprung up online every day.
How we, as social media users, behave in response to misinformation can either enable or prevent it from being seen and believed by more people.
The rules are different online
If a colleague mentions in the office that Bill Gates planned the pandemic, or a friend at dinner tells the table that the COVID vaccine could make them infertile, the right thing to do is often to challenge their claims. We don't want anyone to be left believing these falsehoods.
But digital is different. The rules of physics online are not the same as they are in the offline world. We need new solutions for the problems we face online.
Now, imagine that in order to reply to your friend, you must first hand him a megaphone so that everyone within a five-block radius can hear what he has to say. It would do more damage than good, but this is essentially what we do when we engage with misinformation online.
Think about misinformation as being like the coronavirus — when we engage with it, we help to spread it to everyone else with whom we come into contact. If a public figure with a large following responds to a post containing misinformation, they ensure the post is seen by hundreds of thousands or even millions of people with one click. Social media algorithms also push content into more users' newsfeeds if it appears to be engaging, so lots of interactions from users with relatively small followings can still have unintended negative consequences.
Additionally, whereas we know our friend from the office or dinner, most of the misinformation we see online will come from strangers. They will often belong to one of two groups: true believers, whose minds are made up, and professional propagandists, who profit from building large audiences online and selling them products (including false cures). Both of these groups use trolling tactics; that is, they seek to trigger people to respond in anger, which helps them reach new audiences and thereby game the algorithm.
On the day the COVID vaccine was approved in the UK, anti-vaccine activists were able to provoke pro-vaccine voices into posting about thalidomide, exposing new audiences to a reason to distrust the medical establishment. Those who spread misinformation understand the rules of the game online; it's time those of us on the side of enlightenment values of truth and science did too.
How to fight online misinformation
Of course, it is much easier for social media companies to take on this issue than for us citizens. Research from the Center for Countering Digital Hate and Anti-Vax Watch last month found that 65% of anti-vaccine content on social media is linked to just twelve individuals and their organizations. Were the platforms to simply remove the accounts of these superspreaders, it would do a huge amount to reduce harmful misinformation.
The problem is that social media platforms are reluctant to do so. These businesses have been built by constantly increasing the amount of time users spend on their platforms. Getting rid of the creators of engaging content that has millions of people hooked is antithetical to the business model. It will require intervention from governments to force tech companies to finally protect their users and society as a whole.
So, what can the rest of us do, while we await state regulation?
Instead of engaging, we should be outweighing the bad with the good. Every time you see a piece of harmful misinformation, share advice or information from a trusted source, like the WHO or BBC, on the same subject. The trend of people celebrating and posting photos of themselves or loved ones receiving the vaccine has been far more effective than any attempt to disprove a baseless claim about Bill Gates or 5G mobile technology. In the attention economy that governs tech platforms, drowning out is a better strategy than rebuttal.
Imran Ahmed is CEO of the Center for Countering Digital Hate.
A Harvard professor's study discovers the worst year to be alive.
- Harvard professor Michael McCormick argues the worst year to be alive was 536 AD.
- The year was terrible due to cataclysmic eruptions that blocked out the sun and the spread of the plague.
- 536 ushered in the coldest decade in thousands of years and started a century of economic devastation.
The past year has been among the worst in the lives of many people around the globe: a rampaging pandemic, dangerous political instability, weather catastrophes, and a profound change in lifestyle that most have never experienced or imagined.
But was it the worst year ever?
Nope. Not even close. In the eyes of the historian and archaeologist Michael McCormick, the absolute "worst year to be alive" was 536.
Why was 536 so bad? You could certainly argue that 1918, the last year of World War I when the Spanish Flu killed up to 100 million people around the world, was a terrible year by all accounts. 1349 could also be considered on this morbid list as the year when the Black Death wiped out half of Europe, with up to 20 million dead from the plague. Most of the years of World War II could probably lay claim to the "worst year" title as well. But 536 was in a category of its own, argues the historian.
It all began with an eruption...
According to McCormick, Professor of Medieval History at Harvard University, 536 was the precursor year to one of the worst periods of human history. It featured a volcanic eruption early in the year that took place in Iceland, as established by a study of a Swiss glacier carried out by McCormick and the glaciologist Paul Mayewski from the Climate Change Institute of The University of Maine (UM) in Orono.
The ash spewed out by the volcano likely led to a fog that brought an 18-month-long stretch of daytime darkness across Europe, the Middle East, and portions of Asia. As the Byzantine historian Procopius wrote, "For the sun gave forth its light without brightness, like the moon, during the whole year." He also recounted that it looked like the sun was always in eclipse.
Cassiodorus, a Roman politician of that time, wrote that the sun had a "bluish" color, the moon had no luster, and "seasons seem to be all jumbled up together." Creepier still, he wrote: "We marvel to see no shadows of our bodies at noon."
...that led to famine...
The dark days also brought a period of coldness, with summer temperatures falling by 1.5°C to 2.5°C. This started the coldest decade in the past 2,300 years, reports Science, leading to the devastation of crops and worldwide hunger.
...and the fall of an empire
In 541, the bubonic plague added considerably to the world's misery. Spreading from the Roman port of Pelusium in Egypt, the so-called Plague of Justinian caused the deaths of up to one half of the population of the eastern Roman Empire. This, in turn, sped up its eventual collapse, writes McCormick.
Between the environmental cataclysms, with massive volcanic eruptions also in 540 and 547, and the devastation brought on by the plague, Europe was in for an economic downturn for nearly all of the next century, until 640 when silver mining gave it a boost.
Was that the worst time in history?
Of course, the absolute worst time in history depends on who you were and where you lived.
Native Americans can easily point to 1520, when smallpox, brought over by the Spanish, killed millions of indigenous people. By 1600, up to 90 percent of the population of the Americas (about 55 million people) was wiped out by various European pathogens.
Like all things, the grisly title of "worst year ever" comes down to historical perspective.
Because of our ability to think about thinking, "the gap between ape and man is immeasurably greater than the one between amoeba and ape."
- Self-awareness — namely, our capacity to think about our thoughts — is central to how we perceive the world.
- Without self-awareness, education, literature, and other human endeavors would not be possible.
- Striving toward greater self-awareness is the spiritual goal of many religions and philosophies.
The following is an excerpt from Dr. Stephen Fleming's forthcoming book Know Thyself. It is reprinted with permission from the author.
I now run a neuroscience lab dedicated to the study of self-awareness at University College London. My team is one of several working within the Wellcome Centre for Human Neuroimaging, located in an elegant town house in Queen Square in London. The basement of our building houses large machines for brain imaging, and each group in the Centre uses this technology to study how different aspects of the mind and brain work: how we see, hear, remember, speak, make decisions, and so on. The students and postdocs in my lab focus on the brain's capacity for self-awareness. I find it a remarkable fact that something unique about our biology has allowed the human brain to turn its thoughts on itself.
Until quite recently, however, this all seemed like nonsense. As the nineteenth-century French philosopher Auguste Comte put it: "The thinking individual cannot cut himself in two — one of the parts reasoning, while the other is looking on. Since in this case the organ observed and the observing organ are identical, how could any observation be made?" In other words, how can the same brain turn its thoughts upon itself?
Comte's argument chimed with scientific thinking at the time. After the Enlightenment dawned on Europe, an increasingly popular view was that self-awareness was special and not something that could be studied using the tools of science. Western philosophers were instead using self-reflection as a philosophical tool, much as mathematicians use algebra in the pursuit of new mathematical truths. René Descartes relied on self-reflection in this way to reach his famous conclusion, "I think, therefore I am," noting along the way that "I know clearly that there is nothing that can be perceived by me more easily or more clearly than my own mind." Descartes proposed that a central soul was the seat of thought and reason, commanding our bodies to act on our behalf. The soul could not be split in two — it just was. Self-awareness was therefore mysterious and indefinable, and off-limits to science.
We now know that the premise of Comte's worry is false. The human brain is not a single, indivisible organ. Instead, the brain is made up of billions of small components — neurons — that each crackle with electrical activity and participate in a wiring diagram of mind-boggling complexity. Out of the interactions among these cells, our entire mental life — our thoughts and feelings, hopes and dreams — flickers in and out of existence. But rather than being a meaningless tangle of connections with no discernible structure, this wiring diagram also has a broader architecture that divides the brain into distinct regions, each engaged in specialized computations. Just as a map of a city need not include individual houses to be useful, we can obtain a rough overview of how different areas of the human brain are working together at the scale of regions rather than individual brain cells. Some areas of the cortex are closer to the inputs (such as the eyes) and others are further up the processing chain. For instance, some regions are primarily involved in seeing (the visual cortex, at the back of the brain), others in processing sounds (the auditory cortex), while others are involved in storing and retrieving memories (such as the hippocampus).
In a reply to Comte in 1865, the British philosopher John Stuart Mill anticipated the idea that self-awareness might also depend on the interaction of processes operating within a single brain and was thus a legitimate target of scientific study. Now, thanks to the advent of powerful brain imaging technologies such as functional magnetic resonance imaging (fMRI), we know that when we self-reflect, particular brain networks indeed crackle into life and that damage or disease to these same networks can lead to devastating impairments of self-awareness.
I often think that if we were not so thoroughly familiar with our own capacity for self-awareness, we would be gobsmacked that the brain is able to pull off this marvelous conjuring trick. Imagine for a moment that you are a scientist on a mission to study new life-forms found on a distant planet. Biologists back on Earth are clamoring to know what they're made of and what makes them tick. But no one suggests just asking them! And yet a Martian landing on Earth, after learning a bit of English or Spanish or French, could do just that. The Martians might be stunned to find that we can already tell them something about what it is like to remember, dream, laugh, cry, or feel elated or regretful — all by virtue of being self-aware.
I find it a remarkable fact that something unique about our biology has allowed the human brain to turn its thoughts on itself.
But self-awareness did not just evolve to allow us to tell each other (and potential Martian visitors) about our thoughts and feelings. Instead, being self-aware is central to how we experience the world. We not only perceive our surroundings; we can also reflect on the beauty of a sunset, wonder whether our vision is blurred, and ask whether our senses are being fooled by illusions or magic tricks. We not only make decisions about whether to take a new job or whom to marry; we can also reflect on whether we made a good or bad choice. We not only recall childhood memories; we can also question whether these memories might be mistaken.
Self-awareness also enables us to understand that other people have minds like ours. Being self-aware allows me to ask, "How does this seem to me?" and, equally importantly, "How will this seem to someone else?" Literary novels would become meaningless if we lost the ability to think about the minds of others and compare their experiences to our own. Without self-awareness, there would be no organized education. We would not know who needs to learn or whether we have the capacity to teach them. The writer Vladimir Nabokov elegantly captured this idea that self-awareness is a catalyst for human flourishing:
"Being aware of being aware of being. In other words, if I not only know that I am but also know that I know it, then I belong to the human species. All the rest follows — the glory of thought, poetry, a vision of the universe. In that respect, the gap between ape and man is immeasurably greater than the one between amoeba and ape."
In light of these myriad benefits, it's not surprising that cultivating accurate self-awareness has long been considered a wise and noble goal. In Plato's dialogue Charmides, Socrates has just returned from fighting in the Peloponnesian War. On his way home, he asks a local boy, Charmides, if he has worked out the meaning of sophrosyne — the Greek word for temperance or moderation, and the essence of a life well lived. After a long debate, the boy's cousin Critias suggests that the key to sophrosyne is simple: self-awareness. Socrates sums up his argument: "Then the wise or temperate man, and he only, will know himself, and be able to examine what he knows or does not know… No other person will be able to do this."
Likewise, the ancient Greeks were urged to "know thyself" by a prominent inscription carved into the stone of the Temple of Delphi. For them, self-awareness was a work in progress and something to be striven toward. This view persisted into medieval religious traditions: for instance, the Italian priest and philosopher Saint Thomas Aquinas suggested that while God knows Himself by default, we need to put in time and effort to know our own minds. Aquinas and his monks spent long hours engaged in silent contemplation. They believed that only by participating in concerted self-reflection could they ascend toward the image of God.
A similar notion of striving toward self-awareness is seen in Eastern traditions such as Buddhism. The spiritual goal of enlightenment is to dissolve the ego, allowing more transparent and direct knowledge of our minds acting in the here and now. The founder of Chinese Taoism, Lao Tzu, captured this idea that gaining self-awareness is one of the highest pursuits when he wrote, "To know that one does not know is best; Not to know but to believe that one knows is a disease."
Today, there is a plethora of websites, blogs, and self-help books that encourage us to "find ourselves" and become more self-aware. The sentiment is well meant. But while we are often urged to have better self-awareness, little attention is paid to how self-awareness actually works. I find this odd. It would be strange to encourage people to fix their cars without knowing how the engine worked, or to go to the gym without knowing which muscles to exercise. This book aims to fill this gap. I don't pretend to give pithy advice or quotes to put on a poster. Instead, I aim to provide a guide to the building blocks of self-awareness, drawing on the latest research from psychology, computer science, and neuroscience. By understanding how self-awareness works, I aim to put us in a position to answer the Athenian call to use it better.