Big Think Interview With Paul Bloom
Paul Bloom is the Brooks and Suzanne Ragen Professor of Psychology at Yale University. An internationally recognized expert on the psychology of child development, social reasoning, and morality, he has won numerous awards for his research, writing, and teaching. Bloom’s previous books include Just Babies: The Origins of Good and Evil and How Pleasure Works: The New Science of Why We Like What We Like, and he has written for Science, Nature, The New York Times, and The New Yorker.
Question: What challenges arise in doing research with young children?
Paul Bloom: I think there's tremendous insight to be gained by looking at babies and young children. It's basically a way of seeing human nature before it gets tainted and corrupted by culture and, if you test them young enough, before it even gets affected by language. So it's a way of seeing human nature in a very direct and sort of untainted way. But it's really difficult working with kids and with babies because they are not cooperative subjects; they are not socialized into the idea that they should cheerfully and cooperatively give you information. They're not like undergraduates, who you can bribe with beer money or course credit. And so you need to be somewhat clever in designing studies that tap their knowledge. You have to tap their knowledge in indirect and sometimes kind of interesting and subtle ways. And even when you do that, you know, some of them are just going to run away from you. Some babies are going to fall asleep or cry. Some kids are going to think it's hilarious to answer every question with the opposite of what they believe to be true.
And so you have to work around that in all sorts of ways. But as a developmental psychologist, I'm committed to the idea that the benefits outweigh the costs.
Question: How has your research affected your parenting and vice versa?
Paul Bloom: It's funny. I'll be honest: I don't think anything I've ever learned as a developmental psychologist, as a scientist, has affected how I treat my kids. I think that at this stage of the game, there's a real disjunct between what the science tells us and actual practical applications, and I would distrust someone who told you otherwise.
But there's been a lot going in the other direction. I mean, having kids has proven to be, for me, this amazing source of ideas, of anecdotes, of examples. I can test my own kids without human-subjects permission, so I pilot my ideas on them. And so it is a tremendous advantage to have kids if you're going to be a developmental psychologist.
Question: Why does religious belief exist?
Paul Bloom: Most humans are religious. If you ask most people, they'll tell you they belong to one or another religion. The most common religion on earth is Christianity, with Islam a close second. And then there's just all sorts of religions. They differ in many ways, both in the beliefs that they have and in their practices, but they share certain common properties. So, all religions believe in some sort of supernatural entities, creatures without bodies but with minds, like gods and spirits and ghosts and angels. All religions believe that one or more of these supernatural beings created the earth and created animals and created us. And they all believe, in different forms, that we can survive the death of our body, that we are immortal.
And I'm very interested in where these beliefs come from, why they exist at all. And you can imagine different extremes. Some scholars argue that they're social constructions, they're inventions of culture, and that's why they're universal. Others argue they're biological adaptations: they exist because of the selective advantage they gave to our ancestors.
I have a view which is different from both of those. I think they're accidents. I think they're accidental byproducts of cognitive systems that we've evolved for different reasons. More specifically, we've evolved a highly powerful social cognition: a highly powerful cognitive mechanism for thinking about the mental states of others and evaluating them and judging them. And I think that the system is so powerful that it sometimes leads to certain unpredicted byproducts, things that we haven't evolved to do. So, for instance, we're highly animistic. We see consciousness and agency and humanity all over, even when, you know, a scientist would tell us it doesn't exist. We're natural creationists, in that when we see structure in the world, like a tree or a tiger, we think somebody must have made that; again, we're on the lookout for design. And I think we're natural-born dualists. And what I mean by that is that we see our minds as inherently separate from our bodies. And so this makes possible the idea that our mind could survive the destruction of our body, or that it could go to another body, as in reincarnation.
So, I think that these three habits of mind, animism, creationism, dualism, are present in all of us. They're not biological adaptations; they're accidents. But I think they're what make religious belief attractive and plausible and universal.
Question: What connection, if any, exists between religion and morality?
Paul Bloom: Many people, many religious people, many people in general, think that religion plays an important role in one's moral life. The very strong view, which I think nobody can take seriously, is that without religion we'd be monsters, we wouldn't care about other people. And the existence of atheists, who don't rape and murder and so on, seems to falsify that claim. But what about a more subtle claim, which is that religion on the whole makes you nicer? Well, this isn't a crazy theory; there's actually some statistical evidence in favor of it. So, for instance, religious people in the United States tend to give more to charity than atheists, even when you factor out religious charities, even when you look at things like donating blood or giving to the homeless.
And so you might think that religion is sort of ramping up your niceness in general, but I think there's another explanation, which is that religious people in the United States are part of the vast majority, and they tend to be happier and they tend to be parts of communities to a greater extent than atheists. And it might be the happiness and the community nature that explains this difference in giving, not religion per se.
And the reason why I believe this to be correct is that you now have societies in Europe, like the Scandinavian countries, that are quite a bit more atheist, far more than the United States, and if religion makes you nice, you would expect these societies to be monstrous: high rates of crime, of exploiting one another. And none of that's true. In fact, these are swell places to live. These atheist communities are filled with people who don't tend to murder each other, don't tend to rape each other, don't tend to have all sorts of social ills that the United States has.
So I think the relationship between religion and morality is going to be complicated and interesting. I think it won't be hard to find places where religion corrodes morality, where religious belief leads people to reject their gut, and appropriate, moral feelings and do monstrous things. Religion is an ideology, and all ideologies have that power. But for the most part, I think people can be religious and good and be atheists and good, and that's what the data show us.
Question: How does your idea of “common-sense dualism” square with your idea of multiple inner selves?
Paul Bloom: The claim about dualism is that we naturally see the world as having physical bodies, things like chairs and tables and our own bodies, and immaterial souls, things that have beliefs and desires and consciousness. And the fact that we're dualists makes all sorts of beliefs possible. It makes religious beliefs possible, because it makes it possible that we can survive the death of our body. It shapes how we think about morality, in that often when we judge whether something has moral status, when we're thinking about a non-human animal, or a fetus, or an embryo, we might ask, does it or does it not have a soul? This is how people often frame moral issues. So that's my claim about dualism.
My claim about multiple selves is consistent with this, but it takes it in a bit of a different direction. The claim about multiple selves is that, as a matter of fact, and this may not be common sense, but as a matter of fact, within each of us there are different agents fighting it out for control, and often these can clash. So a standard example that behavioral economists often use is that of someone who wants to diet. It's not, they would say, that you want cake but don't have the will to stop yourself from eating it. Rather, you could think of it as two agents in your head. One is the cake eater, who lies pretty dormant when there's no cake around. The other is the dieter, who doesn't want to have cake. But when there's cake around, the cake eater rises in power and can dominate the dieter. Now, what does it buy you to think in these terms? Well, it lets you make sense of the fact that one self can try to thwart the other self. So someone trying to diet may block the cake eater from access to cake by not buying cake, or by punishing himself for eating cake, and so on. And a lot of addictions and compulsions can be seen in terms of a clash, can be seen on analogy with a clash between two people with different desires. So I think this is consistent with dualism, but it's not the same thing.
Dualism is a claim about our conception of the world. The multiple selves view is a claim about how things really work.
Question: How do these internal divisions inform our sense of moral reasoning?
Paul Bloom: It's an interesting question how to reconcile what we know about people's moral actions, how morality works and why people choose to do good and evil, with our common-sense notion. What I'm most interested in, in my own research on adults and kids, is the common-sense notion. And the common-sense notion very clearly involves notions of good and evil. So universally, everybody on earth sees some people around them as nice, decent, fair, honorable, and others as jerks, as bastards. And we believe the good people should be rewarded and the bad people should be punished. So you have that as a human universal.
What's interesting then is where does this universal come from? And I think we can be informed there by evolutionary thought. So a caricature view of Darwinian evolution says we've evolved to be entirely self-interested, perhaps interested as well in our family, but not beyond that. But over the last many years, there's been a more subtle and complicated notion of evolution, which grants that some moral intuitions and moral actions might themselves be biological adaptations. We might have evolved them because they're useful things to have for creatures like us that live in small interconnected groups.
Now, if that's right, then you should expect moral intuitions and moral actions to show up early on in development, before you get exposed to or immersed in the culture. And this is some of the research I'm doing, some of it in collaboration with colleagues, including my wife, Karen Wynn, who runs the infant lab at Yale University, as well as with Kiley Hamlin, who works on these projects. And what we do is show babies different situations. So, for instance, we might show babies a situation where somebody's trying to get up a hill and one character helps it up the hill. Then the same thing: it's trying to get up a hill, and another character pushes it down the hill. And then we see, who do babies like more? Do they like the one who helped it or the one who hindered it? We find, to a tremendous extent, that babies long before they reach their first birthday prefer the character that helped the guy up the hill and don't like the one that pushed him down the hill.
What about reward and punishment? Well, in some recent work that we just completed, we find that, again, one-year-olds, kids who've just turned one, want to reward the character who helped the other guy up the hill and want to punish the character who pushed the guy down the hill. It gets more sophisticated than that. What if you show babies these interactions and then show them someone else rewarding the good guy and punishing the bad guy? We find babies like such a person. But what if somebody punishes the good guy and rewards the bad guy? Babies hate that person and believe that that person should be punished, too, for his misapplication of reward and punishment.
Now, I don't think babies know all there is to know about morality. But I think this work, along with a lot of other work that coincides with it, suggests that some fundamental moral understanding is there from the get-go, part of our evolved understanding, our evolved social capacity to deal with other people.
Question: How does disgust inform our moral reasoning?
Paul Bloom: There's a big debate in the field over how we make our moral judgments. And a lot of scientific debates are fairly abstract and don't connect to public policy, but this one matters, because it asks questions like, what underlies our intuitions about abortion? Or gay marriage? Or the war in Afghanistan? And a lot of psychologists push a very hard line here, arguing that our moral intuitions are driven by gut feelings, by gut emotional reactions. We might tell these elaborate stories, "Oh, I think gay marriage is fine because I can give a bunch of arguments," but the arguments really have nothing to do with why I've come to my opinion. I've come to my opinion because of my feelings of empathy or anger or disgust, because I want to affiliate with other people around me and they share that opinion, because of how I was raised. Many psychologists believe rationality is irrelevant to adult moral judgments. I think that's mistaken. I think that's way too strong. I think that there's demonstration after demonstration of how people's rational, deliberative thoughts can actually shift their moral views, lead to a moral decision, lead to moral action, cause people to change their minds and then change the minds of others. I think unless you allow for an important role for rationality, you're unable to explain moral progress or moral change. You're unable to explain why, right now, you and I have all sorts of different views about gay marriage, racism, slavery than people had 100 years ago, even 50 years ago. So I'm a big fan of rationality.
But a lot of my research does address the question of the role of the emotions in our moral judgment and our moral decision making, and again, we typically use people's attitudes toward homosexuality and gay marriage as a case study. This is some work I've done with David **** at Cornell University, as well as with other colleagues, and what we find is that the emotion of disgust plays a very strong role in one's feelings about gay marriage and about gay people more generally. We all have varying disgust sensitivity: we're all grossed out by some things and not others, and some of us are grossed out more easily than others. We find that those of us who are easily grossed out tend to be more morally disapproving of certain human activities, including homosexuality.
You can also show this in the lab. So, following some work by Jonathan Haidt and his colleagues, what we did was put people in the lab and ask them questions about all sorts of activities, including their attitudes about gay marriage. For half the people, we'd just put them in a lab. For the other half, we evoked disgust. We did that, actually, by spraying a fart spray in the room, so people got a bit grossed out. Being grossed out makes you meaner, it makes you less approving, it makes you sterner in certain ways.
So, my view on the interplay between reason and emotion in adult moral judgment is that it's a complicated and very rich story. But it seems inescapable that our emotions play a role in our moral judgment, even in cases where we might not know this is happening, and even in cases where we'd rather it wasn't happening.
Question: What do young children understand about art?
Paul Bloom: If you read a developmental psychology textbook from 10 years ago, or if you just asked a psychologist, "What do kids know about art?" what they would say is, "Children start off as realists about art." So, you know, if they see a picture of a bunny that looks like a bunny, they'll say, "bunny," because it looks like a bunny. When they draw pictures and name them, they'll name them based on what they look like. Kids start off as realists.
Now, adults can do all sorts of crazy things. Adults can have abstract art; we can understand that a scribble can be a person; we can understand caricature and nonrepresentational art. But that's adult stuff. Kids start off simple and representational. And I believed that, until I had kids of my own. The case I remember that got me interested in art in the first place is that my son, Max, did a painting, and, he's a horrible artist, just like me, it was just a scribble of colored paints, and he held it up to me and I said, "What is it?" And he said, "It's an airplane." It didn't look at all like an airplane. And then I got to thinking, and I realized children draw all the time and they name their artwork, but their artwork doesn't look like anything until they get old enough to be more proficient. So I wondered what was going on.
And the idea which I explored, with a then-student of mine, Lori Markson, is whether children, when they're naming pictures, are actually doing something extremely smart. When they're naming pictures, they're not looking at what the picture looks like; rather, they're trying to figure out what the picture was intended to be. So for their own pictures, it's easy. If Max was intending to draw an airplane, then it was an airplane; it didn't matter what it looked like. We did experiments finding this is true even for their understanding of pictures made by adults. This is with, like, two-year-olds. You look at an object, you make a picture, but it's just a scribble; you show it to the kid and say, "What's this?" The kid will say, "It's that thing over there." They're smart enough to realize that pictures are of what the artist wants them to be.
And I think we should take the common-sense, traditional notion of what kids know about art and flip it around. I think kids start off with a very abstract understanding of art, linked up with notions like artistic intention. And it's only later that they understand the conventions of realism. It's only later that they might get the idea that for something to be a picture of a bunny, it has to look like a bunny.
Question: Do adults take the same approach as children to understanding abstract art?
Paul Bloom: The motivation for a lot of this work on children comes from philosophical analysis of what adults do when appreciating art. So when adults try to make sense of an artwork, we often ask ourselves, even though critics sometimes say we shouldn't, "What was going on in the artist's mind? What was he intending to do? What did she want to depict?" And this is just a core part of how we name artwork, how we categorize artwork, and how we appreciate artwork. Philosophers like Arthur Danto point out that the very notion of what art is, as opposed to other things, is based on artistic intent; think of Duchamp's Fountain, which looks a lot like a urinal but is viewed as an artwork. Something can get transformed from an everyday object to an artwork under the right circumstances, based on what people want it to be.
And in fact, in some other research I've done with children, we find that if you show children a canvas with paint splashed on it and, in one group, tell the kids, "Somebody spilled paint on this, what is it?" they'll say, "It's a mess, it's whatever." But if you tell the same kind of kids, "Somebody worked very hard on this for hours and hours," what they might say is, "It's a painting." Their notion of what a painting is doesn't rest just on what something looks like; rather, it rests on their belief about how it was created.
Question: Why do humans enjoy fiction?
Paul Bloom: Fiction is such a puzzle. You know, you'd expect that, as good Darwinian creatures, we would evolve to be fascinated with how the world really is, that we would use language to convey real-world information, that we'd be obsessed with knowing the way things are, and that we would entirely reject stories that aren't true. They're useless. But that's not the way we work. We love stories. Most humans spend, I think, much of their day-to-day life reading, watching TV, going to the movies, or daydreaming. Imagination is our favorite leisure activity. That's a real puzzle.
So the question is, why do we get pleasure from doing this? And I think there are two main reasons. One reason is that our system that scans reality and gets pleasure from reality is not perfect, and it can be deceived. And what fiction often is, is a form of purposeful deception. If I really would enjoy winning the World Series of Poker, then what I can do to give myself pleasure is to imagine winning the World Series of Poker, and that will press the same pleasure buttons that really winning it would. Not to the same extent, it's always more pallid to imagine something than to experience it, but to some extent. If somebody enjoys sexual intercourse, they would probably prefer to have sexual intercourse, but failing that, they could turn to pornography, which would stimulate their mind in some ways as if it were the thing they're aspiring to.
And so one reason why fiction is pleasurable is that it's reality-like. Often we seek out in fiction pleasurable experiences that we want to have in the real world, and it's close enough to be enjoyable.
But that only explains some of fiction. Some of the stories we like, and this is a real puzzle, are unpleasant. People are drawn to tragedies and are drawn to horror movies. Now, it's not hard to explain why I might like to see a movie where a professor wins a million dollars and everything: "Oh, that's great, I can imagine myself in that situation." Why would I want to see a movie where a professor is captured by a sadistic axe murderer and chopped to bits? Why would I seek out something so unpleasant? I think some of what goes on in fiction is that we also use it as a way to imagine alternative worlds, and we imagine certain sorts of alternative worlds, we imagine worst-case scenarios, and we do this because there's an adaptive value to exploring in the imagination possibilities that might occur, as a way to prepare for them, to plan for them. Any athlete thinks ahead: well, what's going to happen if this happens? What's going to happen if that happens? And these alternatives aren't always pleasant ones. Anybody preparing for a **** or preparing for combat always imagines situations that are bad situations and does this because it helps prepare for them when they really occur. And I think that's what a lot of fiction is: the imagining of the worst so as to prepare ourselves.
Now, developmentally, I'm interested in the origin of this appreciation of fiction. We know very young children like stories. We know, and this is some work I've done with Deena Weisberg, who's now ****, that children's understanding of stories is, again, very sophisticated. So they know that their best friend is real and Batman is make-believe. They also know that Batman thinks that SpongeBob SquarePants is make-believe, because those are in different story worlds, but Batman thinks Robin is real, because they're in the same story world. Incredibly sophisticated knowledge. And a lot of my work has explored what children know about fiction.
What I'm also interested in, more and more recently, is why children enjoy fiction. And I think in part it is that they enjoy fiction that simulates real-world pleasures. But I think they also enjoy fiction that exposes them to worst-case scenarios. I think that even young children are drawn, to some extent, to tragedy and horror. And we're doing some experiments in my lab to see whether children will actually choose an unpleasant fiction over a pleasant fiction under the right circumstances. And our prediction is that they will.
Question: Do stories help reconcile what you’ve called our multiple inner selves?
Paul Bloom: So a different function of fiction that people have been interested in is as a way to give voice to different parts of ourselves. To simulate not just situations, but different personalities and different characters. And this shows up in its sharpest form in more modern situations where people can create alternative selves, as when we become avatars in a computer game, or when we do various forms of role playing online. And what people will often do in these situations, like Second Life, for instance, is create a persona and explore it. If I'm a man, I might explore what it would be like to be a woman. If I'm passive, I might explore what it's like to be aggressive. And this serves, I think, a useful function in that it's a way, to put it perhaps metaphorically, of giving voice to the multiple parts of ourselves. And that can help us later figure out how to deal with these things, how to cope with them.
So you see this all the time in psychodynamic and psychoanalytic exercises, where you speak in different voices. But I think any kid who goes onto Second Life or World of Warcraft, or even just, you know, any blogger, gets a chance to explore different facets of him or herself.
Question: What learning capacities do we lose after childhood?
Paul Bloom: So one interesting question is, in what ways are children superior to adults? What gifts and capacities do children have that adults lack? And I think there are two ways of answering that question that give intersecting answers. One is evolutionary: what would you expect a child to be good at, given that what childhood is, is a period before maturity where you get everything up to speed? The second is developmental psychology and observation: what do we see children do that's really good and better than adults? And I think one answer, for instance, is language. Children are better at learning language than adults, because that's the main task of childhood. By the time you're an adult, you had better know a language, and there's not much evolutionary pressure to wire up your brain to learn more; you're done, you should know it by then.
Children are, I think, better learners regarding motor skills, possibly regarding certain aspects of social interaction. We'd be really screwed if we had to start our life over again as children with our brains right now, because I think we lose the plasticity and flexibility.
One claim, which I'm not sure about, is that children are better pretenders and better players than adults. There's a romantic notion that children know how to play and know how to pretend, and that we lose this as adults. But I look around at my own life and the lives of my friends and of other people, and all I see is play and pretend. I see people, you know, playing video games, going to movies, reading books, doing all sorts of things. So I'm not sure that the play part ever goes away; I think it might be just as strong in adults as it is in kids.
Question: What is the concept behind “How Pleasure Works”?
Paul Bloom: “How Pleasure Works” explores the pleasures of everyday life. So it starts off with seemingly simple pleasures, like food and sex and love, and then it goes on to pleasures like the pleasure we get from owning certain objects, the pleasure we get from reading books and going to movies, the pleasure we get from art, even the pleasure of religious ritual. And I focus on these pleasures in all sorts of ways and say different things about them. But the main argument of the book is that pleasure is deep. What I mean by that is that when we get pleasure from something, we don't just resonate to its surface, superficial appearance; we respond instead to what we think that thing really is: where it came from, what its history is, what's inside it. So for sex, it critically matters for sexual arousal and sexual interest who you think the person is. Do you think it's a man or a woman? Is it a relative or a stranger? How old is that person? What's that person's sexual history? All of those factors, invisible to the eye, have a deep effect on sexual interest. For food, it matters enormously what you think you're eating. Not just in the abstract sense of what you choose to buy or what you choose to put in your mouth, but in how it tastes. The price of food, how natural it is, how healthy it is, are all considerations that affect your taste experience of it. Maybe the most practical application of this is wine. There are now several studies showing that the more expensive you believe a bottle of wine to be, the better it will taste to you.
In some of my own work, I've looked at the pleasure of consumer products. So people will pay more for something, and will enjoy it more, if they believe it was owned by a celebrity, if they believe it was touched by a celebrity. In fact, a way to make such an object lose value is to wash it thoroughly. People don't want it washed, because it's as if washing removes the essence of the person who touched it.
I got into this work because I'm interested in a claim that cognitive psychologists and philosophers have made about the human mind, which is that we're common-sense essentialists. When we understand something, like when we name it or categorize it, or choose what to do with it, we don't just respond to what it looks like; we respond to what we believe its deeper essence is. Now, this has commonly been applied to more cold-blooded tasks, like naming and categorization. I wrote this book because I was interested in the idea that essentialism applies more generally, and that essentialism can help explain some of the mysteries of our everyday pleasures.
Question: Why can our pleasure in something change when its essence doesn’t?
Paul Bloom: I think essentialism is an important part of the story, and it's something which common theories of pleasure tend to miss. But I wouldn't argue for a minute against other factors that affect pleasure. So one factor that affects pleasure is simple experience. Faces, for instance, will look more attractive the more often you see them, something psychologists have called the mere exposure effect. When it comes to music and art, you often get what looks like a sort of inverted U-shaped curve. So when you hear a new song, you might not like it that much; then the more you hear it, the more you like it, until it reaches a certain point where boredom kicks in, and now you don't like it anymore. And this sort of inverted U-shaped curve shows up for all sorts of things, from the foods we eat to the books we read to the people we encounter.
So I don't doubt for a minute that there are low-level processes like mere exposure, and that simple sensory pleasures exist. But the argument I make throughout the book is that we tend to overestimate how much of our pleasure is determined by these simple processes. People believe, for instance, that when they taste wine, the pleasure they get is due to the chemical composition of what they're drinking, due to what's hitting their tongue and their nose. And when you tell them that's not true, that their pleasure is easily manipulated by telling them different things about the wine, like where it came from and how much it cost, people balk at this. They say, "That might work for other people, but not for me; I taste the wine." But one of the great surprises, I think, from psychology is the extent to which our everyday experiences are shaped by our beliefs, even in cases where we're unconscious that we have these beliefs and that they are playing that role.
Question: To what extent are our likes and dislikes innate?\r\n
Paul Bloom: It's a very hard question, looking at human differences, to try to explain their origin. So some people like cheese; I don't like it at all. Some people are gay, others are straight. Some people have radically different tastes in movies and books, in artwork, in ritual. And I'll admit, for the most part, these are mysterious. There's not much evidence that these tastes can be substantially shaped by your upbringing. There's also not much evidence that these tastes are genetically determined. So I think what you have is a picture that goes like this. We have innate universals of taste. Everybody likes a melody of some sort, everybody responds in some way to sexual stimuli, every baby likes the taste of sweet milk, everybody likes art. But then you get these differences, differences across cultures, different cultures have different music, and differences within cultures. Your taste in music isn't going to be the same as my taste in music. And this is largely mysterious. One of the great puzzles of modern psychology is to explain these individual differences, and I think we're largely at a loss.\r\n
Question: What’s the most unusual or exciting project you’re working on now?\r\n
Paul Bloom: There's a lot going on in my lab that I'm very, very excited about. I'll tell you about one study that's kind of cool, and it's with a graduate student named ****. We're interested in whether children will punish other people. Now that much we actually know, that they do. But the further question is, will they punish other people and suffer to do it? So the design that we came up with is, we put children in a room, we're testing four- and five-year-olds, and we show them somebody in another room who behaves horribly, she knocks over someone else's blocks, she laughs, she's terrible. And that person says, "I can't wait for my broccoli, I have a big plate of broccoli coming, I can't wait for it." And then the experimenter goes over to the child with a big plate of broccoli and says, "I'm going to bring this over to that person, you want to eat some of it?" And it's set up so that children know that if they eat it, that person won't get their broccoli. And we also know that the kids hate the broccoli, for the most part; we test them to see if they hate the broccoli. The question we're interested in is, will they force broccoli down their own throats to punish a stranger? Are they that instinctively spiteful? And right now, the data looks promising. We have some lovely film clips of children cramming broccoli into their mouths, almost weeping, because they hate the stuff, so as to stop this other person from getting it.\r\n
And I think this is just part of a line of studies that show how powerful our moral impulses are, how deep the desire to reward the good and punish the bad runs in all of us, including young children.
Recorded on November 20, 2009
Interviewed by Austin Allen
Permanent flooding has become commonplace on this low-lying peninsula, nestled behind North Carolina's Outer Banks. The trees growing in the water are small and stunted. Many are dead.
Throughout coastal North Carolina, evidence of forest die-off is everywhere. Nearly every roadside ditch I pass while driving around the region is lined with dead or dying trees.
As an ecologist studying wetland response to sea level rise, I know this flooding is evidence that climate change is altering landscapes along the Atlantic coast. It's emblematic of environmental changes that also threaten wildlife, ecosystems, and local farms and forestry businesses.
Like all living organisms, trees die. But what is happening here is not normal. Large patches of trees are dying simultaneously, and saplings aren't growing to take their place. And it's not just a local issue: Seawater is raising salt levels in coastal woodlands along the entire Atlantic Coastal Plain, from Maine to Florida. Huge swaths of contiguous forest are dying. They're now known in the scientific community as “ghost forests."
Deer photographed by a remote camera in a climate change-altered forest in North Carolina. Emily Ury, CC BY-ND
The insidious role of salt
Sea level rise driven by climate change is making wetlands wetter in many parts of the world. It's also making them saltier.
In 2016 I began working in a forested North Carolina wetland to study the effect of salt on its plants and soils. Every couple of months, I suit up in heavy rubber waders and a mesh shirt for protection from biting insects, and haul over 100 pounds of salt and other equipment out along the flooded trail to my research site. We are salting an area about the size of a tennis court, seeking to mimic the effects of sea level rise.
After two years of effort, the salt didn't seem to be affecting the plants or soil processes that we were monitoring. I realized that instead of waiting around for our experimental salt to slowly kill these trees, the question I needed to answer was how many trees had already died, and how much more wetland area was vulnerable. To find answers, I had to go to sites where the trees were already dead.
Rising seas are inundating North Carolina's coast, and saltwater is seeping into wetland soils. Salts move through groundwater during phases when freshwater is depleted, such as during droughts. Saltwater also moves through canals and ditches, penetrating inland with help from wind and high tides. Dead trees with pale trunks, devoid of leaves and limbs, are a telltale sign of high salt levels in the soil. A 2019 report called them “wooden tombstones."
As the trees die, more salt-tolerant shrubs and grasses move in to take their place. In a newly published study that I coauthored with Emily Bernhardt and Justin Wright at Duke University and Xi Yang at the University of Virginia, we show that in North Carolina this shift has been dramatic.
The state's coastal region has suffered a rapid and widespread loss of forest, with cascading impacts on wildlife, including the endangered red wolf and red-cockaded woodpecker. Wetland forests sequester and store large quantities of carbon, so forest die-offs also contribute to further climate change.
Researcher Emily Ury measuring soil salinity in a ghost forest. Emily Bernhardt, CC BY-ND
Assessing ghost forests from space
To understand where and how quickly these forests are changing, I needed a bird's-eye perspective. This perspective comes from satellites like NASA's Earth Observing System, which are important sources of scientific and environmental data.
A 2016 Landsat 8 image of the Albemarle-Pamlico Peninsula in coastal North Carolina. USGS
Since 1972, Landsat satellites, jointly operated by NASA and the U.S. Geological Survey, have captured continuous images of Earth's land surface that reveal both natural and human-induced change. We used Landsat images to quantify changes in coastal vegetation since 1984 and referenced high-resolution Google Earth images to spot ghost forests. Computer analysis helped identify similar patches of dead trees across the entire landscape.
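The change-detection approach described above can be sketched in a few lines. A common way to track vegetation in Landsat imagery is the Normalized Difference Vegetation Index (NDVI), computed from the red and near-infrared bands; comparing NDVI between an early and a late image flags pixels where canopy has been lost. The band arrays, threshold, and function names below are illustrative assumptions, not the study's actual pipeline.

```python
import numpy as np

def ndvi(red, nir):
    """NDVI is high for healthy canopy, low for dead trees, bare soil, or water."""
    return (nir - red) / (nir + red + 1e-9)  # small epsilon avoids division by zero

def flag_dieback(red_old, nir_old, red_new, nir_new, drop_threshold=0.2):
    """Flag pixels whose NDVI dropped sharply between two dates,
    a crude proxy for forest die-off."""
    change = ndvi(red_new, nir_new) - ndvi(red_old, nir_old)
    return change < -drop_threshold

# Tiny synthetic example: the first pixel stays vegetated, the second loses its canopy.
red_1984, nir_1984 = np.array([0.05, 0.05]), np.array([0.50, 0.50])
red_2019, nir_2019 = np.array([0.05, 0.30]), np.array([0.50, 0.20])
print(flag_dieback(red_1984, nir_1984, red_2019, nir_2019))  # → [False  True]
```

A real workflow would read calibrated surface-reflectance bands for the whole scene and then group flagged pixels into contiguous patches, as the authors did with their computer analysis.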
Google Earth image of a healthy forest on the right and a ghost forest with many dead trees on the left. Emily Ury
The results were shocking. We found that more than 10% of forested wetland within the Alligator River National Wildlife Refuge was lost over the past 35 years. This is federally protected land, with no other human activity that could be killing off the forest.
Rapid sea level rise seems to be outpacing the ability of these forests to adapt to wetter, saltier conditions. Extreme weather events, fueled by climate change, are causing further damage from heavy storms, more frequent hurricanes and drought.
We found that the largest annual loss of forest cover within our study area occurred in 2012, following a period of extreme drought, forest fires and storm surges from Hurricane Irene in August 2011. This triple whammy seemed to have been a tipping point that caused mass tree die-offs across the region.
Should scientists fight the transition or assist it?
As global sea levels continue to rise, coastal woodlands from the Gulf of Mexico to the Chesapeake Bay and elsewhere around the world could also suffer major losses from saltwater intrusion. Many people in the conservation community are rethinking land management approaches and exploring more adaptive strategies, such as facilitating forests' inevitable transition into salt marshes or other coastal landscapes.
For example, in North Carolina the Nature Conservancy is carrying out some adaptive management approaches, such as creating “living shorelines" made from plants, sand and rock to provide natural buffering from storm surges.
A more radical approach would be to introduce marsh plants that are salt-tolerant in threatened zones. This strategy is controversial because it goes against the desire to try to preserve ecosystems exactly as they are.
But if forests are dying anyway, having a salt marsh is a far better outcome than allowing a wetland to be reduced to open water. While open water isn't inherently bad, it does not provide the many ecological benefits that a salt marsh affords. Proactive management may prolong the lifespan of coastal wetlands, enabling them to continue storing carbon, providing habitat, enhancing water quality and protecting productive farm and forest land in coastal regions.
A new study used functional near-infrared spectroscopy (fNIRS) to measure brain activity as inexperienced and experienced soccer players took penalty kicks.
- The new study is the first to use in-the-field imaging technology to measure brain activity as people delivered penalty kicks.
- Participants were asked to kick a total of 15 penalty shots under three different scenarios, each designed to be increasingly stressful.
- Kickers who missed shots showed higher activity in brain areas that were irrelevant to kicking a soccer ball, suggesting they were overthinking.
In a 2019 soccer match, Swansea City was down 1-0 against West Brom late in the first half. A penalty was called against West Brom. Swansea midfielder Bersant Celina was preparing to deliver a penalty kick. He scuttled up to the ball, but his foot only made partial contact, lobbing it weakly to the right.
Was it a simple mistake? Maybe. But there might be deeper explanations for why professional athletes choke under high-pressure situations.
A new study published in Frontiers in Computer Science used functional near-infrared spectroscopy (fNIRS) to analyze the brain activity of inexperienced and experienced soccer players as they missed penalty shots. Although past research has explored why soccer players miss penalty shots, the recent study is the first to do so using in-the-field fNIRS measurement.
The results showed that kickers who choked were activating parts of their brain associated with long-term thinking, self-instruction, and self-reflection. The chokers, in other words, were overthinking it.
The psychology of penalty kicks
Penalty shots offer an interesting case study of how mental pressure affects physical performance. After all, there's a lot at stake, not only because the kick can sometimes render a win or loss, but also because there are sometimes millions of people anxiously watching, some of whom might have a financial interest in the outcome.
That pressure is no joke. For example, research on Men's World Cup penalty shoot-outs has shown that when the score is tied and a goal means an immediate win, players score 92 percent of kicks. But when teams are facing elimination in a shootout, and the kick determines an immediate tie or loss, players only score 60 percent of the time.
"How can it be that football players with a near perfect control over the ball (they can very precisely kick a ball over more than 50 meters) fail to score a penalty kick from only 11 meters?" study co-author Max Slutter, of the University of Twente in the Netherlands, said in a press release.
"Obviously, huge psychological pressure plays a role, but why does this pressure cause a missed penalty? We tried to answer this by measuring the brain activity of football players during the physical execution of a penalty kick."
In the new study, the researchers aimed to answer two key questions about choking under pressure among both experienced and inexperienced players: (1) What is the difference in brain activity between success (scoring) and failure (missing) when taking a penalty kick? (2) What brain activity is associated with performing under pressure during a penalty kick situation?
To find out, the researchers asked ten experienced soccer players and twelve inexperienced players to participate in a penalty-kicking task. The task was divided into three rounds, each of which was designed to be increasingly stressful:
- Round 1 had no goalkeeper and was labeled as a practice round.
- Round 2 had a friendly goalkeeper who wasn't allowed to distract the kicker.
- Round 3 had a competitive goalkeeper who was allowed to distract the kicker, and kickers were also competing for a prize.
Participants kicked five shots in each round. They wore a fNIRS-equipped headset during the task that measured activity in various parts of the brain.
All participants performed worse in the second and third rounds and reported experiencing the most pressure in the third round. Inexperienced players performed worse than experienced players, which might suggest that they were less able to deal with the mental stress.
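The per-round performance comparisons reported here boil down to a simple tally of scored versus missed shots in each round. The sketch below shows that bookkeeping; the shot records are made up for illustration and are not the study's data.

```python
from collections import defaultdict

def success_rates(shots):
    """shots: list of (round_number, scored) pairs.
    Returns a dict mapping round number -> fraction of shots scored."""
    made, taken = defaultdict(int), defaultdict(int)
    for rnd, scored in shots:
        taken[rnd] += 1
        made[rnd] += int(scored)
    return {rnd: made[rnd] / taken[rnd] for rnd in taken}

# Hypothetical records for one kicker: 5 shots per round, pressure rising each round.
shots = ([(1, True)] * 4 + [(1, False)] +
         [(2, True)] * 3 + [(2, False)] * 2 +
         [(3, True)] * 2 + [(3, False)] * 3)
print(success_rates(shots))  # {1: 0.8, 2: 0.6, 3: 0.4}
```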
The locations in which experienced and inexperienced players kicked the ball in each round. Red dots represent missed penalties and green dots represent scored penalties. Slutter et al., Frontiers in Computer Science, 2021.
The neuroscience of choke artists
So, what types of brain activity were associated with missed shots?
The most noticeable result was that kickers missed more shots when they showed higher activity in their prefrontal cortex (PFC), an area of the brain associated with long-term planning. This was especially true among participants who reported higher levels of anxiety. More specifically, experienced soccer players who missed shots showed high activity in the left temporal cortex, which is related to self-instruction and self-reflection.
"By activating the left temporal cortex more, experienced players neglect their automated skills and start to overthink the situation," the researchers wrote. "This increase can be seen as a distracting factor."
Also, when players of all experience levels felt anxious and missed shots, they showed less activity in the motor cortex, which is the brain area most directly associated with kicking a penalty shot.
Don't overthink it
The results suggest that mental pressure can activate parts of the brain that are irrelevant to the task at hand. In general, expert athletes show more efficient brain activity — that is, more activity in relevant areas, and less activity in irrelevant areas — and therefore experience fewer distractions. This is likely one reason why they were more successful at penalties than inexperienced players in high-stress situations.
This principle is described by neural efficiency theory, and it applies not only to athletes but experts in any field. As you gain mastery over something, you can rely more on automatic brain processes rather than deliberate thinking, which can lead to distractions. The authors of the study concluded that their results provide supporting evidence for neural efficiency theory.
Still, as long as our experts are human, it seems that high-pressure situations can turn anyone into a choke artist.
A Harvard professor's study discovers the worst year to be alive.
- Harvard professor Michael McCormick argues the worst year to be alive was 536 AD.
- The year was terrible due to cataclysmic eruptions that blocked out the sun and the spread of the plague.
- 536 ushered in the coldest decade in thousands of years and started a century of economic devastation.
The past year has been among the worst in the lives of many people around the globe: a rampaging pandemic, dangerous political instability, weather catastrophes, and a profound change in lifestyle that most have never experienced or imagined.
But was it the worst year ever?
Nope. Not even close. In the eyes of the historian and archaeologist Michael McCormick, the absolute "worst year to be alive" was 536.
Why was 536 so bad? You could certainly argue that 1918, the last year of World War I when the Spanish Flu killed up to 100 million people around the world, was a terrible year by all accounts. 1349 could also be considered on this morbid list as the year when the Black Death wiped out half of Europe, with up to 20 million dead from the plague. Most of the years of World War II could probably lay claim to the "worst year" title as well. But 536 was in a category of its own, argues the historian.
It all began with an eruption...
According to McCormick, Professor of Medieval History at Harvard University, 536 was the precursor year to one of the worst periods of human history. It featured a volcanic eruption early in the year that took place in Iceland, as established by a study of a Swiss glacier carried out by McCormick and the glaciologist Paul Mayewski from the Climate Change Institute of The University of Maine (UM) in Orono.
The ash spewed out by the volcano likely led to a fog that brought an 18-month-long stretch of daytime darkness across Europe, the Middle East, and portions of Asia. As the Byzantine historian Procopius wrote, "For the sun gave forth its light without brightness, like the moon, during the whole year." He also recounted that it looked like the sun was always in eclipse.
Cassiodorus, a Roman politician of that time, wrote that the sun had a "bluish" color, the moon had no luster, and "seasons seem to be all jumbled up together." What's even creepier, he described, "We marvel to see no shadows of our bodies at noon."
...that led to famine...
The dark days also brought a period of coldness, with summer temperatures falling by 1.5°C to 2.5°C. This started the coldest decade in the past 2,300 years, reports Science, leading to the devastation of crops and worldwide hunger.
...and the fall of an empire
In 541, the bubonic plague added considerably to the world's misery. Spreading from the Roman port of Pelusium in Egypt, the so-called Plague of Justinian caused the deaths of up to one half of the population of the eastern Roman Empire. This, in turn, sped up its eventual collapse, writes McCormick.
Between the environmental cataclysms, with massive volcanic eruptions also in 540 and 547, and the devastation brought on by the plague, Europe was in for an economic downturn for nearly all of the next century, until 640 when silver mining gave it a boost.
Was that the worst time in history?
Of course, the absolute worst time in history depends on who you were and where you lived.
Native Americans can easily point to 1520, when smallpox, brought over by the Spanish, killed millions of indigenous people. By 1600, up to 90 percent of the population of the Americas (about 55 million people) was wiped out by various European pathogens.
Like all things, the grisly title of "worst year ever" comes down to historical perspective.
What's the difference between brainwashing and rehabilitation?
- The book and movie A Clockwork Orange powerfully ask us to consider the murky lines between rehabilitation, brainwashing, and dehumanization.
- There are a variety of ways, from hormonal treatment to surgical lobotomy, to force a person to be more law-abiding, calm, or moral.
- Is a world with less free will but also with less suffering one in which we would want to live?
Alex is a criminal. A violent and sadistic criminal. So, we decide to do something about it. We're going to "rehabilitate" him.
Using a new and exciting "Ludovico" technique, we'll change his brain chemistry to make him an upstanding, moral citizen. Alex will be forced to watch violent movies as his body is pumped with nausea-inducing drugs. After a while, he'll come to associate violence with this horrible sickness. And, after a course of Ludovico, Alex can happily return to society, never again doing an immoral or illegal act. He'll no longer be a danger to himself or anyone else.
This is the story of A Clockwork Orange by Anthony Burgess, and it raises important questions about the nature of moral decisions, free will, and the limits of rehabilitation.
Today's Clockwork Orange
This might seem like unbelievable science fiction, but it might be truer — and nearer — than we think. In 2010, Dr. Molly Crockett did a series of experiments on moral decision-making and serotonin levels. Her results showed that people with more serotonin were less aggressive or confrontational and much more easy-going and forgiving. When we're full of serotonin, we let insults pass, are more empathetic, and are less willing to do harm.
The idea that biology affects moral decisions is obvious. Most of us are more likely to be short-tempered and spiteful if we're tired or hungry, for instance. Conversely, we have the patience of a saint if we've just received some good news, had half a bottle of wine, or had sex.
If our decision-making can be manipulated or determined by our biology, should we not try various interventions to prevent the criminally inclined from harming others?
What is the point of prison? This is itself no easy question, and it's one with a rich philosophical debate. Surely one of the biggest reasons is to protect society by preventing criminals from reoffending. This might be achievable by manipulating a felon's serotonin levels, but why not go even further?
Today, we know enough about the brain to have identified a very particular part of the prefrontal cortex responsible for aggressive behavior. We know that certain abnormalities in the amygdala can result in anti-social behavior and rule breaking. If the purpose of the penal system is to rehabilitate, then why not "edit" these parts of the brain in some way? This could be done in a variety of ways.
Credit: Otis Historical Archives National Museum of Health and Medicine via Flickr / Wikipedia
Electroconvulsive therapy (ECT) is a surprisingly common practice in much of the developed world. Its supporters say that it can help relieve major mental health issues such as depression or bipolar disorder as well as alleviate certain types of seizures. Historically, and controversially, it has been used to "treat" homosexuality and was used to threaten those misbehaving in hospitals in the 1950s (as notoriously depicted in One Flew Over the Cuckoo's Nest). Of course, these early and crude efforts at ECT were damaging, immoral, and often left patients barely able to function as humans. Today, neuroscience and ECT are much more sophisticated. If we could easily "treat" those with aggressive or anti-social behavior, then why not?
Ideally, we might use techniques such as ECT or hormonal supplementation, but failing that, why not go even further? Why not perform a lobotomy? If the purpose of the penal system is to change the felon for the better, we should surely use all the tools at our disposal. With one fairly straightforward surgery to the prefrontal cortex, we could turn a violent, murderous criminal into a docile and law-abiding citizen. Should we do it?
Is free will worth it?
As Burgess, who penned A Clockwork Orange, wrote, "Is a man who chooses to be bad perhaps in some way better than a man who has the good imposed upon him?"
Intuitively, many say yes. Moral decisions must, in some way, be our own. Even if we know that our brains determine our actions, it's still me who controls my brain, no one else. Forcing someone to be good, by molding or changing their brain, is not creating a moral citizen. It's creating a law-abiding automaton. And robots are not humans.
And yet, it raises the question: is "free choice" worth all the evil in the world?
If my being brainwashed or "rehabilitated" means children won't die malnourished or the Holocaust would never happen, then so be it. If lobotomizing or neuro-editing a serial killer will prevent them from killing again, is that not a sacrifice worth making? There's no obvious reason why we should value free will above morality or the right to life. A world without murder and evil — even if it meant a world without free choices for some — might not be such a bad place.
As Fyodor Dostoyevsky wrote in The Brothers Karamazov, if the "entrance fee" for having free will is the horrendous suffering we see all around us, then "I hasten to return my ticket." Free will's not worth it.
Do you think the Ludovico technique from A Clockwork Orange is a great idea? Should we turn people into moral citizens and shape their brains to choose only what is good? Or is free choice more important than all the evil in the world?