Big Think Interview With Alfred Mele
Alfred Mele: Let's see. Of course I was very young. I was probably 19 when I made the decision. And I was always very interested in difficult puzzles and games, especially chess. And there is that intricate aspect to philosophy that attracted me. But I was also very interested in human behavior, even as a young person. And the first course in philosophy that really sucked me in was, of course, in ancient philosophy. And so I read Plato and Aristotle, and I had never read anything like that. In high school, I went to a Catholic school. I was a football player and I just tried to scrape by. But college I found incredibly exciting. So I think it was that Plato and Aristotle had these views about everything: the universe, how we fit into it, what motivates us, why we do what we do. And that's what sucked me in. And I think I was 19. I think I was a sophomore when I decided.
Question: Which philosopher’s worldview most mirrors your own?
Alfred Mele: Well, definitely Aristotle. I wrote my dissertation on Aristotle's theory of human motivation. So, it's a theory about why we do what we do, really. I read classical Greek and wrote commentaries on Aristotle; that's what Aristotle scholars do. And I did that for four or five years after my dissertation, but eventually I moved away from ancient philosophy. The issues that he addresses were the things that really interested me. So, eventually I had ideas of my own about them and I started writing too many books and too many articles.
I think the thing that really hooked me on Aristotle was his view about what is called "weakness of will." And weakness of will is something that shows up when you judge that, on the whole, it would be best to do a certain thing, but you don't do it, and you freely don't do it. So, an example I use for students: they judge tonight that, on the whole, it would be best to stay in and study, better to do that than to go to a party they've been invited to. But as the time for the party comes closer, a friend comes by with a 12-pack, say, and says, "Let's go." And the student thinks, "Yeah, I'm going to do it. I shouldn't, but I'll do it." Aristotle had a view about why this happens, but it wasn't a very developed view. And I thought, well, there's got to be a better answer than that. It really doesn't matter for now what his answer was.
And so I started reading a lot of social psychology, motivational psychology and that sort of thing and just thinking things through. And I came up with a view of my own that’s empirically well supported, I think. And this is what most of my first book, “Irrationality,” was about, this kind of behavior.
So, one way to think about what's going on is: we want things, and the things we want have two different features. There's the pull, how strongly they attract us, but there's also our ranking, or assessment, of how good or bad they are on some kind of value scale. The student is ranking studying higher than going to the party because he can see the long-term benefits of studying and also the possible costs of not studying. But the party, for obvious reasons, has a greater motivational pull on him; it's more attractive. And so, if he doesn't do anything about it, what's going to happen is he's going to be pulled to the party against his better judgment. It's not all that complicated, and behavior like this is intentional, and I think free too.
Question: Can humans act against their own better judgment?
Alfred Mele: Plato, or at least Socrates, whose views Plato expressed, had an opinion about this, and the idea was that as the tempting option becomes closer in time, more readily available, what happens is you switch your judgment, so that at the last second the student would always judge that, really, it's best to go to the party.
Now, I myself don't think that happens, partly on the basis of personal experience, but also partly on the basis of surveys. There's this thing called experimental philosophy now, where instead of relying on our own personal intuitions about how things happen, we actually go out and do surveys of ordinary people, and of course the ordinary people are usually undergraduates. But undergraduates are pretty ordinary, nice people, just lay people. And one thing I did was a survey of, I think, about 90 undergraduates recently. And I said, "Does this ever happen to you? You judge that it would be best to do a certain thing, and best from your own point of view too, not the point of view of your peers or parents or whatever. And then, still believing that you should do this thing, you do something else instead. Does it ever happen?" And I had what's called a Likert scale that goes from one to seven, with strongly agree at one and strongly disagree at seven. And the mean rating was, as I recall, 1.32. So, almost all of them agreed that they do it sometimes. Now, they could be wrong. You know, they could be fooling themselves. But I suspect they're right. And if you think about your own case, and I think about my own case, sometimes I am convinced that I shouldn't do a certain thing and I just think, "What the hell. I'll do it. I shouldn't, but I will." It's never anything really bad, but it might be something like smoking a cigarette. I'm trying to quit right now because I've had dental surgery. But New Year's Eve I smoked a cigarette, and I thought I shouldn't. So, yeah, I think it happens.
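As a side note on the survey arithmetic: a mean near 1 on that one-to-seven scale signals broad agreement. Here is a minimal sketch of the computation, with invented ratings, since the raw responses aren't given in the interview.

```python
# Minimal sketch of computing a mean Likert rating.
# The ratings are invented for illustration; on this scale,
# 1 = strongly agree and 7 = strongly disagree.
ratings = [1, 1, 2, 1, 1, 3, 1, 1, 2, 1]

mean_rating = sum(ratings) / len(ratings)
print(f"Mean rating: {mean_rating:.2f}")  # a value near 1 indicates strong agreement
```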
And why should we think that it doesn't happen? Well, if what we're most strongly motivated to do always lined up with what we judged best, then, since we always do what we're most strongly motivated to do, we would always be acting on our better judgment, and acting against it would be impossible. But there's just too much evidence against that.
Question: Can the mind ever really deceive itself, or does it choose what to believe or disbelieve?
Alfred Mele: What might make self-deception impossible is a certain model of it, and it's a traditional model: a two-person intentional deception model. So, if I'm going to deceive you into believing something, I've got to know that it's false and come up with a strategy for getting you to believe that it's true. And the normal strategy is lying, and then you trust me, let's say. So, if you use that model for self-deception, then in the same head you've got knowledge of what's true, the intention to get yourself to believe what's false, and some kind of strategy for doing it. Now, that's very puzzling, or paradoxical. How are you going to pull it off? It's as though I said, "Look, I'm going to deceive you now into believing that I drive a Range Rover, and this is how I'm going to do it. I'm going to pull a picture of me next to a Range Rover out of my wallet and show it to you, and you're going to believe it's true, but really it's false." Well, is that going to work? No. No way, because you know what I'm up to, right?
So, if you put all this in one head, then it looks like the person knows what he's up to, so he can't possibly succeed. So, it's paradoxical. And also, the person would have to believe the truth and its opposite at the same time. They both have to be there in the same place.
So, one thing I do is reject this two-person model of self-deception, and I have a different kind of model. The way to think about it is that self-deception is motivationally biased false belief. Now, how might this happen? Well, first, some examples. There was a survey done of some professors in the '90s, and the professors were asked: how good are you? Rate yourself relative to other professors on a 100-point scale. And 96 percent of the professors rated themselves above average with respect to other professors. But of course that can't be, and it's an amazing figure. That's just one example.
There was a study done in conjunction with the SAT, also in the '90s if I recall. Students were asked all kinds of things. One was: how good are you at getting along with others? And there was a scale on which you rated yourself. All of them rated themselves above average, and 25 percent rated themselves in the top 1 percent in ability to get along with others. And of course you can only have 1 percent in there; you can't have 25 percent.
So, what’s going on is that people tend to overestimate themselves on good things. This doesn’t happen in all people. There’s a phenomenon called “depressive realism.” The people who are the most accurate about themselves are depressed people. And one thing we don’t know for sure is whether depression causes the accuracy, or the accuracy causes the depression. It could be the second way.
So, people have evidence coming in, and evidence that points toward the truth of propositions they'd like to be true tends to be more salient for these people, for me too, for all of us. And so it has a greater grip on what we believe than it really ought to have, given its evidential merit.
There are other examples, non-statistical ones, just things that happen. Parents might believe that their kids, maybe young teenagers, are not using drugs, while the neighbors and other people presented with the same evidence believe that these parents' kids are using drugs; somehow the evidence doesn't get treated properly by the parents. One thing that happens is that when thinking about something makes people uncomfortable, they tend to stop thinking about it, so they don't absorb the negative evidence or give it as much weight as it deserves. And when thinking about whether little Johnny is using drugs, images of innocent Johnny out playing in the sandbox with his toy trucks might come to mind and absorb attention, and then what you're thinking is, "Oh jeez, a kid like this couldn't be using drugs."
So, little things like that add up to unwarranted beliefs, and usually the bias is motivated by what you'd like to be true. And I think that's what self-deception is. It's nothing really exotic; it's a very ordinary thing. And I don't think that self-deception is always bad, either. It's probably good to overestimate yourself to some degree, a little bit, on a variety of points, because I think it gives you a little more confidence, enables you to function better, and so on. Of course, you can't be telling yourself, "Hey, this is what I'm doing," because then it won't work.
Question: Do you see human beings as fundamentally rational creatures?
Alfred Mele: Now, that is a good question, because there are these two different senses of rationality, and sometimes they get conflated, and within each sense there are subdivisions. But think about rocks. Are they rational or irrational? Well, they're neither, right? So, there's rational as opposed to non-rational, and in that sense I'm rational and a rock isn't. To be rational in that sense, you just have to be able to understand, think, reason, come to conclusions, things like that. But then there's also rational as opposed to irrational. Now, people are fundamentally rational in the rational as opposed to non-rational sense. In the rational as opposed to irrational sense, I think so too, because if we were to try to imagine somebody who was utterly irrational, how would we interpret that person or understand his behavior? It looks like there's got to be some kind of pattern to it in order for us to make any assessment of how rational the person is. So, yeah, I think rationality is widespread, and that's a good thing. And irrationality is falling short of rationality.
Now, some people like to measure irrationality objectively, from an external point of view. I always feel awkward about doing that, because I don't know individuals inside and out. So, I like to measure it from a subjective point of view, that is, from the individual's own point of view. Practical irrationality, in this sense, would be a matter of believing that a certain thing is best to do from your own point of view and not doing it. People say that happens to them. That's irrational. Also, people might accept certain modes of reasoning as legitimate and then sometimes reason in ways that violate those modes, as in self-deception: you and I might think that the best way to reason about what's true is to reason objectively and not be biased by one's desires and emotions, but sometimes we reason in ways that are biased by our desires and emotions. That would be subjectively irrational too. So, yeah, there's a lot of subjective irrationality, but I think by and large people are rational.
Now, if you measure rationality objectively, you might come to a different conclusion. But then what you have to do is have your own view about what is really rational to do, independently of people's preferences and the like. I can't see myself doing that.
Question: Does the distinction between subjective and objective irrationality help explain why people don't act in their best financial interest?
Alfred Mele: Yeah, I think it is related. Buying and selling and so on could be seen as a kind of game. If you know people's preferences and you know the probabilities, then you can deduce what the right option is. And of course, ordinary folks aren't going to be exactly on the ball all the time in that connection. So, people will make unwise decisions, and sometimes they'll make radically unwise decisions. And often that happens because they're influenced by the salience of the evidence as opposed to its significance or importance. Here's an example. Why do car advertisers, instead of talking about all the properties of the cars, show really attractive people driving them, and then really attractive people looking at the drivers? Because they figure that attracts people's attention and increases the likelihood that they'll buy the car. And people are moved by things like that. And they shouldn't be; they should be moved by the objective data. Partly it's that looking at the data is so much more boring, and a little bit harder for people to do. But this doesn't mean that there's some kind of fundamental defect in people. It's just that maybe they don't care enough about making the best decision to pay close attention to the data.
Question: Does an extreme of self-deception ever become mentally unhealthy?
Alfred Mele: So, if you deceive yourself into thinking that drinking and driving, or drinking over the legal limit and driving, is okay, you can be in serious trouble. What happens, I think, is that people know they shouldn't drink over the legal limit and drive, and sometimes they'll even decide, "I'll never do it." But then they've driven to the bar and had one more beer than they planned, and their car is there. And they're thinking, "Well, if I don't drive home, then I have to leave my car here, and somehow I have to get it in the morning; take a taxi home and come back for it. That would be very inconvenient." Then they think, "Well, you know, I've done this before, and lots of people drive when they're over the legal limit, and I only live three or four miles from here, so I'll make it safely. What the heck," and they drive. And they might make it home safely that time, but then there are all these other times when they might do it again and not make it. So, that kind of self-deception is dangerous.
Deceiving yourself into believing that you are significantly better than you are at dangerous things (well, we'll stick with driving, like race car driving) could be very dangerous. Or deceiving yourself into thinking that smoking isn't all that risky, so you can keep up the habit. That's dangerous too. There are lots of dangerous cases of self-deception.
Question: Do human beings have free will?
Alfred Mele: Yes. Yes, they do. But it turns out that not everybody understands the expression "free will" in the same way, and there are lots of different ways of understanding it. Unfortunately, that makes it hard to just say, "Yes, this is true; that isn't." One thing philosophers spend a lot of time doing is trying to sort out the possible meanings of an expression like "free will," and the literature on free will goes back a couple of thousand years. So, when I talk to the general public, one thing I say about free will is that you can think about it on a sort of gas station model, a service station model. When you go to the gas station, you can get regular gas, mid-grade gas, or premium. And maybe we can simplify things by starting with regular free will. Regular free will is the sort of thing that is presupposed in courts of law when somebody is judged guilty of an offense: you understood what you were doing, you were sane and rational, and nobody was forcing or compelling you to do it, and you didn't have any medical condition that forced or compelled you to do it. That would be enough to be acting freely. Now, that's regular free will.
Mid-grade free will adds a further requirement: being able to do otherwise, everything being the same up until that moment. And by everything, I mean the entire history of the universe and all of the laws of nature. So, one way to picture this ability to do otherwise is as follows. If I could have done otherwise at a given moment, then there's another possible universe, another scenario, where the entire universe is the same up until that moment and, even so, I do something else instead. (You don't have to suppose that this other universe actually exists.) So maybe what I did was decide to call a taxi, but at that very moment, everything being the same up until then, I could have decided to take the subway instead, and then started heading down the stairs.
Okay. So, some people require that kind of ability for free will. Now, if we're going to have it, then the brain has to work in such a way that, everything being the same up until a given point in time, although I did one thing (I decided to call a taxi), I could have decided to take the subway. And we don't have good evidence that the brain does work that way, but also we don't have good evidence that it doesn't. Right? So, this is a question that is empirically open. It could turn out that the brain doesn't work this way, and if it doesn't, then we're not going to have free will at this mid-grade level, but we could still have regular free will.
So, I'm convinced we have regular free will. The mid-grade thing I'm not convinced we have, because we don't have the empirical evidence that we need. But we don't have it either way.
Question: What is the main experiment that has driven this kind of free will research?
Alfred Mele: So, these were originally done starting in the early '80s. They are still being done today; the technology is better now, but it's the same kind of experiment. What you have are subjects seated in a chair like the one I'm sitting in, and they have this task: to flex a wrist whenever they want. They're watching a fast clock. There's a dot on the clock that makes a complete revolution in less than three seconds, and they're hooked up to two machines. One records the EEG, electrical activity on the scalp. The other measures muscle bursts at the wrist; it's an electromyogram. Okay? So, they're supposed to flex whenever they want while watching this rapidly revolving spot on the clock, and then, after they flex, they're supposed to indicate where the spot was on the clock when they first became aware of their urge, intention, or decision to flex. And they indicate it by moving a cursor to that spot on the clock. Okay, is that clear?
All right. Now, when these subjects are regularly reminded to be spontaneous and not to plan in advance when to flex, what you see is that at about 550 milliseconds, roughly half a second before the muscle burst, you get a marked change in electrical activity on the scalp, a ramping-up effect. And on average, subjects say they first became aware of this urge, or decision, or intention, or whatever, at about 200 milliseconds before the muscle burst. If you average out all the responses they make by moving the cursor, it's about 206 milliseconds before the muscle burst.
So, Benjamin Libet was the first one to do these studies, and they're very interesting studies. What was innovative is that he had a way to measure consciousness, because he was timing this conscious experience, he thought. So the claim is that the brain is deciding over a third of a second before the mind becomes aware of the decision. Conscious free will isn't driving this behavior, isn't generating the flexions. And then Libet generalized. He said, "Well, you know, this is the way it is for all behavior." So, the brain unconsciously makes decisions, and the mind becomes aware of them only later.
Now, if you think that in order to be acting freely (say it's an overt action, an action involving bodily motion, like flexing the wrist) a conscious decision has to be causing the behavior, and you're thinking that doesn't happen, then you're thinking you never act freely. And so there is no free will.
Question: What are the mistakes in that reasoning?
Alfred Mele: Okay, it is an interesting result, but what does it really show? Do we know that it's decisions that are being made at -550 milliseconds, that is, about half a second before the muscle burst, as opposed to something else? Well, one thing it could be, instead of a decision, is a causal process that is up and running and that increases the probability of a subsequent flexing but doesn't raise it to one. So, what you might have at -550 is a potential cause of a subsequent flexing, and the decision might be made later than -550. It might even be made around -200, when people say they think they made it. So, that's one problem: we can't really identify this early spike with a decision made at that time. And in fact, the way the study is done, what triggers the computer to make a record of the preceding second or more of brain activity is the muscle burst. So, there is a muscle burst, and that triggers the computer: okay, make a record of this preceding second of brain activity. But if you use that methodology, then you never look for cases where you get this spike about half a second before (call it zero time) but no muscle motion. You don't, because it's the muscle motion that triggers the computer to make a record of the preceding activity.
So, that's one problem. And one thing you might wonder, too, is: how long does it take a decision to flex your wrist now to generate a muscle burst, or a wrist flexing? There's a way to get indirect evidence about that. You could give subjects a reaction-time test. Now they wouldn't be making the decision; they would be responding to a cue with an intention. So the task might be: flex your wrist whenever the clock changes color from red to green. Okay? And they could be watching the clock too. It turns out that reaction-time studies have been done with a Libet clock, and the mean reaction time, in one study anyway, was 231 milliseconds. There was just a 231-millisecond gap between the go signal, which was a sound in that study, and the muscle burst. But if it took an intention or a decision something like 550 milliseconds to cause a muscle burst, this result would be very surprising. I mean, here it's only roughly 230. And of course, after the go signal, it's going to take some time to respond mentally with an intention. It doesn't have to be a conscious intention, but it's a causal process, so it takes time.
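To make the timing argument easier to follow, here is a small sketch that simply lays the cited figures out on one timeline. The numbers are the averages Mele mentions in the interview, not raw data from any particular study.

```python
# Timeline of a Libet-style trial, in milliseconds relative to the
# muscle burst at time 0. Figures are the averages cited above.
RP_ONSET = -550            # onset of the EEG ramp-up ("readiness potential")
REPORTED_AWARENESS = -200  # average reported time of the conscious urge
CUED_REACTION_TIME = 231   # go signal to muscle burst in a reaction-time study

# If the EEG ramp at -550 ms were itself the decision, a decision would
# need ~550 ms to produce a movement. But a cued intention produces a
# movement in ~231 ms (including time to form the intention after the cue),
# so a decision made around -200 ms could still cause the muscle burst.
print(f"EEG ramp precedes the muscle burst by {-RP_ONSET} ms")
print(f"Reported awareness precedes the muscle burst by {-REPORTED_AWARENESS} ms")
print(f"Cued reaction time: {CUED_REACTION_TIME} ms")
```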
So, that's another problem. And then there's a third problem with these studies, and it has to do with the measurement of awareness. After they flex, again, subjects move the cursor to the spot and say: that's when I first became aware of it.
Question: How does measurement of awareness become a problem?
Alfred Mele: So, it must have been two and a half to three years ago now that I gave a lecture on the neuroscience of free will at the National Institutes of Health, in a motor control unit. The idea was, I'd give my lecture and then they'd make me a subject in one of these Libet experiments, which was cool; I was interested. And then after that they'd take me out to dinner, but first I had to be a subject in the experiment. So, I gave my lecture and then it was time to do the experiment. I was sitting in the chair, they set up the clock, and I knew what my task was. And I wanted to pretend to be a naive subject, to put myself in the shoes of somebody who might do this without really knowing what's going on. And so I thought, this is what I'll do: I'll sit there and watch the clock and wait for urges to flex my wrist to pop up in me, to become conscious, and as soon as I have such an urge, I'll flex. Then, after I flex, I'll move the cursor to where I thought the spot was on the clock when I first became aware of that urge, or intention, or whatever.
So, I was sitting there a little while and nothing was happening. That is, no urges were coming to mind. And I thought, how do they do this? How do these people do it? And then I thought, I'd better think of a way to do it, because otherwise I'm going to be stuck in this chair and I won't get any dinner. Right? Dinner was next. So, I thought, this is what I'll do: I'll just consciously say "now" to myself silently and treat that as an indicator of an urge or a decision, and then I'll flex, and after I flex, I'll report where the spot was on the clock when I said "now" to myself.
Okay. So, I had to remember to do all this: say "now," flex as soon as possible after I say "now," and then do the reporting. And at first the neuroscientists said I was flexing in too wimpy a way, so I had to remember to flex hard too. So, okay, I did all of that. And subjects have to do these trials at least 40 times to get data you can actually read and use, so I did it about 40 times. One thing I discovered was that although I could pinpoint the spot on the clock to a range, maybe 20 or 25 percent of the clock, I couldn't pinpoint it to an exact tick, let's say. That was one problem. Also, I had something very definite to look for internally: I was looking for the conscious "now" saying, and I know what that's like. But subjects who are instructed to look for an urge, or an intention, or a decision, or whatever, might wonder, "Well, what the heck was that I was just experiencing? Was I just thinking about doing it? Was it an urge?" So, they could have a confusion that I didn't have.
Question: What is the bottom line of these experiments in terms of where we stand on that free will scale?
Alfred Mele: Well, these experiments are thought to show that there is no free will, and my main point here is that they don't show that, for three different reasons. The judgment times are unreliable, so we don't really know when people first became aware of the urge. We don't have good evidence that what happens at -550 milliseconds, about half a second before the muscle burst, is that a decision is made, as opposed to a potential cause being present. And we don't have evidence that what's happening half a second before the muscle burst is sufficient for a subsequent muscle burst. So, it just leaves free will wide open.
Another thing, too: notice what we're studying here. We're studying relatively trivial actions, wrist flexions or mouse-button clickings, and decisions to do things now. And it may be that free will mainly isn't at work in that dimension of our lives, but mainly in broader dimensions, when we're thinking about, say, to come back to students, which graduate school to accept when they've been admitted to several with different scholarship offers. Or maybe thinking about whether to propose marriage, or whether it's finally time to get the divorce, or which house to buy. You know? It may be that free will is more involved in things like that than in wrist flexions and the like. And, and this is not a criticism of the scientists who do this work, with the technology we have now, if you're going to study something like free will, it looks like you're going to be in this domain and not the domain of choosing graduate schools, buying houses, or proposing marriage.
Question: Is luck based on a skewed understanding of statistics, or is there something more to it?
Alfred Mele: Yeah, okay. All right, so that's from my book, "Free Will and Luck." I'll tell you what I mean by "luck" in that book and then I'll tell you why it's important. I don't know how much of what I mean by luck is what ordinary people mean by it, but there again, we can do surveys and find out what they mean. So, luck for me has two dimensions, and I'm always thinking about lucky events, or lucky happenings. A lucky happening for a person, say you or me, is something that, one, has an effect on us, an effect on our lives, and, two, that we have no control over.
So, just stupid examples. If you were walking down an ordinary street on an ordinary day (well, let's make it me, because this is a bad example) and a piano fell on my head, bad luck for me. Right? It has an effect on my life, it probably ends it, and I had no control over it. Or maybe you're walking down the street and you find a hundred-dollar bill. Well, good luck for you. So, that seems like the ordinary person's sense of luck so far. But the luck that concerns me is the kind that's involved in what I refer to as the mid-level, or mid-grade, kind of free will: the kind that requires that you don't act freely unless, when you act, you could have done otherwise.
So then, what's going to have to be the case is that your brain works in such a way that although in the actual scenario it produces a decision at a particular moment to do a certain thing, in some other possible scenario, with everything being the same up until then, it produces a different decision, or it doesn't produce that decision and you go on thinking. Right? Now, it looks like you exerted all the control you could up until the decision, and then there are two different ways it could go. That looks like luck; it looks like tossing a coin and it coming up heads. So the question really is: luck of that kind seems required for this mid-level free will, but does it also preclude it? Does it block free will, because now we're starting to look a bit random?
So, most philosophers who write about free will, in fact all of them except me, defend a definite view. Some defend a view called "compatibilism," according to which free will is compatible with what philosophers call "determinism," which isn't what most people think determinism is; I could talk about that. Or they defend this mid-level kind of free will view, or, though not many do, an even more extreme view of free will, which we could call premium, I guess. Or they defend the no-free-will view. So, they all have a definite opinion. Whereas what I do is say, "Look, I'm not going to choose between regular free will and mid-level free will. It's going to be like a restaurant where instead of only having one option, we have two, and my opponents are going to be the ones who say there is no free will." That's the idea. So, I'm not committed to the existence of this mid-level free will, but I am curious about whether it can actually work out.
So, it looks like mid-level free will requires this kind of randomness, or luck, and then the question is: does that block free will? Well, how should we think about that? Here's one way. It's crude, and it's just an analogy, but imagine that we have little roulette wheels in our heads, and maybe when we're very young kids they're split 50/50; but as we make decisions over time and learn from those decisions and their consequences, we can shift the probabilities, the probability distribution on the roulette wheel. So, for example, if you make good decisions and see that things go well, that increases the probability that you'll make more good decisions; and when you make bad ones and see that things go badly, you work on yourself to make yourself better, so that you're more likely to make good decisions. In this kind of way you can shift the probabilities on this little roulette wheel and increase the probability that you'll make mainly good decisions in the future.
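The roulette-wheel analogy can be put in toy-simulation form. The sketch below is purely illustrative: the starting probability, the size of the nudges, and the cap are all invented. It only shows how repeated feedback could shift a probability distribution over time, as described above.

```python
import random

# Toy simulation of the roulette-wheel analogy described above.
# An agent starts with a 50/50 chance of deciding well; both good
# outcomes (reinforcement) and bad outcomes (working on yourself)
# nudge the probability of a good decision upward over time.
# All numbers are invented for illustration.
def simulate(trials: int = 1000, p_good: float = 0.5) -> float:
    for _ in range(trials):
        if random.random() < p_good:
            p_good += 0.002   # things went well: reinforce the habit
        else:
            p_good += 0.001   # things went badly: work on yourself
        p_good = min(p_good, 0.95)  # never a sure thing; some luck remains
    return p_good

random.seed(0)
print(f"Probability of a good decision after practice: {simulate():.2f}")
```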
So, what I think is that on this mid-level view of free will, if what we have to work with, in order to have free will, is a kind of glitchy mechanism, a mechanism that has this randomness in it, then what we should do, if we're stuck with it, is try to improve it. We should work on ourselves and, over time, make ourselves people who are much more likely to do what we judge best than to succumb to temptation. So, I think we can sort of solve this randomness problem, not by looking at people at particular moments in time, but by looking at them over long stretches of time and at how they can work on themselves and improve themselves.
Question: Is philosophy merging with science?
Alfred Mele: Yeah, I think so. It's definitely a growing trend. There's always been a connection between philosophy and science; in fact, in the beginning there was no distinction. Aristotle was a philosopher, but he was also a biologist, an economist, and, in a way, a physicist. So, yeah, they are coming closer together again. And I think the reason is that scientists are studying things of great interest to philosophers, like the scientific study of free will, which really didn't get going until the 1980s, when the Libet experiments I talked about started up. There's been a lot of good social-psychological work on things like weakness of will and self-deception for, well, decades, and I've been interested in that stuff since I was young. It is growing, I think, because there's more scientific work done now on these philosophical topics. It might also be that people are thinking: traditional philosophical methodologies have been around for a long time, and they have gotten us to a certain place, and that's good, but we can get even further, faster, by bringing more onboard, you know, scientific results.
I actually do a lot of work with scientists. In fact, I think I can mention this now: I'm about to receive a $4.8 million grant to start a free will project at Florida State University, where I am. The granting agency is the John Templeton Foundation. Most of the money will go out in grants to scientists and others who make proposals on free will; $2.8 million will go to the science of free will. What I'd like to see happen is that we have teams of neuroscientists, social psychologists, and philosophers working together to design free will studies, analyze the results, and write up the papers. So, for me, this is a really exciting time at the intersection of science and philosophy.
And this thing called experimental philosophy, which didn't really exist until about ten years ago, has really taken off. I was up here in New York City last Monday for a session on experimental philosophy. There was a big audience and lots of excitement. So, in ten years it's gone from nothing to something pretty exciting.
Interviewed by Austin Allen
A conversation with the Florida State University professor of philosophy.
She helped create CRISPR, a gene-editing technology that is changing the way we treat genetic diseases and even how we produce food.
This article was originally published on our sister site, Freethink.
Last year, Jennifer Doudna and Emmanuelle Charpentier became the first all-woman team to win the Nobel Prize in Chemistry for their work developing CRISPR-Cas9, the gene-editing technology. The technology was invented in 2012 — and nine years later, it's truly revolutionizing how we treat genetic diseases and even how we produce food.
CRISPR allows scientists to alter DNA by using proteins found naturally in bacteria. Bacteria use these proteins, called Cas9, to fend off viruses, destroying viral DNA and cutting it out of their genes. CRISPR allows scientists to co-opt this function, redirecting the proteins toward disease-causing mutations in our DNA.
So far, gene-editing technology is showing promise in treating sickle cell disease and genetic blindness — and it could eventually be used to treat all sorts of genetic diseases, from cancer to Huntington's Disease.
The biotech revolution is just getting started — and CRISPR is leading the charge. We talked with Doudna about what we can expect from genetic engineering in the future.
This interview has been lightly edited and condensed for clarity.
Freethink: You've said that your journey to becoming a scientist had humble beginnings — in your teenage bedroom when you discovered The Double Helix by Jim Watson. Back then, there weren't a lot of women scientists — what was your breakthrough moment in realizing you could pursue this as a career?
Dr. Jennifer Doudna: There is a moment that I often think back to from high school in Hilo, Hawaii, when I first heard the word "biochemistry." A researcher from the UH Cancer Center on Oahu came and gave a talk on her work studying cancer cells.
I didn't understand much of her talk, but it still made a huge impact on me. You didn't see professional women scientists in popular culture at the time, and it really opened my eyes to new possibilities. She was very impressive.
I remember thinking right then that I wanted to do what she does, and that's what set me off on the journey that became my career in science.
Video: CRISPR 101: Curing Sickle Cell, Growing Organs, Mosquito Makeovers | Jennifer Doudna | Big Think (www.youtube.com)
Freethink: The term "CRISPR" is everywhere in the media these days but it's a really complicated tool to describe. What is the one thing that you wish people understood about CRISPR that they usually get wrong?
Dr. Jennifer Doudna: People should know that CRISPR technology has revolutionized scientific research and will make a positive difference to their lives.
Researchers are gaining incredible new understanding of the nature of disease, evolution, and are developing CRISPR-based strategies to tackle our greatest health, food, and sustainability challenges.
Freethink: You previously wrote in Wired that this year, 2021, is going to be a big year for CRISPR. What exciting new developments should we be on the lookout for?
Dr. Jennifer Doudna: Before the COVID-19 pandemic, there were multiple teams around the world, including my lab and colleagues at the Innovative Genomics Institute, working on developing CRISPR-based diagnostics.
When the pandemic hit, we pivoted our work to focus these tools on SARS-CoV-2. The benefit of these new diagnostics is that they're fast, cheap, can be done anywhere without the need for a lab, and they can be quickly modified to detect different pathogens. I'm excited about the future of diagnostics, and not just for pandemics.
We'll also be seeing more CRISPR applications in agriculture to help combat hunger, reduce the need for toxic pesticides and fertilizers, fight plant diseases and help crops adapt to a changing climate.
Traits that we could select for using traditional breeding methods, that might take decades, we can now engineer precisely in a much shorter time.
Freethink: Curing genetic diseases isn't a pipedream anymore, but there are still some hurdles to cross before we're able to say for certain that we can do this. What are those hurdles and how close do you think we are to crossing them?
Dr. Jennifer Doudna: There are people today, like Victoria Gray, who have been successfully treated for sickle cell disease. This is just the tip of the iceberg.
There are absolutely still many hurdles. We don't currently have ways to deliver genome-editing enzymes to all types of tissues, but delivery is a hot area of research for this very reason.
We also need to continue improving on the first wave of CRISPR therapies, as well as making them more affordable and accessible.
Freethink: Another big challenge is making this technology widely available to everyone and not just the really wealthy. You've previously said that this challenge starts with the scientists.
Dr. Jennifer Doudna: A sickle cell disease cure that is 100 percent effective but can't be accessed by most of the people in need is not really a full cure.
This is one of the insights that led me to found the Innovative Genomics Institute back in 2014. It's not enough to develop a therapy, prove that it works, and move on. You have to develop a therapy that actually meets the real-world need.
Too often, scientists don't fully incorporate issues of equity and accessibility into their research, and the incentives of the pharmaceutical industry tend to run in the opposite direction. If the world needs affordable therapy, you have to work toward that goal from the beginning.
Freethink: You've expressed some concern about the ethics of using CRISPR. Do you think there is a meaningful difference between enhancing human abilities — for example, using gene therapy to become stronger or more intelligent — versus correcting deficiencies, like Type 1 diabetes or Huntington's?
Dr. Jennifer Doudna: There is a meaningful distinction between enhancement and treatment, but that doesn't mean that the line is always clear. It isn't.
There's always a gray area when it comes to complex ethical issues like this, and our thinking on this is undoubtedly going to evolve over time.
What we need is to find an appropriate balance between preventing misuse and promoting beneficial innovation.
Freethink: What if it turns out that being physically stronger helps you live a longer life — if that's the case, are there some ways of improving health that we should simply rule out?
Dr. Jennifer Doudna: The concept of improving the "healthspan" of individuals is an area of considerable interest. Eliminating neurodegenerative disease will not only massively reduce suffering around the world, but it will also meaningfully increase the healthy years for millions of individuals.
There will also be knock-on effects, such as increased economic output, but also increased impact on the planet.
When you think about increasing lifespans just so certain people can live longer, then not only do those knock-on effects become more central, you also have to ask who is benefiting and who isn't? Is it possible to develop this technology so the benefits are shared equitably? Is it environmentally sustainable to go down this road?
Freethink: Where do you see it going from here?
Dr. Jennifer Doudna: The bio revolution will allow us to create breakthroughs in treating not just a few but whole classes of previously unaddressed genetic diseases.
We're also likely to see genome editing play a role not just in climate adaptation, but in climate change solutions as well. There will be challenges along the way both expected and unexpected, but also great leaps in progress and benefits that will move society forward. It's an exciting time to be a scientist.
Freethink: If you had to guess, what is the first disease you think we are most likely to cure, in the real world, with CRISPR?
Dr. Jennifer Doudna: Because of the progress that has already been made, sickle cell disease and beta-thalassemia are likely to be the first diseases with a CRISPR cure, but we're closely following the developments of other CRISPR clinical trials for types of cancer, a form of congenital blindness, chronic infection, and some rare genetic disorders.
The pace of clinical trials is picking up, and the list will be longer next year.
A school lesson leads to more precise measurements of the extinct megalodon shark, one of the largest fish ever.
- A new method estimates the ancient megalodon shark was as long as 65 feet.
- The megalodon was one of the largest fish that ever lived.
- The new model uses the width of shark teeth to estimate the shark's overall size.
A Florida student figured out a way to more accurately measure the size of one of the largest fish that ever lived – the extinct megalodon shark – and found that it was even larger than previously estimated.
The megalodon (officially named Otodus megalodon, meaning "big tooth") lived from about 23 million to 3.6 million years ago and was thought to be about 34 feet long on average, reaching a maximum length of 60 feet. Now a new study puts that number at up to 65 feet (20 meters).
Homework assignment leads to a discovery
The study, published in Palaeontologia Electronica, used new equations extrapolated from the width of megalodon's teeth to make the improved estimates. The paper's lead author, Victor Perez, developed the revised methodology while he was a doctoral student at the Florida Museum of Natural History. He got the idea while teaching students, noticing a range of discrepancies in the results they were getting.
Students were supposed to calculate the size of megalodon based on the ancient fish's similarities to the modern great white shark. They utilized the commonly accepted method of linking the height of a shark's tooth to its total body length. As the press release from the Florida Museum of Natural History explains, this method involves locating the anatomical position of a tooth in the shark's jaw, measuring the tooth "from the tip of the crown to the line where root and crown meet," and using that number in an appropriate equation.
But while carrying out calculations in this way, some of Perez's students thought the shark would have been just 40 feet long, while others were calculating 148 feet. Teeth located toward the back of the mouth were yielding the largest estimates.
"I was going around, checking, like, did you use the wrong equation? Did you forget to convert your units?" said Perez, currently the assistant curator of paleontology at the Calvert Marine Museum in Maryland. "But it very quickly became clear that it was not the students that had made the error. It was simply that the equations were not as accurate as we had predicted."
Found in North Carolina, these 46 fossils are the most complete set of megalodon teeth ever excavated. Credit: Jeff Gage/Florida Museum
The new approach
Perez's math exercise demonstrated that the equations in use since 2002 were generating different size estimates for the same shark based on which tooth was being measured. Because megalodon teeth are most often found as standalone fossils, Perez focused on a nearly complete set of teeth donated by a fossil collector to design a new approach.
Perez also had help from Teddy Badaut, an avocational paleontologist in France, who suggested measuring tooth width instead of height, reasoning that width would be proportional to the length of the shark's body. Another collaborator on the revised method was Ronny Maik Leder, then a postdoctoral researcher at the Florida Museum, who helped develop the new set of equations.
The research team analyzed the widths of fossil teeth from 11 individual sharks of five species, including megalodon and modern great white sharks, and created a model that connects tooth width to jaw size for each species.
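The published equations aren't reproduced in this article, but the general shape of such a model is a simple regression from tooth width to body size. The sketch below uses invented coefficients and measurements purely to illustrate the idea; the actual study fit species-specific equations to real specimens.

```python
# Illustrative sketch of a tooth-width-to-body-length model of the kind
# described above. The slope, intercept, and measurements are invented;
# the published study derived its own species-specific equations.
def estimate_body_length_m(tooth_width_cm: float,
                           slope: float = 1.4,
                           intercept: float = 0.0) -> float:
    """Estimate body length in meters from tooth crown width in centimeters."""
    return slope * tooth_width_cm + intercept

for width_cm in (8.0, 10.5, 12.0):  # hypothetical megalodon tooth widths
    length = estimate_body_length_m(width_cm)
    print(f"Tooth width {width_cm:.1f} cm -> ~{length:.1f} m body length")
```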
"I was quite surprised that indeed no one had thought of this before," shared Leder, who is now director of the Natural History Museum in Leipzig, Germany. "The simple beauty of this method must have been too obvious to be seen. Our model was much more stable than previous approaches. This collaboration was a wonderful example of why working with amateur and hobby paleontologists is so important."
Why use teeth?
In general, almost nothing of the super-shark survives to this day other than a few vertebrae and a large number of big teeth. The megalodon's skeleton was made of lightweight cartilage that decomposed after death. But teeth, with enamel that preserves very well, are "probably the most structurally stable thing in living organisms," Perez said. Considering that a megalodon lost thousands of teeth during its lifetime, these are the best resources we have for learning about these long-gone giants.
Researchers suggest the megalodon's large jaws were very thick, made for grabbing prey and breaking its bones, exerting a bite force of 108,500 to 182,200 newtons.
Megalodon tooth compared to two great white shark teeth. Credit: Brocken Inaglory / Wikimedia.
Limitations of the new model
While the new model is better than previous methods, it's still far from perfect in precisely figuring out the sizes of animals which lived so long ago and left behind few if any full remains. Because individual sharks come in a variety of sizes, Perez warned that even their new estimates have an error range of about 10 feet when it comes to the largest animals.
Other ambiguities may affect the results, such as the width of the megalodon's jaw and the size of the gaps between its teeth, neither of which are accurately known. "There's still more that could be done, but that would probably require finding a complete skeleton at this point," Perez pointed out.
How did the megalodon go extinct?
Environmental changes that led to fluctuations in sea levels and disturbed ecosystems in the oceans likely led to the demise of these enormous ancient sharks. They were just too big to be sustained by diminishing food resources, says the ReefQuest Centre for Shark Research.
A 2018 study suggested that a supernova 2.6 million years ago hit Earth's atmosphere with so much cosmic energy that it resulted in climate change. The cosmic rays, which included particles called muons, might have caused a mass extinction of giant ocean animals (the megafauna), including the megalodon, by causing mutations and cancer.
Scientists, led by Adrian Melott, professor emeritus of physics and astronomy at the University of Kansas, estimated that "the cancer rate would go up about 50 percent for something the size of a human — and the bigger you are, the worse it is. For an elephant or a whale, the radiation dose goes way up," as he explained in a press release.
We explore the history of blood types and how they are classified to find out what makes the Rh-null type important to science and dangerous for those who live with it.
- Fewer than 50 people worldwide have 'golden blood' — or Rh-null.
- Blood is considered Rh-null if it lacks all of the 61 possible antigens in the Rh system.
- It's also very dangerous to live with this blood type, as so few people have it.
Golden blood sounds like the latest in medical quackery. As in, get a golden blood transfusion to balance your tantric midichlorians and receive a free charcoal ice cream cleanse. Don't let the New-Agey moniker throw you. Golden blood is actually the nickname for Rh-null, the world's rarest blood type.
As Mosaic reports, the type is so rare that only about 43 people have been reported to have it worldwide, and until 1961, when it was first identified in an Aboriginal Australian woman, doctors assumed embryos with Rh-null blood would simply die in utero.
But what makes Rh-null so rare, and why is it so dangerous to live with? To answer that, we'll first have to explore why hematologists classify blood types the way they do.
A (brief) bloody history
Our ancestors understood little about blood. Even the most basic of blood knowledge — blood inside the body is good, blood outside is not ideal, too much blood outside is cause for concern — escaped humanity's grasp for an embarrassing number of centuries.
Absent this knowledge, our ancestors devised less-than-scientific theories as to what blood was, theories that varied wildly across time and culture. To pick just one: the physicians of Shakespeare's day believed blood to be one of four bodily fluids, or "humors" (the others being black bile, yellow bile, and phlegm).
Handed down from ancient Greek physicians, humorism stated that these bodily fluids determined someone's personality. Blood was considered hot and moist, resulting in a sanguine temperament. The more blood people had in their systems, the more passionate, charismatic, and impulsive they would be. Teenagers were considered to have a natural abundance of blood, and men had more than women.
Humorism led to all sorts of poor medical advice. Most famously, Galen of Pergamum used it as the basis for his prescription of bloodletting. Sporting a "when in doubt, let it out" mentality, Galen declared blood the dominant humor and bloodletting an excellent way to balance the body. Blood's relation to heat also made it a go-to treatment for fever.
While bloodletting remained common until well into the 19th century, William Harvey's discovery of the circulation of blood in 1628 would put medicine on its path to modern hematology.
Soon after Harvey's discovery, the earliest blood transfusions were attempted, but it wasn't until 1665 that the first successful transfusion was performed, by British physician Richard Lower. Lower's operation was between dogs, and his success prompted physicians like Jean-Baptiste Denis to try to transfuse blood from animals to humans, a process called xenotransfusion. The deaths of human patients ultimately led to the practice being outlawed.
The first successful human-to-human transfusion wouldn't be performed until 1818, when British obstetrician James Blundell managed it to treat postpartum hemorrhage. But even with a proven technique in place, in the following decades many blood-transfusion patients continued to die mysteriously.
Enter Austrian physician Karl Landsteiner, who began his work to classify blood groups in 1901. Building on the work of Leonard Landois, the physiologist who showed that when the red blood cells of one animal are introduced into another animal's blood, they clump together, Landsteiner thought a similar reaction might occur in human-to-human transfusions, which would explain why transfusion success was so spotty. In 1909, he classified the A, B, AB, and O blood groups, and for his work he received the 1930 Nobel Prize in Physiology or Medicine.
What causes blood types?
It took us a while to grasp the intricacies of blood, but today, we know that this life-sustaining substance consists of:
- Red blood cells — cells that carry oxygen and remove carbon dioxide throughout the body;
- White blood cells — immune cells that protect the body against infection and foreign agents;
- Platelets — cells that help blood clot; and
- Plasma — a liquid that carries salts and enzymes.
Each component has a part to play in blood's function, but the red blood cells are responsible for our differing blood types. These cells have proteins covering their surface called antigens, and the presence or absence of particular antigens determines blood type: type A blood has only A antigens, type B only B, type AB both, and type O neither. Red blood cells sport another antigen called the RhD protein. When it is present, a blood type is said to be positive; when it is absent, it is said to be negative. The typical combinations of A, B, and RhD antigens give us the eight common blood types (A+, A-, B+, B-, AB+, AB-, O+, and O-).
Blood antigen proteins play a variety of cellular roles, but recognizing foreign cells in the blood is the most important for this discussion.
Think of antigens as backstage passes to the bloodstream, while our immune system is the doorman. If the immune system recognizes an antigen, it lets the cell pass. If it does not recognize an antigen, it initiates the body's defense systems and destroys the invader. So, a very aggressive doorman.
While our immune systems are thorough, they are not too bright. If a person with type A blood receives a transfusion of type B blood, the immune system won't recognize the new substance as a life-saving necessity. Instead, it will consider the red blood cells invaders and attack. This is why so many people either grew ill or died during transfusions before Landsteiner's brilliant discovery.
This is also why people with O negative blood are considered "universal donors." Since their red blood cells lack A, B, and RhD antigens, immune systems have no way to recognize these cells as foreign and so leave them well enough alone.
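Following the doorman analogy, compatibility can be sketched as a simple subset check: a donation passes only if the donor's cells carry no antigen that is foreign to the recipient. This is a deliberate simplification limited to the three antigens discussed here, and every name in it is illustrative; real cross-matching involves many more antigens and antibodies.

```python
# Illustrative sketch of the "doorman" rule, restricted to the A, B,
# and RhD antigens. Not a substitute for real compatibility testing.

ANTIGENS = {
    "O-":  set(),          "O+":  {"RhD"},
    "A-":  {"A"},          "A+":  {"A", "RhD"},
    "B-":  {"B"},          "B+":  {"B", "RhD"},
    "AB-": {"A", "B"},     "AB+": {"A", "B", "RhD"},
}

def compatible(donor: str, recipient: str) -> bool:
    # The doorman attacks anything it doesn't recognize, so every donor
    # antigen must already be present on the recipient's own cells.
    return ANTIGENS[donor] <= ANTIGENS[recipient]

print(compatible("O-", "AB+"))  # True: O- cells carry none of these antigens
print(compatible("B+", "A+"))   # False: the B antigen would be attacked
```

The same subset logic previews the next section: blood with zero Rh antigens is, trivially, free of foreign Rh antigens for every recipient, which is what makes Rh-null blood so widely acceptable within the Rh system.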
How is Rh-null the rarest blood type?
Let's return to golden blood. In truth, the eight common blood types are an oversimplification of how blood types actually work. As Smithsonian.com points out, "[e]ach of these eight types can be subdivided into many distinct varieties," resulting in millions of different blood types, each classified by a multitude of antigen combinations.
Here is where things get tricky. The RhD protein mentioned earlier is only one of 61 potential antigens in the Rh system. Blood is considered Rh-null if it lacks all 61 possible Rh antigens. This not only makes it rare; it also means it can be accepted by anyone with a rare blood type within the Rh system.
This is why it is considered "golden blood." It is worth its weight in gold.
As Mosaic reports, golden blood is incredibly important to medicine but also very dangerous to live with. If an Rh-null carrier needs a blood transfusion, they can find it difficult to locate a donor, and blood is notoriously difficult to transport internationally. Rh-null carriers are encouraged to donate blood as insurance for themselves, but with so few donors spread across the world and limits on how often they can donate, an altruistic burden falls on the select few who agree to donate for others.
Some bloody good questions about blood types
A nurse takes blood samples from a pregnant woman at the North Hospital (Hopital Nord) in Marseille, southern France. (Photo by Bertrand Langlois / AFP)
There remain many mysteries regarding blood types. For example, we still don't know why humans evolved the A and B antigens. Some theories point to these antigens as a byproduct of the diseases various populations contracted throughout history. But we can't say for sure.
In this absence of knowledge, various myths and questions have grown up around blood types in the popular consciousness. Here are some of the most common, along with their answers.
Do blood types affect personality?
Japan's blood type personality theory is a contemporary resurrection of humorism. The idea is that your blood type directly shapes your personality: type A carriers are kind and fastidious, while type B carriers are optimistic and do their own thing. However, a 2003 study sampling 180 men and 180 women found no relationship between blood type and personality.
The theory makes for a fun question on a Cosmopolitan quiz, but that's as accurate as it gets.
Should you alter your diet based on your blood type?
Remember Galen of Pergamum? In addition to bloodletting, he prescribed certain foods to his patients depending on which humors needed to be balanced. Wine, for example, was considered a hot and dry drink, so it was prescribed to treat colds. The belief that your diet should complement your blood type is, in spirit, yet another holdover of humorism.
Created by Peter J. D'Adamo, the Blood Type Diet argues that one's diet should match one's blood type. Type A carriers should eat a meat-free diet of whole grains, legumes, fruits, and vegetables; type B carriers should eat green vegetables, certain meats, and low-fat dairy; and so on.
However, a University of Toronto study analyzed data from 1,455 participants and found no evidence to support the theory. While people can lose weight and become healthier on the diet, that probably has more to do with eating all those leafy greens than with blood type.
Are there links between blood types and certain diseases?
There is evidence to suggest that different blood types may raise or lower the risk of certain diseases. One analysis suggested that type O blood decreases the risk of stroke and heart attack, while AB blood appears to increase it. On the other hand, type O carriers have a greater chance of developing peptic ulcers and skin cancer.
None of this is to say that your blood type foredooms your medical future. Many factors, such as diet and exercise, influence your health, and likely to a greater extent than blood type does.
What is the most common blood type?
In the United States, the most common blood type is O+; roughly one in three people sports it. Of the eight well-known blood types, the least common is AB-. Only one in 167 people in the U.S. has it.
Do animals have blood types?
They most certainly do, but their blood types are not the same as ours. This difference is why those 17th-century patients who thought, "Animal blood, now that's the ticket!" ultimately had their tickets punched. Blood types are distinct between species, and, unhelpfully, scientists sometimes use the same nomenclature to describe them. Cats, for example, have A and B antigens, but these are not the same A and B antigens found in humans.
Interestingly, xenotransfusion is making a comeback. Scientists are working to genetically engineer the blood of pigs in the hope of producing human-compatible blood.
Scientists are also looking into creating synthetic blood. If they succeed, they may be able to ease the current blood shortage, while also devising a way to create blood for rare blood type carriers. While this may make golden blood less golden, it would certainly make it easier to live with.

* While antigens are typically proteins, they can be other molecules as well, such as polysaccharides.
Milgram's experiment is rightly famous, but does it show what we think it does?
- In the 1960s, Stanley Milgram was sure that good, law-abiding Americans would never be able to follow orders the way the Germans had during the Holocaust.
- His experiments proved him spectacularly wrong. They showed just how many of us are willing to do evil if only we're told to by an authority figure.
- Yet, parts of the experiment were set up in such a way that we should perhaps conclude something a bit more nuanced.
Holding a clipboard and wearing a lab coat makes you a very powerful person. Add in a lanyard and a confident voice, and you're pretty much in Ocean's Eleven.
Though we believe ourselves to be contrarians, most of us like to obey authority. We answer questions, help with any number of tasks, and obey commands unthinkingly. The vast majority of the time, this is relatively harmless and even requisite for a functioning society, but it can also lead humanity to very dark places.
It could happen here
As we've seen with Asch's experiments on conformity, the post-World War II community was determined to understand how and why the Holocaust took place. In the wake of the trial of Adolf Eichmann, the American media and public came to see German society as a special kind of monster, uniquely willing to follow orders unthinkingly, at odds with any sense of duty or morality.
Into this came Stanley Milgram. In 1961, Milgram designed a series of experiments to show what he believed: that the German people were more susceptible to authoritarianism than Americans. He believed, as a lot of people did, that the American people would never be capable of such horrendous evil.
The experiment was to be set up in two stages: the first would be on American subjects, to gauge how far they would obey orders; the second would be on Germans, to prove how much they differed. The results stopped Milgram in his tracks.
Shock, shock, horror
Milgram wanted to ensure that his experiment involved as broad and diverse a group of people as possible. In addition to testing the American vs. German mindset, he wanted to see how much age, education, employment, and so on affected a person's willingness to obey orders.
So, the original 40 participants he gathered came from a wide spectrum of society, and each was told that they were taking part in a "memory test" meant to determine the extent to which punishment affects learning and the ability to memorize.
The experiment involved three people. First, there was the "experimenter," dressed in a lab coat, who gave instructions and prompts. Second, there was an actor who played the "learner." Third, there was the participant, who believed they were acting as the "teacher" in the memory test. The apparent setup was that the learner had to recall word pairs they had been taught, and whenever they got an answer wrong, the teacher had to administer an electric shock. (The teachers, that is, the participants, were given a shock themselves so they would know what kind of pain the learner would experience.) At first, the shock was set at 15 volts.
The learner (actor) repeatedly made mistakes in each session, and the teacher was told to increase the voltage each time. A tape recording was played on which the learner (apparently) cried out in pain. As the session went on, the learner would plead and beg for the shocks to stop. The teacher was told to keep increasing the voltage as punishment, up to a level explicitly described as fatal, not least because the learner desperately claimed to have a heart condition.
The question Milgram wanted answered: how far would his participants go?
Just obeying orders
The results were surprising. Sixty-five percent of the participants were willing to administer a 450-volt shock described as lethal, and all of them administered a 300-volt shock described as traumatically painful. It bears repeating: this occurred despite the learner (actor) begging the teacher (participant) to stop.
In the studies that came after, across a variety of different setups, a similar figure came up again and again: roughly two out of three people would be willing to kill someone if told to by an authority figure. Milgram showed that people of all genders, ages, and nationalities were depressingly capable of inflicting incredible pain, or worse, on innocent people.
Major limitations in Milgram's experiment
Milgram took many steps to make sure that his experiment was rigorous and fair. He used the same tape recording of the "learner" screaming, begging, and pleading for all participants. He made sure the experimenters used only the same four prompts each time when participants were reluctant or wanted to stop. He even made sure that he himself was not present at the experiment, lest he interfere with the procedure (something Philip Zimbardo did not do).
But, does the Milgram experiment actually prove what we think it does?
First, the experimenters were permitted to remind the participants that they were not responsible for what they did and that the team would take full blame. This, of course, does not make the study any less shocking, but it does perhaps change the scope of the conclusions. Perhaps the experiment reveals more about our ability to surrender responsibility and our willingness simply to become a tool. The conclusion is still pretty depressing, but it shows what we are capable of when offered absolution rather than when simply following orders.
Second, the experiment took place in a single hour, with very little time to deliberate or to talk things over with someone. In real situations, like the Holocaust, the perpetrators had ample time (years) to reflect on their actions, and yet they still chose to turn up every day. Milgram perhaps highlights only how far we'll go in the heat of the moment.
Finally, the findings do not tell the whole tale. The participants did not shock the learner with sadistic glee. They all showed signs of serious distress and anxiety, such as nervous laughing fits. Some even had seizures. These were not willing accomplices but participants who were, in effect, pressured into acting a certain way. (Since then, many scientists have argued that Milgram's experiment was hugely unethical.)
The power of authority
That all being said, there's a reason why Milgram's experiment stays with us today. Whether it's drilled into us by evolution or by society, it seems that humans are capable of doing terrible things if only we are told to do so by someone in power, or, at the very least, when we don't feel responsible for the consequences.
One silver lining to Milgram's work is that it can inoculate us against such drone-like behavior. It can help us resist. Simply knowing how far we can be manipulated makes it easier to say, "No."