Who's in the Video
Steven Pinker is an experimental psychologist who conducts research in visual cognition, psycholinguistics, and social relations. He grew up in Montreal and earned his BA from McGill and his PhD[…]

In his explanation of Bayes’ theorem, cognitive psychologist Steven Pinker highlights how this type of reasoning can help us determine the degree of belief we assign to a claim based on available evidence.

Bayes’ theorem takes into account the prior probability of a claim, the likelihood of the evidence given that the claim is true, and how common the evidence is regardless of whether the claim is true.

While Bayes’ theorem can be useful for making statistical predictions, Pinker cautions that it may not always be appropriate in situations where fairness and other moral considerations are important. Therefore, it’s crucial to consider when Bayes’ theorem is applicable and when it’s not.

STEVEN PINKER: The late, great astronomer and science popularizer Carl Sagan had a famous saying: "Extraordinary claims require extraordinary evidence." In this, he was echoing an argument by David Hume. Hume said, "Well, what's more likely, that the laws of the Universe as we've always experienced them are wrong, or that some guy misremembered something?" And these are all versions of a kind of reasoning that is called 'Bayesian,' after the Reverend Thomas Bayes. It just means: after you've seen all of the evidence, how much should you believe something? And it assumes that you don't just believe something or disbelieve it, you assign a degree of belief. We all want that, we don't wanna be black-and-white, dichotomous absolutists. We wanna calibrate our degree of belief to the strength of the evidence. Bayes' theorem is how you ought to do that.

Bayes' theorem, at first glance, looks kinda scary 'cause it's got all of these letters and symbols, but more importantly, conceptually, it's simple - and at some level, I think we all know it. The posterior probability, that is, your credence in an idea after looking at the evidence, can be estimated from the prior: that is, how much credence did the idea have even before you looked at that evidence? The prior should be based on everything that we know so far: on data gathered in the past, our best-established theories, anything that's relevant to how much you should believe something before you look at the new evidence. The second term is sometimes called the likelihood, and that refers to: if the hypothesis is true, how likely is it that you will see the evidence that you are now seeing? You then divide that product - the prior times the likelihood - by the commonness of the data, the probability of the data, which is: how often do you expect to see that evidence across the board, whether the idea you're testing is true or false? If the evidence is very common - for example, lots of things give people headaches and back pain - you don't diagnose some exotic disease whose symptoms happen to be back pain and headaches, just because so many different things can give you headaches and back pain. There's a cliche in medical education: if you hear hoofbeats outside the window, don't conclude that it's a zebra; it's much more likely to be a horse. And that's another way of getting people to take into account Bayesian priors.
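For reference, the relationship Pinker spells out in words is the standard statement of Bayes' theorem, with H for the hypothesis and E for the evidence (the notation here is ours, not from the talk):

```latex
P(H \mid E) = \frac{P(E \mid H)\,P(H)}{P(E)},
\qquad
P(E) = P(E \mid H)\,P(H) + P(E \mid \neg H)\,P(\neg H)
```

The numerator is the likelihood times the prior; the denominator is the overall commonness of the evidence, whether the hypothesis is true or not.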

There are many realms in life in which, if all we cared about was being optimal statisticians, we should apply Bayes' theorem - just plug the numbers in. But there are things in life other than making the best possible statistical prediction. And sometimes we legitimately say, "Sorry, you can't look at the base rate: rates of criminal violence, or rates of success in school." It's true you may not have the same statistical predictive power, but predictive power isn't the only thing in life. You may also want fairness. You may want to not perpetuate a vicious circle where some kinds of people, through disadvantage, might succeed less often - but then, if everyone assumes they'll succeed less often, they'll succeed even less often. It can also go too far the other way, as when someone says, "Well, only 10% of mechanical engineers are women, so there must be a lot of sexism in mechanical engineering programs that causes women to fail." And you might say, "Well, wait a second, what is the base rate of women who wanna be mechanical engineers in the first place?" There, if you're accusing lots of people of sexism without looking at the base rate, you might be making a lot of false accusations. I think we've got to think very carefully about the realms in which, morally, we want not to be Bayesians, and the realms in which we do wanna be Bayesian, such as journalism and social science, where we just wanna understand the world. It's one of the most touchy, difficult, and politically sensitive hot buttons out there. And that's a dilemma that faces us with all taboos, including forbidden base rates.

Still, we can't evade the responsibility of deciding when base rates are permissible and when they are forbidden. What Bayes' theorem says is that your degree of belief in a hypothesis should be determined by how likely the hypothesis is beforehand, before you even look at the evidence; by how likely it is, if the hypothesis is true, that you would see the evidence that you are seeing; scaled by how common that evidence is across the board, whether the hypothesis is true or false. If you could follow what I just said, you understand Bayes' theorem.
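To make the reasoning concrete, here is a minimal Python sketch of the "exotic disease" example Pinker mentions. The prevalence and symptom rates are made-up illustrative numbers, not figures from the talk:

```python
def posterior(prior, likelihood, evidence_prob):
    """Bayes' theorem: P(H|E) = P(E|H) * P(H) / P(E)."""
    return likelihood * prior / evidence_prob

# Assumed numbers: the disease is rare (prior), it almost always
# produces headaches and back pain (likelihood), but those symptoms
# are common for many other reasons (evidence probability).
prior = 0.001               # P(disease): 1 in 1,000 people
likelihood = 0.95           # P(symptoms | disease)
p_symptoms_healthy = 0.10   # P(symptoms | no disease)

# Total probability of seeing the evidence, disease or not:
evidence = likelihood * prior + p_symptoms_healthy * (1 - prior)

print(f"P(disease | symptoms) = {posterior(prior, likelihood, evidence):.4f}")
# ~0.0094: even with telltale symptoms, the rare diagnosis stays
# unlikely, because the evidence is common across the board -
# hoofbeats are usually horses, not zebras.
```

Because the evidence is roughly 100 times more common than the disease itself, the posterior stays below one percent despite the high likelihood - exactly the base-rate point the cliche is meant to teach.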
