Predicting the president: Two ways election forecasts are misunderstood

Everyone wants to predict who will win the 2020 presidential election. Here are two misconceptions to bust so people don't proclaim the death of data like they did in 2016.

Map: United States presidential election results by county, 2016.

  • There are two common misconceptions that muddy people's understanding of election forecasting, says Eric Siegel: Blaming the prognosticator and predicting candidates versus predicting voters.
  • In 2016, Nate Silver's forecast put about 70% odds on Clinton winning. Despite people's shock at the election results, that forecast was not wrong.
  • As predictions for the 2020 presidential election ramp up, it's important to understand what election forecasting means and to bust the misconceptions that warp our expectations.

When it's a presidential election year, speculation is in the cards. It's the national pastime. Everyone wants to predict who'll win.

But, man, did people mismanage their own expectations leading up to the 2016 presidential election, when Donald Trump defeated Hillary Clinton.

This was due in no small part to the misinterpretation of election forecasts. There are two common misconceptions, and correcting them comes down to the fundamental idea of what a probability is.

In 2016, Nate Silver's forecast put about 70% odds on Clinton winning. Who's Nate? There is no better-known person of prediction in this country, no more famous prognostic quant, than former New York Times blogger and political poll aggregator Nate Silver, who gained fame by correctly predicting the outcome of the 2012 presidential election in every individual state.

Presently, his up-to-the-minute forecast of the 2020 Democratic Primary is live, and his forecast of the 2020 general election is forthcoming.

By the way, number crunching serves more than just to forecast presidential elections – it also helps win them.

Misconception #1: Blaming the prognosticator

Nate Silver

Nate Silver speaks at a panel in New York City.

Photo: Krista Kennell/Patrick McMullan via Getty Images

When Clinton lost in 2016, everyone was like, "OMG, epic fail!" The reasoning was, well, the 70% forecast that she would win had proven to be wrong, so the problem must have been either bad polling data or something about Silver's model, or both.

But no – the forecast wasn't bad! "70%" does not mean Clinton will clearly win. And a 30% chance of Trump winning isn't a long shot at all. Something that happens 30% of the time is really pretty common and normal. And that's what a probability is. It means that, in a situation just like this, it will happen 30 out of 100 times, that is, 3 out of 10 times. Those aren't long odds.
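You can see for yourself how ordinary a 30% event is with a quick simulation. This is a minimal sketch (the seed and trial counts are arbitrary choices for the example, not anything from Silver's model):

```python
import random

random.seed(42)

# Simulate 10 races in which the underdog has a 30% chance of winning.
# A 30% event is no long shot: expect roughly 3 upsets out of 10.
trials = 10
upsets = sum(random.random() < 0.30 for _ in range(trials))
print(f"Underdog won {upsets} of {trials} simulated races")

# Over many repetitions, the observed frequency converges to 0.30 --
# which is exactly what the probability means.
many = 100_000
freq = sum(random.random() < 0.30 for _ in range(many)) / many
print(f"Long-run frequency of the 30% event: {freq:.3f}")
```

Run it a few times with different seeds and the underdog keeps winning at a steady clip: that is all "30%" ever claimed.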

And Clinton's 70% probability is actually closer to a 50/50 toss-up than it is to a 100% "sure thing." When you see "70%," the take-away isn't that Clinton is pretty much a shoo-in. No, the take-away is, "I dunno." Lots of uncertainty.

I believe many people saw that "70%," and the thought process was like, "70% is a passing grade, so Clinton will definitely pass, so Clinton will definitely win."

Prediction is hard. To be more specific, there are many situations where the outcome is uncertain and we just can't be confident about what to expect. Nate Silver's model looked at the data and said this one was one of those situations. Now, a confident prediction may feel more satisfying. We all want definitive answers. But it's better for you to shrug your shoulders than to express confidence without a firm basis to do so, and it's better for the math to do the same thing.

Press the press to give it a rest

So, I feel kind of bad for Nate Silver. He totally got a bad rap. Most of the other prominent models actually put Clinton's chances much higher – between 92% and 99%. Those models exhibited overconfidence. Silver's model didn't strongly commit. It expressed, first and foremost, uncertainty.

Even the Harvard Gazette, in an article that ultimately defended Silver, put it this way: "Even leading statistical analysis site FiveThirtyEight [that's Silver's site] gave Donald Trump a less than 1 in 3 chance of winning. So when he surged to victory... stunned political pundits blamed pollsters and forecasters, proclaiming 'the death of data.'"

It's like the journalist couldn't wrap her head around the fact that "less than 1 in 3" – specifically, a 30% chance – isn't remote odds. If there were a 30% chance a car would crash, you obviously wouldn't get in the car.

Nate Silver wasn't betting his life on one candidate or the other. His job as a forecaster wasn't to magically predict like a crystal ball. It was to tell you the odds as precisely as possible.

When asked by the same journalist whether he was saying he diverged from the general sentiment that polling had been a "massive failure," Silver said, "Not only am I not on that bandwagon, I think it's pretty irresponsible when people in the mainstream media perpetuate that narrative... We think our general election model was really good. It said there was a pretty good chance of Trump winning... if everyone says 'Trump has no chance' and you use modeling to say 'Hey, look at this more rigorously; he actually has a pretty good chance. Not 50 percent, but 30 percent is pretty good.' To me, that's a highly successful application of modeling."

I even remember hearing him have to talk down his coworkers on his own podcast just before the election, who were talking about Clinton's election as a done deal. It's like nobody understands what "30%" means.

Forecasting isn't futurism

When you're a contestant on the TV quiz show Jeopardy!, you only buzz in when you think you know the answer to the question, because if you get it wrong, you get penalized. So you gauge your own confidence, your own certainty that the answer you have will turn out to be correct. IBM's Watson computer, which competed against human champions on that show, did exactly that. Its predictive model served not only to select the answer to a question; it also provided a gauge of confidence in that answer, which directly informed whether or not the computer buzzed in to answer at all.
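The buzz-in logic above can be sketched in a few lines. This is an illustrative toy in the spirit of Watson's approach – the threshold value, candidate answers, and confidence numbers are all made-up assumptions, not Watson's actual parameters:

```python
def should_buzz(confidence: float, threshold: float = 0.5) -> bool:
    """Buzz in only when confidence in the best answer clears the
    threshold; otherwise stay silent and avoid the wrong-answer penalty."""
    return confidence >= threshold

# Candidate answers paired with model-estimated confidence (illustrative).
candidates = [("Toronto", 0.14), ("Chicago", 0.55)]
best_answer, confidence = max(candidates, key=lambda c: c[1])

if should_buzz(confidence):
    print(f"Buzz in with: {best_answer} (confidence {confidence:.0%})")
else:
    print("Stay silent")
```

The key design point is that the model outputs two things – an answer and a degree of belief in it – and the second output is what governs the decision to act.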

Here's my big prediction: Futurism will be entirely out of style within 20 years. Ha-ha – get it? My point is, forecasts aren't like futurism. Futurism is the practice of putting your entire reputation down on one confident bet. In contrast, forecasting judiciously allows for uncertainty – it even calls for it, as needed.

Misconception #2: Predicting candidates versus predicting voters

Hillary Clinton and Donald Trump at the first presidential debate of the 2016 presidential election at Hofstra University

Hillary Clinton and Donald Trump at the first presidential debate of the 2016 presidential election at Hofstra University

Photo: Getty Images

The other common election forecast misconception is that the "70%" estimated what share of the vote Clinton would get. That's very much not the same thing as her chances of winning. Poll aggregators like Silver forecast which candidate will win; any forecast they also make about the percentage of voters is secondary and distinct from the main probabilistic forecast.

After all, presidential races are much closer than 70/30. The 2016 race came out at 46% for Trump against 48% for Clinton in the nationwide popular vote.

Now, if the data had us expecting one candidate to actually get 70% of the votes nationwide, then their chances of winning would indeed be close to a sure thing – and a landslide victory at that. In that case, maybe they would actually end up getting less, say 60% – but that's still a likely electoral college win. And the chances are particularly slim that the outcome would land even further from the expected 70%, all the way below 50%, so losing the election would be a long shot, perhaps only a 1% chance. So, if you forecast that a candidate will get 70% of the votes, that may translate to more like a 99% probability of winning.
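The share-versus-probability distinction can be made concrete with a toy calculation. Here's a minimal sketch (not Silver's actual model): assume the final vote share is normally distributed around the polling-based expectation, with a standard deviation standing in for polling error. Both the 5-point error figure and the scenarios are illustrative assumptions:

```python
from statistics import NormalDist

def win_probability(expected_share: float, polling_error_sd: float) -> float:
    """Probability that the candidate's final share lands above 50%,
    under a simple normal model of polling error."""
    return 1 - NormalDist(expected_share, polling_error_sd).cdf(0.50)

# A candidate expected to win 70% of the vote is a near-certain winner:
print(f"Expected share 70%: win probability {win_probability(0.70, 0.05):.1%}")

# But a candidate expected to win 51% is close to a toss-up:
print(f"Expected share 51%: win probability {win_probability(0.51, 0.05):.1%}")
```

Under these assumptions, a 70% expected share yields a win probability of well over 99%, while a 51% expected share yields odds not far from 50/50 – which is exactly why a 70% *win probability* implies a race that is, in vote-share terms, quite close.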

Transforming polls to probabilities

Anyway, the 70% wasn't the expected proportion of votes. The expected proportion of votes is the input to Nate Silver's model, not the output. To be more precise, the model takes polls as input, which estimate how many people will vote for each candidate, and outputs a forecast: the probability that a given candidate will win.

An election poll does not constitute magical prognostic technology – it is plainly the act of voters explicitly telling you what they're going to do. It's a mini-election dry run.

But there's a craft to aggregating polls, one Silver has mastered adeptly. His model cleverly weighs a large number of poll results based on how many days or weeks old each poll is, the track record of the pollster, and other factors.
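A toy aggregator shows the idea of weighting by recency and pollster quality. To be clear, this is not Silver's actual weighting scheme – the decay rate, the ratings, and the poll numbers are all invented for the example:

```python
def aggregate(polls):
    """Weighted average of poll results.

    Each poll is (share, age_in_days, pollster_rating): newer polls and
    better-rated pollsters get more weight. The 0.9-per-day decay is an
    arbitrary illustrative choice.
    """
    weights = [rating * 0.9 ** age for _, age, rating in polls]
    shares = [share for share, _, _ in polls]
    return sum(w * s for w, s in zip(weights, shares)) / sum(weights)

polls = [
    (0.52, 1, 0.9),   # fresh poll, well-rated pollster
    (0.48, 14, 0.9),  # two-week-old poll, same pollster
    (0.55, 2, 0.4),   # fresh poll, poorly rated pollster
]
print(f"Aggregated vote share: {aggregate(polls):.3f}")
```

Notice how the fresh, well-rated poll dominates the average, while the stale poll and the low-quality pollster are discounted rather than discarded.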

So Silver's model turns poll results into a forecasted probability. It maps from one to the other. That's what a predictive model does in general. It takes the data you have as input, and formulaically transforms it to a probability of the outcome or behavior you're seeking to foresee.

Often, model probabilities come closer to 50% than 100%. They're uncertain, like when your Magic Eight Ball says, "The outlook is hazy." It can be hard to sit with and accept a lack of certainty. When the stakes are high, we'd prefer to feel confident, to know how it's going to turn out. Don't let that impulse draw you to a false narrative. Practice not knowing. Shrug your shoulders more. It's good for you.

- - -
Eric Siegel, Ph.D., founder of the Predictive Analytics World and Deep Learning World conference series and executive editor of The Machine Learning Times, makes the how and why of predictive analytics (aka machine learning) understandable and captivating. He is the author of the award-winning book Predictive Analytics: The Power to Predict Who Will Click, Buy, Lie, or Die, the host of The Dr. Data Show web series, a former Columbia University professor, and a renowned speaker, educator, and leader in the field. Follow him at @predictanalytic.


Are we really addicted to technology?

Fear that new technologies are addictive isn't a modern phenomenon.

Credit: Rodion Kutsaev via Unsplash
Technology & Innovation

This article was originally published on our sister site, Freethink, which has partnered with the Build for Tomorrow podcast to go inside new episodes each month. Subscribe here to learn more about the crazy, curious things from history that shaped us, and how we can shape the future.

In many ways, technology has made our lives better. Through smartphones, apps, and social media platforms we can now work more efficiently and connect in ways that would have been unimaginable just decades ago.

But as we've grown to rely on technology for a lot of our professional and personal needs, most of us are asking tough questions about the role technology plays in our own lives. Are we becoming too dependent on technology to the point that it's actually harming us?

In the latest episode of Build for Tomorrow, host and Entrepreneur Editor-in-Chief Jason Feifer takes on the thorny question: is technology addictive?

Popularizing medical language

What makes something addictive rather than just engaging? It's a meaningful distinction because if technology is addictive, the next question could be: are the creators of popular digital technologies, like smartphones and social media apps, intentionally creating things that are addictive? If so, should they be held responsible?

To answer those questions, we've first got to agree on a definition of "addiction." As it turns out, that's not quite as easy as it sounds.

"Over the past few decades, a lot of effort has gone into destigmatizing conversations about mental health, which of course is a very good thing," Feifer explains. It also means that medical language has entered into our vernacular — we're now more comfortable using clinical words outside of a specific diagnosis.

"We've all got that one friend who says, 'Oh, I'm a little bit OCD' or that friend who says, 'Oh, this is my big PTSD moment,'" Liam Satchell, a lecturer in psychology at the University of Winchester and guest on the podcast, says. He's concerned about how the word "addiction" gets tossed around by people with no background in mental health. An increased concern surrounding "tech addiction" isn't actually being driven by concern among psychiatric professionals, he says.

"These sorts of concerns about things like internet use or social media use haven't come from the psychiatric community as much," Satchell says. "They've come from people who are interested in technology first."

The casual use of medical language can lead to confusion about what is actually a mental health concern. We need a reliable standard for recognizing, discussing, and ultimately treating psychological conditions.

"If we don't have a good definition of what we're talking about, then we can't properly help people," Satchell says. That's why, Satchell argues, any definition of addiction we use needs to include the psychiatric criteria: experiencing distress, or significant family, social, or occupational disruption.

Too much reading causes... heat rashes?

But as Feifer points out in his podcast, both popularizing medical language and the fear that new technologies are addictive aren't totally modern phenomena.

Take, for instance, the concept of "reading mania."

In the 18th century, an author named J. G. Heinzmann claimed that people who read too many novels could experience something called "reading mania." This condition, Heinzmann explained, could cause many symptoms, including: "weakening of the eyes, heat rashes, gout, arthritis, hemorrhoids, asthma, apoplexy, pulmonary disease, indigestion, blocking of the bowels, nervous disorder, migraines, epilepsy, hypochondria, and melancholy."

"That is all very specific! But really, even the term 'reading mania' is medical," Feifer says.

"Manic episodes are not a joke, folks. But this didn't stop people a century later from applying the same term to wristwatches."

Indeed, an 1889 piece in the Newcastle Weekly Courant declared: "The watch mania, as it is called, is certainly excessive; indeed it becomes rabid."

Similar concerns have echoed throughout history about the radio, telephone, TV, and video games.

"It may sound comical in our modern context, but back then, when those new technologies were the latest distraction, they were probably really engaging. People spent too much time doing them," Feifer says. "And what can we say about that now, having seen it play out over and over and over again? We can say it's common. It's a common behavior. Doesn't mean it's the healthiest one. It's just not a medical problem."

Few today would argue that novels are in-and-of-themselves addictive — regardless of how voraciously you may have consumed your last favorite novel. So, what happened? Were these things ever addictive — and if not, what was happening in these moments of concern?

There's a risk of pathologizing normal behavior, says Joel Billieux, professor of clinical psychology and psychological assessment at the University of Lausanne in Switzerland, and guest on the podcast. He's on a mission to understand how we can suss out what is truly addictive behavior versus what is normal behavior that we're calling addictive.

For Billieux and other professionals, this isn't just a rhetorical game. He uses the example of gaming addiction, which has come under increased scrutiny over the past half-decade. The language used around the subject of gaming addiction will determine how behaviors of potential patients are analyzed — and ultimately what treatment is recommended.

"For a lot of people you can realize that the gaming is actually a coping (mechanism for) social anxiety or trauma or depression," says Billieux.

"Those cases, of course, you will not necessarily target gaming per se. You will target what caused depression. And then as a result, if you succeed, gaming will diminish."

In some instances, a person might legitimately be addicted to gaming or technology, and require the corresponding treatment — but that treatment might be the wrong answer for another person.

"None of this is to discount that for some people, technology is a factor in a mental health problem," says Feifer.

"I am also not discounting that individual people can use technology such as smartphones or social media to a degree where it has a genuine negative impact on their lives. But the point here to understand is that people are complicated, our relationship with new technology is complicated, and addiction is complicated — and our efforts to simplify very complex things, and make generalizations across broad portions of the population, can lead to real harm."

Behavioral addiction is a notoriously complex thing for professionals to diagnose — even more so since the latest edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-5), the book professionals use to classify mental disorders, introduced a new idea about addiction in 2013.

"The DSM-5 grouped substance addiction with gambling addiction — this is the first time that substance addiction was directly categorized with any kind of behavioral addiction," Feifer says.

"And then, the DSM-5 went a tiny bit further — and proposed that other potentially addictive behaviors require further study."

This might not sound like that big of a deal to laypeople, but its effect was massive in medicine.

"Researchers started launching studies — not to see if a behavior like social media use can be addictive, but rather, to start with the assumption that social media use is addictive, and then to see how many people have the addiction," says Feifer.

Learned helplessness

The assumption that a lot of us are addicted to technology may itself be harming us by undermining our autonomy and belief that we have agency to create change in our own lives. That's what Nir Eyal, author of the books Hooked and Indistractable, calls 'learned helplessness.'

"The price of living in a world with so many good things in it is that sometimes we have to learn these new skills, these new behaviors to moderate our use," Eyal says. "One surefire way to not do anything is to believe you are powerless. That's what learned helplessness is all about."

So if it's not an addiction that most of us are experiencing when we check our phones 90 times a day or are wondering about what our followers are saying on Twitter — then what is it?

"A choice, a willful choice, and perhaps some people would not agree or would criticize your choices. But I think we cannot consider that as something that is pathological in the clinical sense," says Billieux.

Of course, for some people technology can be addictive.

"If something is genuinely interfering with your social or occupational life, and you have no ability to control it, then please seek help," says Feifer.

But for the vast majority of people, thinking about our use of technology as a choice — albeit not always a healthy one — can be the first step to overcoming unwanted habits.

For more, be sure to check out the Build for Tomorrow episode here.
