Surprising Science

Can we trust studies when humans have a vested interest in the outcome?

Lack of replication is a serious problem in science. So far, no one has an answer.

Though rumor had it that citrus cured sailors of scurvy—crews were aware of this as early as 1497—it was not until James Lancaster’s 1601 voyage to Sumatra that this bit of folk wisdom was put to the test. With four ships under his command, Lancaster decided to try a little experiment: the men on one ship received regular rations of lemon juice; the crews of the other three went without.


We know the results. Men on those three ships developed scurvy. More importantly, Lancaster inadvertently created the first scientific trial. Nearly a century and a half later, Scottish physician James Lind is credited with the first clinical trial (also studying scurvy); a few decades onward, in 1784, Benjamin Franklin and Antoine Lavoisier performed the first blind experiment in France to test Franz Mesmer’s theory of animal magnetism. (It doesn’t exist.)

The double-blind method was introduced so that researchers would not influence volunteers by inadvertently (or knowingly) guiding them to make certain claims. For nearly two centuries, this type of clinical study has been the gold standard of scientific research. Whenever you hear the phrase “studies show…,” the research behind it is not considered valid unless it was conducted in this manner.

Yet in the last few decades, a number of problems have arisen. As much as we’d like to believe that data are data and humans merely report them, researcher bias has led to numerous cases of selective reporting—publishing only the studies that confirm the initial hypothesis. This is especially problematic with pharmaceutical studies, which are often funded by corporations with a vested interest in getting the result they’d like to advertise.

Statistician Theodore Sterling noticed a troubling trend as far back as 1959: 97 percent of psychological studies confirmed the effect they initially set out to prove. This is an unbelievable number because, well, it shouldn’t be believed. The bar for a positive trial was set in the 1920s by mathematician Ronald Fisher, who held that a result counts as “significant” if it would be produced by chance less than 5 percent of the time when no real effect exists. (In statistics, a 95 percent confidence level is standard, though a 99 percent level is more stringent.) Incredibly, Fisher chose this threshold not out of confidence, but convenience:

[Fisher] picked five per cent as the boundary line, somewhat arbitrarily, because it made pencil and slide-rule calculations easier.
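To see how that 5 percent cutoff combines with selective reporting, consider a small simulation, a sketch in Python with illustrative numbers rather than Sterling’s data. Every simulated study tests an effect that does not exist, so each “significant” result is a false positive; a journal that prints only the significant results then looks uniformly positive.

import math
import random

random.seed(42)

def null_study(n=50):
    """One study of an effect that does not exist: two groups drawn
    from the same normal distribution, compared with a two-sided
    z-test at Fisher's 5 percent level."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    diff = sum(a) / n - sum(b) / n
    se = math.sqrt(2 / n)          # standard error of the mean difference
    return abs(diff / se) > 1.96   # "significant" at p < 0.05

studies = [null_study() for _ in range(10_000)]
print(f"Significant results among null studies: {sum(studies) / len(studies):.1%}")  # ~5%

# Selective reporting: a journal that publishes only the "significant"
# results hides the other ~95 percent in the file drawer, so the
# published record looks entirely positive even though no effect exists.
published = [s for s in studies if s]
print(f"Positive share of the published record: {sum(published) / len(published):.0%}")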

Even more harrowing is the lack of replication. Hiding data, as many pharmaceutical companies have been shown to do, is bad enough, but beyond that, no study should be considered definitive until it has been repeated by unaffiliated parties. By the time research has been released to the public, however, we tend to take it at face value. And when conflicting evidence is presented, we often reject claims that contradict what we already believe.

One such case is sexual selection and symmetry, the notion that animals (including us) choose mates based on symmetrical features. This notion was planted in the public consciousness in 1991, after a study on barn swallows speculated that females chose males with long, symmetrical tail feathers, an idea quickly extrapolated to humans with symmetrical facial features. By 1997, this study had been replicated dozens of times, resulting in an overall 80 percent reduction in effect size. Still, the myth persists.
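That shrinking effect has a statistical explanation as well as a psychological one: small early studies tend to get published only when they overshoot, and larger replications regress toward the true value. Here is a minimal sketch of this “winner’s curse,” with invented numbers rather than the barn-swallow data.

import random

random.seed(7)

TRUE_EFFECT = 0.2  # a real but modest effect, in standard-deviation units

def observed_effect(n):
    """Noisy estimate of the true effect from a study with n subjects
    per group; the standard error shrinks as n grows."""
    se = (2 / n) ** 0.5
    return random.gauss(TRUE_EFFECT, se)

# Early, small studies (n = 20): with se ~ 0.32, an estimate must exceed
# ~1.96 * se ~ 0.62 to look "significant", more than triple the true
# effect, so the studies that get noticed are the ones that overshot.
threshold = 1.96 * (2 / 20) ** 0.5
early = [e for e in (observed_effect(20) for _ in range(100_000)) if e > threshold]
print(f"Mean published early effect: {sum(early) / len(early):.2f}")  # ~0.77

# Later replications (n = 200), reported regardless of outcome, regress
# toward the truth: the "decline" is the early inflation washing out.
late = [observed_effect(200) for _ in range(100_000)]
print(f"Mean replication effect:     {sum(late) / len(late):.2f}")    # ~0.20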

Barn swallows and one-night stands are one thing, but a similar phenomenon has occurred with SSRIs. Since the introduction of fluoxetine in 1987, this class of antidepressants has been the go-to treatment for depression and anxiety. Yet in the intervening decades their measured efficacy has waned, even as prescription rates continue to skyrocket. Drugs that were initially intended to be used for a limited period of time are being prescribed for decades. The side effects are killing the people these drugs were designed, or at least marketed, to save.

Another realm susceptible to dubious claims is acupuncture. As Jonah Lehrer points out, between 1966 and 1995, 47 out of 47 studies conducted in China, Taiwan, and Japan concluded that it was an effective treatment. All three nations have long believed in acupuncture’s efficacy. Americans are a bit more skeptical: of the 94 trials conducted during the same period, only 56 percent found it effective. Lehrer continues:

This wide discrepancy suggests that scientists find ways to confirm their preferred hypothesis, disregarding what they don’t want to see. Our beliefs are a form of blindness.
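The arithmetic behind Lehrer’s point is stark. If acupuncture trials truly came up positive at the rate seen in the skeptical trials, roughly 56 percent of the time, a run of 47 positives in a row would be vanishingly improbable. A back-of-envelope check, treating the trials as independent coin flips:

# If each trial were an independent draw that comes up positive 56
# percent of the time, the chance of 47 positives in a row is tiny.
p_positive = 0.56
streak = p_positive ** 47
print(f"P(47 straight positive trials) = {streak:.1e}")  # ~1.5e-12

# Roughly one chance in a trillion: the 47-for-47 record is far easier
# to explain by bias than by luck.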

One fix to this blindness would be an open-source database in which intentions and investigations exist side by side. Instead of companies and research institutions with a stake in the results holding back what they don’t want to report or “curve-fitting” what they do, transparency would be built into the system from the outset. This might not eliminate the decline effect Lehrer describes (the tendency of measured effects to shrink upon replication), but it would hold researchers accountable for their intentions and results during every step of the process.
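One hypothetical shape such a registry entry might take, a sketch only, with invented field names rather than any existing system:

from dataclasses import dataclass
from typing import Optional

@dataclass
class RegisteredStudy:
    """A preregistration record: hypothesis and analysis plan are locked
    in before data collection, and the result is appended afterward,
    whether or not it confirms the hypothesis."""
    study_id: str
    funder: str                   # conflicts of interest declared up front
    hypothesis: str               # registered before any data exist
    analysis_plan: str            # fixed in advance, so no curve-fitting later
    result: Optional[str] = None  # filled in at the end, positive or not
    effect_size: Optional[float] = None

# A null result cannot quietly disappear: the registration already exists.
study = RegisteredStudy(
    study_id="2017-0042",
    funder="Example Pharma (hypothetical)",
    hypothesis="Drug X reduces symptom scores versus placebo",
    analysis_plan="Two-sided t-test, alpha = 0.05, n = 200 per arm",
)
study.result = "no significant difference"
study.effect_size = 0.03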

Of course, there’s still human belief to contend with: the pre-existing beliefs of researchers, the belief held by CEOs and board members that their data are proprietary, and the innumerable beliefs plastered across health blogs, where claims backed by no (or questionable) studies are championed as the final word on the subject.

The scientific method is one of the great inventions of our rational and imaginative minds. Being honest with the evidence is another story.

Stay in touch with Derek on Facebook and Twitter.
