
Can Science Be Trusted?

May 20, 2013, 9:57 AM

Can the scientific literature be trusted? In "Why Most Published Research Findings Are False," Dr. John P. A. Ioannidis, Professor of Medicine and Director of the Stanford Prevention Research Center at Stanford University School of Medicine, basically says no, it cannot.

Far from a kook or an outsider, Dr. Ioannidis is considered one of the world’s foremost experts on the credibility of medical research. His work has been published in top journals (where it is heavily cited) and his efforts were favorably reviewed in a 2010 Atlantic article called "Lies, Damned Lies, and Medical Science."

What kinds of analysis would allow Ioannidis to reach the conclusions he has reached? First, know that a great deal of work has been done in recent years to develop statistical methods for detecting publication bias. There are now such accepted methodologies as Begg and Mazumdar's rank correlation test, Egger's regression, Orwin's method, Rosenthal's "file drawer" calculation, and the now widely used "trim and fill" method of Duval and Tweedie. (Remarkably, at least four major software packages are available to help researchers doing meta-analyses detect publication bias.)
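To make one of these methods concrete: Egger's regression looks for funnel-plot asymmetry by regressing each study's standardized effect (effect divided by its standard error) on its precision (one over the standard error); an intercept far from zero is a signature of small-study bias. Here is a minimal sketch in Python with NumPy, run on simulated, bias-free data; the data and the bare-bones ordinary-least-squares fit are illustrative, not taken from Ioannidis's paper or any particular software package.

```python
import numpy as np

def egger_test(effects, std_errors):
    """Egger's regression test for funnel-plot asymmetry.

    Regress the standardized effect (effect / SE) on precision (1 / SE).
    An intercept far from zero suggests small-study asymmetry, one
    common signature of publication bias.
    """
    effects = np.asarray(effects, dtype=float)
    std_errors = np.asarray(std_errors, dtype=float)
    z = effects / std_errors          # standardized effects
    precision = 1.0 / std_errors      # large studies -> high precision
    # Ordinary least squares: z = intercept + slope * precision
    X = np.column_stack([np.ones_like(precision), precision])
    (intercept, slope), *_ = np.linalg.lstsq(X, z, rcond=None)
    return intercept, slope

# Simulated, bias-free literature: true effect 0.3, no suppression.
rng = np.random.default_rng(0)
se = rng.uniform(0.05, 0.5, size=200)
eff = rng.normal(0.3, se)
intercept, slope = egger_test(eff, se)
# With no publication bias, the intercept should land near zero
# and the slope near the true effect.
```

A full implementation would also report a significance test on the intercept; this sketch only recovers the point estimates.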

[Figure: funnel graph plotting effect size against standard error, for studies of Cognitive Behavioral Therapy]

There are many factors to consider when looking for publication bias. Take trial size. People who do meta-analyses of the scientific literature have long wanted a reasonable way to compensate for the size of individual studies, because if small studies (which often show large variances in their results) are given the same weight as larger, more statistically powerful ones, a handful of small studies with large effect sizes can unduly sway a meta-analysis. Aggravating this is the fact that studies showing a negative result are often rejected by journals or simply withheld from publication by their authors. When data go unpublished, the literature that does surface can give a distorted view of reality.
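The standard remedy for the trial-size problem is inverse-variance weighting: in a fixed-effect meta-analysis, each study's contribution is weighted by 1/SE², so the large, precise studies dominate the pooled estimate. A minimal illustration in Python (the study numbers below are invented for the example):

```python
import numpy as np

def inverse_variance_pool(effects, std_errors):
    """Fixed-effect meta-analysis: weight each study by 1 / SE^2,
    so large (low-variance) studies dominate the pooled estimate."""
    effects = np.asarray(effects, dtype=float)
    weights = 1.0 / np.asarray(std_errors, dtype=float) ** 2
    pooled = np.sum(weights * effects) / np.sum(weights)
    pooled_se = np.sqrt(1.0 / np.sum(weights))
    return pooled, pooled_se

# One large null-result study vs. three small studies with big effects:
effects = [0.02, 0.9, 1.1, 0.8]
ses     = [0.05, 0.40, 0.45, 0.50]
pooled, pooled_se = inverse_variance_pool(effects, ses)
# The unweighted mean of these effects is about 0.7, dominated by the
# small studies; the weighted estimate stays close to the large
# study's near-zero result.
```

This is exactly why unpublished negative studies are so damaging: the weighting scheme can only discount small noisy studies that actually appear in the literature.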

If you do a meta-analysis of a large enough number of studies and plot effect size on the x-axis and standard error on the y-axis, you get a "funnel graph" (see the graphic above, which is for studies involving Cognitive Behavioral Therapy). You expect to find a more-or-less symmetrical distribution of results around some average effect size, or failing that, at least a roughly equal number of data points on each side of the mean. Large studies tend to have small standard errors, so their data points sit high on the graph (the y-axis is conventionally inverted, running from high standard errors at the bottom to low ones at the top; see the illustration above). Small studies, of course, tend to have large standard errors.

What meta-analysis experts have found is that quite often, the higher a study's "standard error" (which is to say, the smaller the study), the more likely the study in question is to report a strongly positive result. So instead of a funnel graph with roughly equal data points on each side (which is what you expect statistically), you get a graph that's visibly lopsided to the right, indicating that publication bias (from non-publication of "bad results") is likely. Otherwise how do you account for the points mysteriously missing from the left side of the graph, in a graph that should (by statistical odds) have roughly equal numbers of points on both sides?
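You can watch this lopsidedness appear in a simulation. Below, a thousand hypothetical studies of a treatment with no true effect are generated, then passed through a publication filter that reliably accepts only "significant" positive results while letting most other results languish in file drawers. The full set of effects is symmetric around zero, but the "published" subset shifts visibly to the right. All the numbers here (study sizes, the 10% survival rate for non-significant results) are invented for illustration, not estimates from the literature.

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate 1,000 studies of a treatment with NO true effect.
n_per_study = rng.integers(10, 500, size=1000)
se = 1.0 / np.sqrt(n_per_study)     # SE shrinks as studies get larger
effect = rng.normal(0.0, se)        # observed effects, symmetric about 0

# Publication filter: "significant positive" results (z > 1.96) always
# get published; assume only 10% of the rest escape the file drawer.
z = effect / se
published = (z > 1.96) | (rng.random(1000) < 0.10)

# The full set of studies averages out near zero, but the published
# subset is shifted right -- the lopsided funnel described above.
full_mean = effect.mean()
pub_mean = effect[published].mean()
```

Plotting `effect[published]` against `se[published]` would reproduce the asymmetric funnel: the missing points are precisely the small, negative, unpublished studies.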

Small studies aren't always the culprits. Some meta-analyses, in some research fields, show funnel-graph asymmetry at the top of the funnel as well as the bottom (in other words, across all study sizes). Data points are missing from the left side of the funnel, which is hard to account for in a statistical distribution that should show points on both sides in roughly equal numbers. The most plausible explanation is publication bias.

Then there's the problem of spin-doctoring in studies that are published. This takes various forms: changing the chosen outcome measure after all the data are in (to make the data look better under a different criterion of success; one of many criticisms of the $35 million STAR*D study of depression treatments), "cherry-picking" trials or data points (which should probably be called pea-picking in honor of Gregor Mendel, who pioneered the technique), and the more insidious phenomenon of HARKing (Hypothesizing After the Results are Known), which often comes with selective citation of concordant studies.

So is Dr. Ioannidis right? Are most published research findings false? I don't think we have to go that far. I think it's reasonable to say that most papers are probably showing real data, obtained legitimately. But we also have to admit there is a substantial phantom literature of unpublished data out there. (This is particularly true in pharmaceutical research, where it's been shown that unflattering studies simply don't get published.) And far too many study authors practice HARKing, cherry-picking, and post hoc outcome-measure swapping.

All of which is to say, it's important to read scientific literature with a skeptical (or at least critical) eye. Fail to do that and you're bound to be led astray, sooner or later.
