Believe It Or Not, Most Published Research Findings Are Probably False
Ten years ago, a researcher claimed most published research findings are false; now a decade later, his claim is stronger than ever before. How can this be?
The rise of the Internet has worked wonders for the public's access to science, but this has come with the side effect of a toxic combination of confirmation bias and Google: we can easily find a study to support whatever it is that we already believe, without bothering to so much as look at research that might challenge our position — or the research that supports our position, for that matter. I'm certainly not immune to credulously accepting research that has later been called into question, even on this blog, where I take great effort to take a skeptical approach and highlight false claims arising from research. Could it be that studies with incorrect findings are not rare anomalies, but actually represent the majority of published research?
The claim that "most published research findings are false" is something you might reasonably expect to come out of the mouth of the most deluded kind of tin-foil-hat-wearing conspiracy theorist. Indeed, it is a statement oft-used by fans of pseudoscience who take the claim at face value without applying the principles behind it to their own evidence. It is, however, a concept that is increasingly well understood by scientists. It is the title of a paper written 10 years ago by the legendary Stanford epidemiologist John Ioannidis. The paper, which has become the most widely cited ever published in the journal PLoS Medicine, examined how issues ingrained in the scientific process, combined with the way we currently interpret statistical significance, mean that at present most published findings are likely to be incorrect.
Richard Horton, the editor of The Lancet recently put it only slightly more mildly: "Much of the scientific literature, perhaps half, may simply be untrue." Horton agrees with Ioannidis' reasoning, blaming: "small sample sizes, tiny effects, invalid exploratory analyses, and flagrant conflicts of interest, together with an obsession for pursuing fashionable trends of dubious importance." Horton laments: "Science has taken a turn towards darkness."
Last year UCL pharmacologist and statistician David Colquhoun published a report in the Royal Society's journal Open Science in which he backed up Ioannidis' case: "If you use p=0.05 to suggest that you have made a discovery, you will be wrong at least 30 percent of the time." That's assuming "the most optimistic view possible" in which every experiment is perfectly designed, with perfectly random allocation, zero bias, no multiple comparisons and publication of all negative findings. Colquhoun concludes: "If, as is often the case, experiments are underpowered, you will be wrong most of the time."
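Colquhoun's arithmetic is easy to reproduce with a short simulation. The sketch below assumes, purely for illustration, that 10 percent of tested hypotheses are true and that experiments have 80 percent power (numbers in the spirit of this style of argument, not taken from his paper); under those assumptions the analytic false discovery rate is (0.9 × 0.05) / (0.9 × 0.05 + 0.1 × 0.8) = 36 percent, and the Monte Carlo estimate lands close to that.

```python
import random

def fdr_at_p05(prior_true=0.1, power=0.8, alpha=0.05,
               n_experiments=100_000, seed=42):
    """Monte Carlo estimate of the false discovery rate: among
    'discoveries' (p < alpha), what fraction tested a hypothesis
    that was actually false?  Assumed inputs: 10% of hypotheses
    true, 80% power -- illustrative numbers only."""
    rng = random.Random(seed)
    discoveries = false_discoveries = 0
    for _ in range(n_experiments):
        hypothesis_true = rng.random() < prior_true
        # A true effect is detected with probability `power`;
        # a null effect gives a false positive with probability `alpha`.
        significant = rng.random() < (power if hypothesis_true else alpha)
        if significant:
            discoveries += 1
            if not hypothesis_true:
                false_discoveries += 1
    return false_discoveries / discoveries

print(round(fdr_at_p05(), 2))  # close to the analytic 0.36
```

Note that this is the best case: lowering the power (underpowered studies) or the proportion of true hypotheses pushes the error rate well past half, which is exactly Colquhoun's point.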
The numbers above are theoretical, but they are increasingly backed up by hard evidence. Among the most widely cited randomized controlled trials in the world's highest-quality medical journals, 30 percent of findings were later shown to be wrong or exaggerated. For non-randomized trials that number rises to an astonishing five out of six.
Over recent years Ioannidis' argument has received support from multiple fields. Three years ago, when drugs company Amgen tried to replicate the "landmark publications" in the field of cancer drug development for a report published in Nature, 47 out of 53 could not be replicated. When Bayer attempted a similar project on drug target studies, 65 percent of the studies could not be replicated.
The problem is being tackled head on in the field of psychology which was shaken by the Stapel affair in which one Dutch researcher fabricated data in over 50 fraudulent papers before being detected. The social sciences received another blow recently when Michael LaCour was accused of fabricating data; the case exposed how studies are routinely published without raw data ever being made available to reviewers.
A massive operation titled The Open Science Collaboration, involving 270 scientists, has so far attempted to replicate 100 psychology experiments, but only succeeded in replicating 39 studies. The project looked at the first articles published in 2008 in the leading psychology journals. The news wasn't entirely bad; the majority of the non-replications were described by the researchers as having at the very least "slightly similar" findings. The resulting paper is currently under review for publication in Science, so we'll have to wait before we get more details. The paper is likely to ruffle some feathers; tempers flared a few years ago when one of the most high-profile findings of recent years, the concept of behavioral priming, was called into question after a series of failed replications.
Whatever way you look at it, these issues are extremely worrying. Understanding the problem is essential in order to know when to take scientific claims seriously. Below I explore some of Ioannidis' key observations:
The smaller the study, the less likely the findings are to be true.
Large studies are expensive, take longer, and are less effective at padding out a CV; consequently we see relatively few of them. Small studies, however, are far more likely to produce statistically significant results that are in fact false positives, so they should be treated with caution. This problem is magnified when researchers fail to publish (or journals refuse to publish) negative findings — a problem known as publication bias, or the file drawer problem.
The smaller the effect size, the less likely the findings are to be true.
This sounds like it should be obvious, but it is remarkable how much research fails to describe the strength of its results, preferring to refer to statistical significance alone, which is a far less useful measure. A study's findings can be statistically significant yet have an effect size so weak that in reality the results are completely meaningless. Such significant-but-meaningless results can be manufactured through a process known as P-hacking — playing with variables until a statistically significant result is achieved — which was the method John Bohannon recently used to create a spoof paper finding that chocolate helps you lose weight. As neuroscientist and blogger Neuroskeptic demonstrated in a recent talk that you can watch online, this is not always the result of foul play; it can happen very easily by accident if researchers simply continue conducting research the way most do now.
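Neuroskeptic's point that P-hacking can happen by accident is easy to demonstrate. The sketch below simulates one common accidental form, optional stopping: a researcher collects data in batches, runs a test after each batch, and stops as soon as the result is significant. The batch size and cap are illustrative assumptions, but even with no real effect at all, repeatedly peeking inflates the nominal 5 percent false-positive rate several-fold.

```python
import math
import random

def peeking_false_positive_rate(max_n=200, batch=10, n_sims=2000, seed=1):
    """Under a true null (samples from N(0,1)), test after every `batch`
    observations and stop at the first nominally significant result.
    Returns the fraction of simulated studies that end up 'significant',
    which lands far above the nominal 5 percent."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_sims):
        total, n = 0.0, 0
        while n < max_n:
            for _ in range(batch):
                total += rng.gauss(0, 1)
            n += batch
            z = (total / n) * math.sqrt(n)  # z-score of the sample mean
            if abs(z) > 1.96:               # nominal two-sided p < .05
                hits += 1
                break
    return hits / n_sims

print(peeking_false_positive_rate())
```

With twenty looks at the data, the false-positive rate comes out in the region of one in five rather than one in twenty, despite every individual test being perfectly valid on its own.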
The greater the number and the lesser the selection of tested relationships, the less likely the findings are to be true.
This was another key factor that enabled Bohannon to design the study rigged to support the case that eating chocolate helps you lose weight. Bohannon used 18 different types of measurements, relying on the fact that some would likely support his case simply due to chance alone. This practice is currently nearly impossible to detect if researchers fail to disclose all the factors they looked at. This problem is a major factor behind the growing movement of researchers calling for the pre-registration of study methodology.
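The arithmetic behind this trick is simple. Assuming the 18 measures were independent (in reality they are often correlated, which lowers the figure somewhat), the chance that at least one crosses the p < .05 threshold by luck alone is:

```python
# Each independent measure has a 95% chance of NOT being a false
# positive, so with 18 measures the chance that at least one is
# spuriously significant is 1 - 0.95^18, about 60 percent.
p_any_false_positive = 1 - 0.95 ** 18
print(round(p_any_false_positive, 2))
```

In other words, Bohannon's rigged chocolate study was more likely than not to "find" something, before a single participant was recruited.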
The greater the financial and other interests and prejudices, the less likely the findings are to be true.
It is always worth checking to see who funded a piece of research. Sticking with our chocolate theme, a recent study that found that chocolate is "scientifically proven to help with fading concentration" was funded by Hershey. On a more serious note, tobacco companies have a long history of funding fraudulent health research over the past century — described by the World Health Organization as "the most astonishing systematic corporate deceit of all time." Today that baton has been handed to oil companies who give money to scientists who deny global warming and fund dozens of front groups with the purpose of sowing doubt about climate change.
The hotter a scientific field, the less likely the findings are to be true.
Though it seems counter-intuitive, false findings are particularly likely to be published, and then quickly debunked, in fast-moving fields where many researchers are working on the same problems at the same time. This has been dubbed the Proteus Phenomenon after the Greek god Proteus, who could rapidly change his appearance. The same can be said for research published in the sexiest journals, which only accept the most groundbreaking findings; there, the problem has been dubbed the Winner's Curse.
What does this all mean to you?
Thankfully science is self-correcting. Over time, findings are replicated or not replicated and the truth comes out in the wash. This happens through replication with larger, better-controlled trials; meta-analyses, where the data from many trials are aggregated and analyzed as a whole; and systematic reviews, where studies are assessed against predetermined criteria — preventing the cherry-picking that we're all, whether we like it or not, so naturally inclined toward.
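The logic of aggregation is straightforward. A minimal sketch of inverse-variance (fixed-effect) pooling, a standard building block of meta-analysis, shows why a few large, precise trials outweigh one small dramatic one; the three trials below are hypothetical.

```python
import math

def fixed_effect_meta(estimates, std_errors):
    """Inverse-variance (fixed-effect) pooling: each study's estimate
    is weighted by 1/SE^2, so large, precise studies dominate small,
    noisy ones."""
    weights = [1 / se ** 2 for se in std_errors]
    pooled = sum(w * e for w, e in zip(weights, estimates)) / sum(weights)
    pooled_se = math.sqrt(1 / sum(weights))
    return pooled, pooled_se

# Three hypothetical trials: a small study (SE 0.5) reports a big
# effect of 0.8, while two larger ones (SE 0.1) find almost nothing.
effect, se = fixed_effect_meta([0.8, 0.05, 0.02], [0.5, 0.1, 0.1])
print(effect, se)  # pooled effect is 0.05: the dramatic outlier barely moves it
```

This is the statistical reason "studies of studies" are a better guide than any single headline-grabbing result.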
Replications, meta-analyses and systematic reviews are by their nature far more useful for portraying an accurate picture of reality than original exploratory research. But systematic reviews rarely make headlines, which is a good reason the news is not the best place to get an informed opinion about matters of science. The problem is unlikely to go away any time soon, so whenever you hear about a new piece of science news, remember the principles above and the simple rule of thumb that studies of studies are far more likely to present a true picture of reality than individual pieces of research.
What does this mean for scientists?
For scientists, the discussion over how to resolve the problem is rapidly heating up with calls for big changes to how researchers register, conduct, and publish research and a growing chorus from hundreds of global scientific organizations demanding that all clinical trials are published. Perhaps most important and most difficult to change, is the structure of perverse incentives that places intense pressure on scientists to produce positive results while actively encouraging them to quietly sit on negative ones.
Inventions with revolutionary potential made by a mysterious aerospace engineer for the U.S. Navy come to light.
- U.S. Navy holds patents for enigmatic inventions by aerospace engineer Dr. Salvatore Pais.
- Pais came up with technology that can "engineer" reality, devising an ultrafast craft, a fusion reactor, and more.
- While mostly theoretical at this point, the inventions could transform energy, space, and military sectors.
The U.S. Navy controls patents for some futuristic and outlandish technologies, some of which, dubbed "the UFO patents," came to light recently. Of particular note are inventions by the somewhat mysterious Dr. Salvatore Cezar Pais, whose technology is claimed to be able to "engineer reality." His slate of highly ambitious, borderline sci-fi designs meant for use by the U.S. government ranges from gravitational wave generators and compact fusion reactors to next-gen hybrid aerospace-underwater craft with revolutionary propulsion systems, and beyond.
Of course, the existence of patents does not mean these technologies have actually been created, but there is evidence that some demonstrations of operability have been successfully carried out. As investigated and reported by The War Zone, a possible reason why some of the patents may have been taken on by the Navy is that the Chinese military may also be developing similar advanced gadgets.
Among Dr. Pais's patents are designs, approved in 2018, for an aerospace-underwater craft of incredible speed and maneuverability. This cone-shaped vehicle can potentially fly just as well in air, water, or space, without leaving any heat signature. It would achieve this by creating a quantum vacuum around itself with a very dense polarized energy field, allowing it to repel any molecule the craft comes in contact with, no matter the medium. Manipulating "quantum field fluctuations in the local vacuum energy state" would help reduce the craft's inertia. The polarized vacuum would dramatically decrease any elemental resistance and lead to "extreme speeds," claims the paper.
Not only that, if the vacuum-creating technology can be engineered, we'd also be able to "engineer the fabric of our reality at the most fundamental level," states the patent. This would lead to major advancements in aerospace propulsion and generating power. Not to mention other reality-changing outcomes that come to mind.
Among Pais's other patents are inventions that stem from similar thinking, outlining pieces of technology necessary to make his creations come to fruition. His paper presented in 2019, titled "Room Temperature Superconducting System for Use on a Hybrid Aerospace Undersea Craft," proposes a system that can achieve superconductivity at room temperature. This would become "a highly disruptive technology, capable of a total paradigm change in Science and Technology," writes Pais.
High frequency gravitational wave generator.
Credit: Dr. Salvatore Pais
Another invention devised by Pais is an electromagnetic field generator that could generate "an impenetrable defensive shield to sea and land as well as space-based military and civilian assets." This shield could protect from threats like anti-ship ballistic missiles, cruise missiles that evade radar, coronal mass ejections, military satellites, and even asteroids.
Dr. Pais's ideas center around the phenomenon he dubbed "The Pais Effect". He referred to it in his writings as the "controlled motion of electrically charged matter (from solid to plasma) via accelerated spin and/or accelerated vibration under rapid (yet smooth) acceleration-deceleration-acceleration transients." In less jargon-heavy terms, Pais claims to have figured out how to spin electromagnetic fields in order to contain a fusion reaction – an accomplishment that would lead to a tremendous change in power consumption and an abundance of energy.
According to his bio in a recently published paper on a new Plasma Compression Fusion Device, which could transform energy production, Dr. Pais is a mechanical and aerospace engineer working at the Naval Air Warfare Center Aircraft Division (NAWCAD), which is headquartered in Patuxent River, Maryland. Holding a Ph.D. from Case Western Reserve University in Cleveland, Ohio, Pais was a NASA Research Fellow and worked with Northrop Grumman Aerospace Systems. His current Department of Defense work involves his "advanced knowledge of theory, analysis, and modern experimental and computational methods in aerodynamics, along with an understanding of air-vehicle and missile design, especially in the domain of hypersonic power plant and vehicle design." He also has expert knowledge of electrooptics, emerging quantum technologies (laser power generation in particular), high-energy electromagnetic field generation, and the "breakthrough field of room temperature superconductivity, as related to advanced field propulsion."
Suffice it to say, with such a list of research credentials that would make Nikola Tesla proud, Dr. Pais seems well-positioned to carry out groundbreaking work.
A craft using an inertial mass reduction device.
Credit: Salvatore Pais
The patents won't necessarily lead to these technologies ever seeing the light of day. The research has its share of detractors and nonbelievers among other scientists, who argue that the energy required to produce the fields Pais describes, and his ideas on electromagnetic propulsion, are well beyond the reach of current technology. Yet investigators at The War Zone found comments from Navy officials indicating that the inventions are being taken seriously enough for some tests to take place.
If you'd like to read through Pais's patents yourself, check them out here.
Laser Augmented Turbojet Propulsion System
Credit: Dr. Salvatore Pais
A new study suggests that reports of the impending infertility of the human male are greatly exaggerated.
- A new review of a famous study on declining sperm counts finds several flaws.
- The old report makes unfounded assumptions, has faulty data, and tends toward panic.
- The new report does not rule out that sperm counts are going down, only that this could be quite normal.
Several years ago, a meta-analysis of studies on human fertility came out warning us about the declining sperm counts of Western men. It was widely shared, and its findings were featured on the covers of popular magazines. Indeed, its findings were alarming: a nearly 60 percent decline in sperm per milliliter since 1973 with no end in sight. It was only a matter of time, the authors argued, until men were firing blanks, literally.
Well… never mind.
It turns out that the impending demise of humanity was greatly exaggerated. As the predicted infertility wave crashed upon us, there was neither a great rush of men to fertility clinics nor a sudden dearth of new babies. The only discussions about population decline focus on urbanization and the fact that people choose not to have kids rather than not being able to have them.
Now, a new analysis of the 2017 study says that lower sperm counts are nothing to be surprised by. Published in Human Fertility, its authors point to flaws in the original paper's data and interpretation, and propose a more careful reanalysis.
Counting tiny things is difficult
The original 2017 report analyzed 185 studies on 43,000 men and their reproductive health. Its findings were clear: "a significant decline in sperm counts… between 1973 and 2011, driven by a 50-60 percent decline among men unselected by fertility from North America, Europe, Australia and New Zealand."
However, the new analysis points out flaws in the data. As many as a third of the men in the studies were of unknown age, an important factor in reproductive health. In 45 percent of cases, the year of sample collection was unknown, a big detail to miss in a study measuring change over time. And the quality controls and conditions for sample collection and analysis varied widely from study to study, which likely influenced the measured sperm counts.
Another study from 2013 also points out that the methods for determining sperm count were only standardized in the 1980s, which occurred after some of the data points were collected for the original study. It is entirely possible that the early studies gave inaccurately high sperm counts.
This is not to say that the 2017 paper is entirely useless; its methodology was much more rigorous than that of previous studies on the subject, which also claimed to identify a decline in sperm counts. But its problems do not end with the data.
Garbage in, garbage out
Predictable as always, the media went crazy. Discussions of the decline of masculinity took off, both in mainstream and less-than-reputable forums; concerns about the imagined feminizing traits of soy products continued to increase; and the authors of the original study were called upon to discuss the findings themselves in a number of articles.
However, as this new review points out, some of the findings of that meta-analysis are debatable at best. For example, the 2017 report suggests that "declining mean [sperm count] implies that an increasing proportion of men have sperm counts below any given threshold for sub-fertility or infertility," despite little empirical evidence that this is the case.
The WHO offers a large range for what it considers to be a healthy sperm count, from 15 to 250 million sperm per milliliter. The benefits to fertility above a count of 40 million are seen as minimal, and the original study found a mean sperm concentration of 47 million sperm per milliliter.
Healthy sperm, healthy man?
The claim that sperm count is evidence of larger health problems is also scrutinized in this new article. While it is true that many major health problems can impact reproductive health, there is little evidence that it is the "canary in the coal mine" for overall well-being. A number of studies suggest that any relation between lifestyle choices and this part of reproductive health is limited at best.
Lastly, the idea that environmental factors could be at play has been undermined since 2017. While the original paper suggested that pollutants, especially from plastics, could be at fault, it is now known that this kind of pollution is worse in the parts of the world where the original paper observed higher sperm counts (i.e., non-Western nations).
There never was a male fertility crisis
The authors of the new review do not deny that some measurements are showing lower sperm counts, but they do question the claim that this is catastrophic or part of a larger pathological issue. They propose a new interpretation of the data. Dubbed the "Sperm Count Biovariability hypothesis," it is summarized as:
"Sperm count varies within a wide range, much of which can be considered non-pathological and species-typical. Above a critical threshold, more is not necessarily an indicator of better health or higher probability of fertility relative to less. Sperm count varies across bodies, ecologies, and time periods. Knowledge about the relationship between individual and population sperm count and life-historical and ecological factors is critical to interpreting trends in average sperm counts and their relationships to human health and fertility."
Still, the authors do not rule out that sperm counts "could decline due to negative environmental exposures" or that declining counts "may carry implications for men's health and fertility." What they dispute is that the decline in absolute sperm count is necessarily a bad sign for men's health and fertility. We aren't at a civilization-ending catastrophe just yet.
A year of disruptions to work has contributed to mass burnout.
- Junior members of the workforce, including Generation Z, are facing digital burnout.
- 41 percent of workers globally are thinking about handing in their notice, according to a new Microsoft survey.
- A hybrid blend of in-person and remote work could help maintain a sense of balance – but bosses need to do more.
More than half of 18- to 25-year-olds in the workforce are considering quitting their jobs. And they're not the only ones.
In a report called The Next Great Disruption Is Hybrid Work – Are We Ready?, Microsoft found that as well as 54% of Generation Z workers, 41% of the entire global workforce could be considering handing in their resignation.
Similarly, a UK and Ireland survey found that 38% of employees were planning to leave their jobs in the next six months to a year, while a US survey reported that 42% of employees would quit if their company didn't offer remote working options long term.
New work trends
Based on surveys with over 30,000 workers in 31 countries, the Microsoft report – which is the latest in the company's annual Work Trend Index series – pulled in data from applications including Teams, Outlook and Office 365, to gauge productivity and activity levels. It highlighted seven major trends, which show the world of work has been profoundly reshaped by the pandemic:
- Flexible work is here to stay
- Leaders are out of touch with employees and need a wake-up call
- High productivity is masking an exhausted workforce
- Gen Z is at risk and will need to be re-energized
- Shrinking networks are endangering innovation
- Authenticity will spur productivity and wellbeing
- Talent is everywhere in a hybrid world
"Over the past year, no area has undergone more rapid transformation than the way we work," Microsoft CEO Satya Nadella says in the report. "Employee expectations are changing, and we will need to define productivity much more broadly – inclusive of collaboration, learning and wellbeing to drive career advancement for every worker, including frontline and knowledge workers, as well as for new graduates and those who are in the workforce today. All this needs to be done with flexibility in when, where and how people work."
Organizations have become more siloed
While the report highlights the opportunities created by increased flexible and remote working patterns, it warns that some people are experiencing digital exhaustion and that remote working could foster siloed thinking. With the shift to remote working, much of the spontaneous sharing of ideas that can take place within a workplace was lost. In its place are scheduled calls, regular catch-ups and virtual hangouts. The loss of in-person interaction means individual team members are more likely to only interact with their closest coworkers.
"At the onset of the pandemic, our analysis shows interactions with our close networks at work increased while interactions with our distant network diminished," the report says. "This suggests that as we shifted into lockdown, we clung to our immediate teams for support and let our broader network fall to the wayside. Simply put, companies became more siloed than they were pre-pandemic."
Burnout or drop out
One of the other consequences of the shift to remote and the reliance on tech-based communications has been the phenomenon of digital burnout. And for those who have most recently joined the workforce, this has been a significant challenge.
The excitement of joining a new employer, maybe even securing a job for the first time, usually comes with meeting lots of new people, becoming familiar with a new environment and adapting to new situations. But for many, the pandemic turned that into a daily routine of working from home while isolated from co-workers.
"Our findings have shown that for Gen Z and people just starting in their careers, this has been a very disruptive time," says LinkedIn Senior Editor-at-Large, George Anders, quoted in the report. "It's very hard to find their footing since they're not experiencing the in-person onboarding, networking and training that they would have expected in a normal year."
But it is perhaps the data around quitting that is one of the starkest indications that change is now the new normal. Being able to work remotely has opened up new possibilities for many workers, the report found. If you no longer need to be physically present in an office, your employer could, theoretically, be located anywhere. Perhaps that's why the research found that "41% of employees are considering leaving their current employer this year".
In addition to that, 46% of the people surveyed for the Microsoft report said they might relocate their home because of the flexibility of remote working.
A hybrid future
In looking for ways to navigate their way through all this change, employers should hold fast to one word, the report says – hybrid. An inflexible, location-centred approach to work is likely to encourage those 41% of people to leave and find somewhere more to their tastes. Those who are thinking of going to live somewhere else, while maintaining their current job, might also find themselves thinking of quitting if their plans are scuppered.
But remote working is not a panacea for all workforce ills. "We can no longer rely solely on offices to collaborate, connect, and build social capital. But physical space will still be important," the report says. "We're social animals and we want to get together, bounce ideas off one another, and experience the energy of in-person events. Moving forward, office space needs to bridge the physical and digital worlds to meet the unique needs of every team – and even specific roles."
Bosses must meet challenges head on
Although the majority of business leaders have indicated they will incorporate elements of the hybrid working model, the report also found many are out of touch with workforce concerns more widely.
While many workers say they are struggling (60% of Gen Z; 64% of new starters) and 54% of the general workforce feels overworked, business leaders are having a much better experience. Some 61% said they were 'thriving', in stark contrast to employees further down the chain of command.
Jared Spataro, corporate vice president at Microsoft 365, writes in the report: "Those impromptu encounters at the office help keep leaders honest. With remote work, there are fewer chances to ask employees, 'Hey, how are you?' and then pick up on important cues as they respond. But the data is clear: our people are struggling. And we need to find new ways to help them."