
How Medical Research Can Harm Us All

“If all medicines in the world were thrown into the sea, it would be all the better for mankind and all the worse for the fishes.”

Doctor and journalist Ben Goldacre, reinforced by Stuart Armstrong, is calling attention to what might be the biggest and most damaging example of intentional confirmation bias in recent history, one that is harming and killing many of us: the way much medical drug research is conducted.


Writes Goldacre:

“Negative data goes missing, for all treatments, in all areas of science. The regulators and professional bodies we would reasonably expect to stamp out such practices have failed us. These problems have been protected from public scrutiny because they’re too complex to capture in a soundbite. This is why they’ve gone unfixed by politicians, at least to some extent; but it’s also why it takes detail to explain. The people you should have been able to trust to fix these problems have failed you.”

Goldacre is referring to the way we conduct medical trials on drugs.

“Drugs are tested by the people who manufacture them, in poorly designed trials, on hopelessly small numbers of weird, unrepresentative patients, and analysed using techniques that are flawed by design, in such a way that they exaggerate the benefits of treatments. Unsurprisingly, these trials tend to produce results that favour the manufacturer.”

This means negative results, severe side-effects, and comparisons with other trials that have larger, more varied samples are all ignored or glossed over. What does this mean for us, as non-medical people? It means the doctors who prescribe these drugs are doing so on faulty information, on evidence massaged into "truth" by the same hands that hold the broader context behind their backs.

It matters because of the complexity of the situation. Doctors are doing what (good) doctors do: looking at the available evidence, assessing the risks, benefits and impact, discussing it with patients and coming to a conclusion on treatment. But the problem begins even before the patient enters her office: the doctor is working from conclusions that were reached in ways that already ignored proper scientific and research conduct.

This is an elaboration of a simple blunder, which can be split into two categories: numbers in context and confirmation bias.

Numbers & Context

If we’re told that 2,000 people were successfully treated by Drug X, that might sound impressive.

However, the first question we must ask, as scientifically inclined individuals, is "2,000 out of what?" I urge you always to be cautious of "raw numbers": they must be placed in context, and we must know the denominator, so that we have a processed figure that actually tells us the truth. In other words, the "real number".

If it's 2,000 out of 2,010, that does indeed support an optimistic conclusion. But 2,000 out of 100,000 does not. Would you think such a drug is effective when only 2% of subjects showed positive results? Of course not (the placebo effect alone would probably produce a higher response rate than that).
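To make the denominator point concrete, here is a minimal sketch in Python using the hypothetical Drug X figures above; the function name and numbers are illustrative only, not drawn from any real trial:

```python
# Hypothetical Drug X figures from the example above: the same raw count of
# 2,000 "successes" tells opposite stories depending on the denominator.

def response_rate(successes: int, total_patients: int) -> float:
    """Proportion of trial patients who responded to the treatment."""
    return successes / total_patients

print(f"2,000 out of 2,010:   {response_rate(2_000, 2_010):.1%}")    # ~99.5%: looks effective
print(f"2,000 out of 100,000: {response_rate(2_000, 100_000):.1%}")  # 2.0%: almost certainly not
```

The raw count is identical in both cases; only the denominator, the context, turns it into a number that means something.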

Yet this is precisely what happens with the shoddy research results and omissions that currently occur on a vast scale. Goldacre is highlighting that doctors are prescribing drugs that are in reality either not effective (our Drug X above) or worse than the alternatives, and they do so because they are treating the raw number as the real number.

However, the number is not completely raw, since it does come from trials that were meant to process it into a real one. We might say it's half-cooked, and that is precisely what makes it unhealthy. We stay away from things that are obviously raw; it's the ones that look prepared on the outside that cause the most problems, because we ingest them thinking they're fine. And that is what's happening here.

We're using undercooked numbers: because the trial designs are poor and negative results are ignored, the number is never in a position to reflect the truth.

But Why?

Journals, manufacturers, and indeed the media (where many of us get our information on medicine) are mostly interested only in positive results. Science, and especially medical science, isn't actually interested solely in positive results: it's interested in the incremental acquisition of facts, in testing previous hypotheses against new data, in better designs, all of which is long, slow and rather unexciting. To reach a "Curiosity landing on Mars" moment, to tear up at the first picture of distant galaxies, takes an enormous amount of time.

But news must be reported! And quickly.

Thus when some new drug shows some positive results, we leap on it like starved dogs. We all want a cure for HIV/AIDS, cancers, and so on. Any news that hints at such a breakthrough is eaten up, embellished and put on the front page. Scientists I've spoken to often express surprise at this; it happens because their often tentative claims are translated into positive-sounding press releases, then coloured in by media outlets. If this exaggerated process were to be believed, basically every published scientist in the world would have won the Nobel Prize by now (perfectly illustrated here)!

All of this then makes its way into our voting hands and, through them, to our policy-makers.

But what about all the other research showing negative results? What about research indicating it's all harmful? Unfortunately, there's nothing flashy or sexy about scientists demolishing our hopes, but that's because science and evidence aren't aligned to our hopes: they're aligned to truth, to reality, to what is the case, not to what we want to be the case. Thus again we come to the conflict: we want positive results, but science is not about positive results; it is a process for tentatively reaching the truth.

We hear what we want to hear; we remember and believe what makes us feel good. Anyone speaking to the contrary is labelled a naysayer, pessimist or, worse, is ignored completely. Drug manufacturers and trials, as Goldacre reports, are basically exploiting this very human process of loving positives, of loving confirmation, and hating or dismissing dissent. As many scientists like Michael Shermer and Daniel Kahneman have shown, it’s much harder to give up beliefs than to acquire or reinforce them.

But this is dangerous. Just because it feels good to send out positive results doesn't make the drug effective. The results from a particular study, or even several, may be true, but they are not sufficient to conclude that a treatment truly is effective unless we have also had access to any contradictory, negative results (and unless both sets of conclusions come from well-designed studies). Yet our doctors are operating on these incomplete results to medically intervene on our behalf. Instead of getting a proper meal, we're often being force-fed undercooked meat.

Conclusion

Supposedly, Oliver Wendell Holmes, Sr. asserted: “If all medicines in the world were thrown into the sea, it would be all the better for mankind and all the worse for the fishes”. I wouldn’t go that far, but I would be that concerned.

It shouldn't have to be said that this is not true of every single drug, study, business or doctor, but it is nonetheless a problem that affects all of them.

Many have expressed this terrible problem in medicine before. I’ve heard it from my father and colleagues for years. What we’re seeing are businesses manipulating a system, fraught with legal grey areas and an inability to recognise failure, all so that the businesses themselves survive. This isn’t a creepy ‘Big Pharma’ scare or whatever conspiracy theorists call it: this is a very real, very dangerous problem that affects all of us and our loved ones. I urge you to read Goldacre’s piece and get his book, to find out what we can do so that medicine is about our survival, not primarily manufacturers’.

Further Reading: Be sure to read fellow BT Blogger David Ropeik on scientists themselves perpetuating bad thinking about science.

UPDATE: My superb maths skills were on display earlier, where I wrote the result of Drug X as 0.02%. Thanks for the correction, James Motteram.

UPDATE II: This article has been reblogged at io9. Interesting comments arising.

Image Credit: Armin Kübelbeck (source)
