
How "breakthrough" medical findings usually aren't

Over at The Reality-Based Community, Stanford professor Keith Humphreys, PhD, addresses how "breakthrough" medical findings often fail to replicate in subsequent studies. Referencing research on whether fish oil pills can lower one's risk of heart attack and stroke, a benefit suggested in some early small studies, he writes:

...When there was only a little data available, fish oil looked like manna from heaven. But with new studies and more data, the beneficial effect has shrunk to almost nothing. The current best estimate of relative risk... is 0.96, barely below 1.0 [a relative risk of 1.0 would mean no benefit at all]...

Why does this happen? Small studies do a poor job of reliably estimating the effects of medical interventions. For a small study (such as Sacks’ and Leng’s early work...) to get published, it needs to show a big effect — no one is interested in a small study that found nothing. It is likely that many other small studies of fish oil pills were conducted at the same time as Sacks’ and Leng’s, found no benefit and were therefore not published. But by the play of chance, it was only a matter of time before a small study found what looked like a big enough effect to warrant publication in a journal editor’s eyes.

At that point in the scientific discovery process, people start to believe the finding, and null effects thus become publishable because they overturn “what we know”. And the new studies are larger, because now the area seems promising and big research grants become attainable for researchers. Much of the time, these larger and hence more reliable studies cut the “miracle cure” down to size.
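The dynamic Humphreys describes (lots of small, noisy studies plus a filter that only lets striking results through) is easy to see in a quick simulation. The sketch below is not from his post; it assumes a hypothetical true relative risk of 0.96, a 10% baseline event rate, and a crude "only impressive results get published" cutoff, purely to illustrate how the small studies that clear the filter overstate the true effect while large studies barely budge from it.

```python
import numpy as np

rng = np.random.default_rng(0)

true_rr = 0.96           # hypothetical true relative risk (a tiny benefit, as in the meta-analysis)
baseline_risk = 0.10     # hypothetical 10% event rate in the control arm
n_trials = 2000          # number of simulated studies of each size

def simulate(n_per_arm):
    """Return the estimated relative risk from each of n_trials two-arm studies."""
    control_events = rng.binomial(n_per_arm, baseline_risk, n_trials)
    treated_events = rng.binomial(n_per_arm, baseline_risk * true_rr, n_trials)
    # With equal arm sizes, the ratio of event counts is the relative risk.
    # Add 0.5 to both counts (a standard continuity correction) so tiny trials never divide by zero.
    return (treated_events + 0.5) / (control_events + 0.5)

for label, n_per_arm in [("small", 200), ("large", 20000)]:
    rr = simulate(n_per_arm)
    impressive = rr[rr < 0.7]   # crude "publication filter": only striking apparent benefits get written up
    mean_impressive = impressive.mean() if impressive.size else float("nan")
    print(f"{label} trials (n={n_per_arm}/arm): mean RR {rr.mean():.2f}, "
          f"{impressive.size}/{n_trials} pass the filter, mean RR among those {mean_impressive:.2f}")
```

Run it and the small trials that survive the cutoff report relative risks well below the true value, while the large trials cluster tightly around 0.96 and essentially never look "impressive", which is the pattern the post describes.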

In the comments section, readers note that this is an area studied by Stanford's John Ioannidis, MD, who famously argued in a 2005 paper that most published research findings are false.

Previously: Research shows small studies may overestimate the effects of many medical interventions; A critical look at the difficulty of publishing “negative” results; and Studies reveal that what studies reveal can be wrong
