
Shaky evidence moves animal studies to humans, according to Stanford-led study

The path to the development of a successful pharmaceutical or clinical treatment for human diseases is long and winding. Even treatments that appear safe and effective in studies with laboratory animals fail, far more often than not, to pan out in human clinical trials. Stanford study design expert John Ioannidis, MD, DSc, led an international study published today in PLoS Biology that pinpoints some causes of this discrepancy.

The researchers analyzed the results of thousands of previously published experiments on 160 potential interventions for neurological diseases. According to our release:

They determined that only eight of the 160 studies of potential treatments yielded the statistically significant, unbiased data necessary to support advancing the treatment to clinical trials. In contrast, 108 of the treatments were deemed at least somewhat effective at the time they were published.

So what's the cause? Probably not scientific fraud, which remains (thankfully) relatively rare. Instead it appears to be a combination of factors:

Ioannidis speculated that a reluctance to publish negative findings (that is, those concluding that a particular intervention worked no better than the control treatment) and a perhaps unconscious desire on the part of researchers to find a promising treatment have colored the field of neurological research. Obscuring access to studies that conclude a particular treatment is ineffective, while publishing positive results that are likely to be statistically flawed, tilts the perception toward the potential effectiveness of an intervention and encourages unwarranted human clinical trials.
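To see how this selection effect works, here is a minimal simulation sketch (the parameters, thresholds, and effect sizes are illustrative assumptions, not figures from the study): many small animal studies of a nearly ineffective treatment are run, but only those with a strikingly positive result are "published."

```python
import random
import statistics

random.seed(42)

def run_study(true_effect=0.1, n=10):
    """Simulate one small study: n treated vs. n control subjects.
    Returns the observed difference in group means.
    (Effect size and sample size are illustrative assumptions.)"""
    treated = [random.gauss(true_effect, 1.0) for _ in range(n)]
    control = [random.gauss(0.0, 1.0) for _ in range(n)]
    return statistics.mean(treated) - statistics.mean(control)

# Run many underpowered studies of a treatment with a tiny true effect.
effects = [run_study() for _ in range(2000)]

# "Publication bias": only strikingly positive results get published.
# (A crude stand-in for selecting on statistical significance.)
published = [e for e in effects if e > 0.5]

print(f"mean observed effect, all studies:      {statistics.mean(effects):.2f}")
print(f"mean observed effect, published only:   {statistics.mean(published):.2f}")
```

The published subset reports a much larger average effect than the full set of studies, even though the underlying treatment barely works, which is the perception-tilting mechanism Ioannidis describes.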

The researchers also suggest some solutions to what appears to be a pervasive problem in research:

Ioannidis and his collaborators at the University of Edinburgh in Scotland and the University of Ioannina School of Medicine in Greece say that animal studies of potential interventions can be made more efficient and reliable by increasing average sample sizes, guarding against statistical bias, publishing negative results, and making the results of all experiments on the effectiveness of a particular treatment - regardless of their outcome - freely accessible to scientists.

Previously: Neuroscience studies often underpowered, say researchers at Stanford, Bristol; Research shows small studies may overestimate the effects of many medical interventions; and Animal studies: necessary but often flawed, says Stanford's Ioannidis
Photo by Phoenix Wolf-Ray
