Lab animals such as mice and rats can be trained to press a particular lever or to exhibit a certain behavior to get a coveted food treat. Ironically, the research scientists who carefully record the animals’ behavior aren’t all that different. Like mice in a maze, researchers in this country are rewarded for specific achievements, such as authoring highly cited papers in big-name journals or overseeing large labs pursuing multiple projects. These rewards come in the form of promotions, government grants and prestige among a researcher’s peers.
Unfortunately, these achievements do little to ensure that the resulting research findings are accurate. Stanford study-design expert John Ioannidis, MD, DSc, has repeatedly pointed out serious flaws in much published research: his 2005 paper, “Why most published research findings are false,” became one of the most highly accessed and most highly cited papers in the biomedical field.
Today, Ioannidis published another paper in PLoS Medicine titled “How to make more published research true.” He explores many changes that could improve the reproducibility and accuracy of research. But the section I found most interesting was one in which he argues for innovative, perhaps even disruptive, changes to the scientific reward system. He writes:
The current system does not reward replication—it often even penalizes people who want to rigorously replicate previous work, and it pushes investigators to claim that their work is highly novel and significant. Sharing (data, protocols, analysis codes, etc.) is not incentivized or requested, with some notable exceptions. With lack of supportive resources and with competition (“competitors will steal my data, my ideas, and eventually my funding”) sharing becomes even disincentivized. Other aspects of scientific citizenship, such as high-quality peer review, are not valued.
Instead, he proposes a system in which simply publishing a paper has no merit unless the study’s findings are subsequently replicated by other groups. If the results of the paper are successfully translated into clinical applications that benefit patients, additional “currency” units would be awarded. (In the example of the mice in the maze, the currency would be given in the form of yummy food pellets. For researchers, it would be the tangible and intangible benefits accrued by those considered to be successful researchers.) In contrast, the publication of a paper that was subsequently refuted or retracted would result in a reduction of currency units for the authors. Peer review and contributions to the training and education of others would also be rewarded.
The concept is intriguing, and some of the ideas would turn the research enterprise in this country on its head. What if a researcher were penalized (fewer pellets for you!) for achieving an administrative position of power… UNLESS he or she also increased the flow of reliable, reproducible research? As described in the manuscript:
[In this case] obtaining grants, awards, or other powers are considered negatively unless one delivers more good-quality science in proportion. Resources and power are seen as opportunities, and researchers need to match their output to the opportunities that they have been offered—the more opportunities, the more the expected (replicated and, hopefully, even translated) output. Academic ranks have no value in this model and may even be eliminated: researchers simply have to maintain a non-negative balance of output versus opportunities. In this deliberately provocative scenario, investigators would be loath to obtain grants or become powerful (in the current sense), because this would be seen as a burden. The potential side effects might be to discourage ambitious grant applications and leadership.
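Ioannidis offers the scheme only as a thought experiment, but its bookkeeping can be made concrete. The toy ledger below is purely illustrative: the event names and point values are invented assumptions of mine, not anything specified in the paper. It captures the two rules he describes — publishing alone earns nothing, and grants or powerful positions count as opportunities that must be repaid with replicated output to keep a non-negative balance:

```python
# Toy ledger for the hypothetical reward "currency" described above.
# All event names and point values are invented for illustration only.

class ResearcherLedger:
    # Hypothetical credits (assumed values, not from the paper):
    CREDITS = {
        "replication": 2,    # an independent group replicated the finding
        "translation": 5,    # the finding was translated into clinical benefit
        "peer_review": 1,    # high-quality peer review
        "mentoring": 1,      # training and educating others
    }
    # Hypothetical debits:
    DEBITS = {
        "publication": 0,    # merely publishing a paper earns nothing
        "retraction": -3,    # a refuted or retracted paper costs currency
        "grant": -2,         # resources/power are opportunities to be repaid
        "admin_position": -2,
    }

    def __init__(self):
        self.balance = 0

    def record(self, event: str) -> int:
        """Apply an event to the ledger and return the new balance."""
        delta = self.CREDITS.get(event, self.DEBITS.get(event))
        if delta is None:
            raise ValueError(f"unknown event: {event}")
        self.balance += delta
        return self.balance

    def in_good_standing(self) -> bool:
        """Output must at least match opportunities: non-negative balance."""
        return self.balance >= 0


ledger = ResearcherLedger()
ledger.record("publication")          # no credit for publishing alone
ledger.record("grant")                # a grant creates a debt of expected output
print(ledger.in_good_standing())      # False: the opportunity is not yet matched
ledger.record("replication")
ledger.record("peer_review")
print(ledger.in_good_standing())      # True: balance is non-negative again
```

Under these made-up numbers, a researcher who accumulates grants and administrative power without a proportional stream of replicated work drifts negative — exactly the “burden” Ioannidis describes.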
Ioannidis, who co-directs with Steven Goodman, MD, MHS, PhD, the new Meta-Research Innovation Center at Stanford, or METRICS, is quick to acknowledge that these types of changes would take time, and that the side effects of at least some of them would likely make them impractical or even harmful to the research process. But, he argues, this type of radical thinking might be just what’s needed to shake up the status quo and allow new, useful ideas to rise to the surface.
Previously:
Scientists preferentially cite successful studies, new research shows
Re-analyses of clinical trial results rare, but necessary, say Stanford researchers
John Ioannidis discusses the popularity of his paper examining the reliability of scientific research
Photo by Images Money