It’s hard for even the most rigorous of scientists to argue with an elegant experimental design and a pretty result. But all too often, researchers are finding, bias can lead scientists to faulty conclusions.
The challenge is to identify how scientists go wrong and how we can improve science so it works faster and more efficiently to arrive at the truth. To find and avoid science’s seductive pitfalls, researchers are studying science itself, a field called meta-research.
Now, a paper by three Stanford researchers that delves into those pitfalls appears in Proceedings of the National Academy of Sciences. For their work, the team analyzed the results of 50,000 studies in 22 fields of science. “I think that this is a mapping exercise,” said senior author John Ioannidis, MD, DSc. “It maps all the main biases that have been proposed across all 22 scientific disciplines. Now we have a map for each scientific discipline, which biases are important and which have a bigger impact…”
The researchers’ analysis showed that by far the greatest bias came from small studies, while other sources of bias had relatively small effects. The team also found that small and highly cited studies, as well as those in peer-reviewed journals, seemed more likely to overestimate effects; U.S. studies and early studies seemed to report more-extreme effects; early-career researchers and researchers working in small or long-distance collaborations were more likely to overestimate effect sizes; and, not surprisingly, researchers with a history of misconduct tended to overestimate effect sizes.
On the other hand, studies by highly cited authors who published frequently were no more affected by bias than average. Research by men was no more likely to show bias than that of women. And scientists in countries with very strong incentives to publish, such as the United States, didn’t seem to show more bias than scientists in countries where the pressure was less. These results confirm, with much greater accuracy, previous studies on retractions and corrections and studies using more indirect proxies of bias.
“Science,” said Ioannidis, “is the best thing that has happened to humans.” But that doesn’t mean it can’t be improved.
In our news release on the paper, Ioannidis encourages scientists to choose the best ways to reduce the biases afflicting their field: “This has to be a grass-roots movement. It has to be something that scientists believe is good for their science to do. Top-down approaches, such as institutions and funding agencies trying to promote best practices, could also help, but it has to be an agreement, and an agreement among all these stakeholders. And obviously, scientists need to believe that this is something that will help the results and their science to be more reliable.”
Previously: Research transparency depends on sharing computational tools, says John Ioannidis; Stanford’s John Ioannidis on “underperforming big ideas”; and Taking the brakes off science
Image by Barry Dahl