Science is supposed to work like this: A researcher tests a question with an experiment, produces results, and publishes the work so it can be evaluated by peers. Other scientists can then run the same experiment and see whether they get the same results. But if the results don’t match those of the first experiment, it can be tough to get these “negative” findings published. And as an article in Nature News reports, this is a problem.
The article begins by discussing a controversial study on premonition in college students. Ed Yong writes that three research teams tried unsuccessfully to replicate the study findings and were unable to publish their negative results – which isn’t an unusual thing:
Positive results in psychology can behave like rumours: easy to release but hard to dispel. They dominate most journals, which strive to present new, exciting research. Meanwhile, attempts to replicate those studies, especially when the findings are negative, go unpublished, languishing in personal file drawers or circulating in conversations around the water cooler. “There are some experiments that everyone knows don’t replicate, but this knowledge doesn’t get into the literature,” says [Eric-Jan Wagenmakers, a mathematical psychologist from the University of Amsterdam]. The publication barrier can be chilling, he adds. “I’ve seen students spending their entire PhD period trying to replicate a phenomenon, failing, and quitting academia because they had nothing to show for their time.”
Many in the field of psychology believe that things need to change, but the extent of the problem is still debated:
Some scientists still question whether there is a problem, and [Brian Nosek, a social psychologist from the University of Virginia,] points out that there are no solid estimates of the prevalence of false positives. To remedy that, late last year, he brought together a group of psychologists to try to reproduce every study published in three major psychological journals in 2008. The teams will adhere to the original experiments as closely as possible and try to work with the original authors. The goal is not to single out individual work, but to “get some initial evidence about the odds of replication” across the field, Nosek says.
Some researchers are agnostic about the outcome, but [Hal Pashler, a psychologist from the University of California, San Diego,] expects to see confirmation of his fears: that the corridor gossip about irreproducible studies and the file drawers stuffed with failed attempts at replication will turn out to be real. “Then, people won’t be able to dodge it,” he says.
The article states that while psychiatry and psychology are the fields with the greatest tendency to publish only positive results – studies in which the hypothesis is confirmed – many disciplines face the same problem. Stanford researcher John Ioannidis, MD, regularly points out problems in medical research; in a now-famous essay in PLoS Medicine, he argued that most published research findings are false.
At a time when public distrust of science continues to grow, this is disheartening. But I hope articles like this one serve as a call to action.