The results of large clinical trials are used to make important clinical decisions, but the raw data on which these results are based are rarely made available to other researchers, perhaps due to concerns about intellectual property or about giving a leg up to competitors in the field. A new study by Stanford's John Ioannidis, MD, DSc, shows why the re-analysis of such data by independent researchers is critical: about one-third of the time it leads to conclusions that differ from those of the original study.
The research was published today in the Journal of the American Medical Association.
For the study, Ioannidis and his co-authors surveyed about three decades of research cataloged in the National Library of Medicine's PubMed database, looking for re-analyses of previously published clinical-trial data. They found fewer than 40 studies that met their criteria (re-analyses that used the original data to investigate a new hypothesis, as well as meta-analyses of several studies, were not included) and, as I wrote in a release:
Thirteen of the re-analyses (35 percent of the total) came to conclusions that differed from those of the original trial with regard to who could benefit from the tested medication or intervention: Three concluded that the patient population to treat should be different from the one recommended by the original study; one concluded that fewer patients should be treated; and the remaining nine indicated that more patients should be treated.
The differences between the original trial studies and the re-analyses often occurred because the researchers conducting the re-analyses used different statistical or analytical methods, ways of defining outcomes or ways of handling missing data. Some re-analyses also identified errors in the original trial publication, such as the inclusion of patients who should have been excluded from the study.
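To see how such analytical choices can move the numbers, here is a minimal, hypothetical sketch; the toy data and the two strategies below are illustrative assumptions, not drawn from the JAMA study. It shows how two common ways of handling missing outcomes, complete-case analysis and a pessimistic (worst-case) imputation, can yield different estimates of the same treatment effect from identical raw data.

```python
# Hypothetical sketch: how missing-data handling can shift a trial's
# estimated treatment effect. Data and methods are illustrative only.

treatment = [1.2, 0.8, None, 1.5, None, 0.9, 1.1, None]  # outcome per patient
control = [0.7, None, 0.6, 0.8, 0.9, None, 0.5, 0.6]     # None = missing

def complete_case(values):
    """Drop patients with missing outcomes (complete-case analysis)."""
    observed = [v for v in values if v is not None]
    return sum(observed) / len(observed)

def worst_case(values, floor=0.0):
    """Impute missing outcomes with a pessimistic value (sensitivity analysis)."""
    imputed = [floor if v is None else v for v in values]
    return sum(imputed) / len(imputed)

for name, effect in [
    ("complete-case", complete_case(treatment) - complete_case(control)),
    ("worst-case imputation", worst_case(treatment) - worst_case(control)),
]:
    print(f"{name}: estimated treatment effect = {effect:.2f}")
```

On this toy data the sketch prints an effect estimate of about 0.42 under complete-case analysis but about 0.18 under worst-case imputation, which is the kind of divergence that can change who appears to benefit from a treatment.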
Clearly, data sharing is an important step in making sure research is conducted efficiently and renders reproducible results, goals shared by the recently launched Meta-Research Innovation Center at Stanford (METRICS), which Ioannidis co-directs. More from our release:
The fact that researchers conducting re-analyses often came to different conclusions doesn’t indicate the original studies were necessarily biased or deliberately falsified, Ioannidis added. Instead, it emphasizes the importance of making the original data freely available to other researchers to encourage dialogue and consensus, and to discourage a culture of scientific research that rewards scientists only for novel or unexpected results.
“I am very much in favor of data sharing, and believe there should be incentives for independent researchers to conduct these kinds of re-analyses,” said Ioannidis. “They can be extremely insightful.”
Previously:
John Ioannidis discusses the popularity of his paper examining the reliability of scientific research
New Stanford center aims to promote research excellence
“U.S. effect” leads to publication of biased research, says Stanford’s John Ioannidis