As an immunology PhD student in the late 1990s, I spent countless hours hunched over cages and lab benches analyzing the immune cells of mice. I was determined to find something wrong with them. If these gene-knockout mice had any discernible abnormality - be it hyperactive T cells, fewer B cells, or a bit too much of this molecule or that one - I could submit the data to a scientific journal. And if the paper got published, the world would be enlightened to learn that among the many thousands of proteins out there, this particular one - the one my knockout mice lacked - was important for doing such-and-such things in such-and-such situations.
Alas, there wasn’t a darn thing wrong with those mice. At least nothing I could find.
And so that data remains hidden - tucked within lab binders that got boxed up and hauled across the country days after defending my doctoral thesis. The data resurfaced 11 years later, but only for a few minutes, when it was time to move again and we had to remind ourselves what all lurked within those dusty boxes in the garage.
While preparing for that cross-town move a few months ago, I was working on a story on the inefficiency of biomedical research for the fall issue of Stanford Medicine magazine. I interviewed a handful of scientists who work in this emerging area called meta-research, among them Steven Goodman, MD, PhD, and John Ioannidis, MD, DSc, co-directors of Stanford’s new Meta-Research Innovation Center (METRICS).
In a phone chat with Goodman, I was floored to learn that nearly one-third of clinical trials funded by the NIH's National Heart, Lung, and Blood Institute don't get published. Much like my ill-fated mouse experiments, that trial data gets stuffed into file cabinets or abandoned on hard drives, never seeing the light of day. And yet these are incredibly costly studies involving hundreds, perhaps thousands, of people, conducted and analyzed with the utmost rigor and ethics - and with much more at stake than a few years' delay on a grad student's thesis.
Researchers have studied the impact of this “file drawer problem” - that is, the failure to publish studies that don’t show hoped-for effects. In some cases, it wouldn’t take many “filed” analyses to make a published result statistically non-significant. And media coverage can make matters worse, as the hottest science and medicine stories aren’t generally the ones with the most solid evidence.
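To get a feel for why a few filed-away studies can matter so much, here is a minimal sketch of my own - not drawn from the studies described above - that pools hypothetical z-scores with Stouffer's method. All the numbers are made up for illustration; the point is simply that adding unpublished null results to the pool drags a "convincing" combined result back toward non-significance.

```python
# Illustrative sketch of the "file drawer" effect using Stouffer's method.
# The study counts and z-scores below are invented for illustration only.
import math

def one_tailed_p(z):
    # Upper-tail p-value for a standard normal z-score.
    return 0.5 * math.erfc(z / math.sqrt(2))

def combined_z(z_scores):
    # Stouffer's method: sum of z-scores divided by sqrt(number of studies).
    return sum(z_scores) / math.sqrt(len(z_scores))

published = [2.1, 1.8, 2.3]  # three published studies, each individually "significant"
print(round(one_tailed_p(combined_z(published)), 4))  # ~0.0002: looks very convincing

# Add a handful of unpublished null results (z ~ 0) from the file drawer.
filed_away = [0.0] * 5
print(round(one_tailed_p(combined_z(published + filed_away)), 4))  # ~0.014

# With about a dozen filed-away nulls, the pooled result crosses p = 0.05.
more_filed = [0.0] * 12
print(round(one_tailed_p(combined_z(published + more_filed)), 4))  # ~0.055
```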
Ioannidis has mulled over this, too. In his own research, he saw firsthand how often and easily things go wrong, even with the best of intentions. A college math whiz, Ioannidis applied his analytical skills to the realm of biomedicine. Using statistical models, he analyzed how various factors - such as sample size and conflicts of interest - influence the likelihood that a research study will yield a true result. The answer appears in the title of his 2005 PLoS Medicine essay: "Why Most Published Research Findings Are False."
It’s a hard-hitting message. I was curious how people react to Ioannidis’ findings. (He gets more than a thousand lecture invitations per year to proclaim, essentially, that much of the biomedical literature is misleading or wrong.) Are they receptive? Not too surprised? Relieved that researchers are giving serious attention to a problem that’s been around for years? Depressed?
Toward the end of our hour-long conversation, I asked Ioannidis how folks have responded to his lectures. His tongue-in-cheek reply: “The sample of scientists I communicate with is a biased sample. They’re very interested in what I have to say.”
If you’re not yet among the “biased,” check out this video of Ioannidis’ recent talk at Google in Mountain View. Or read my story.
Esther Landhuis, PhD, is a freelance writer.
Previously:
Stanford Medicine magazine traverses the immune system
Re-analyses of clinical trial results rare, but necessary, say Stanford researchers
John Ioannidis discusses the popularity of his paper examining the reliability of scientific research
New Stanford center aims to promote research excellence
"U.S. effect" leads to publication of biased research, says Stanford's John Ioannidis
Illustration by Kotryna Zukauskaite