September 10, 2014
Say you’re a medical researcher. You slave over a project for months, even years, and you’re thrilled when a stellar journal agrees to publish it. That’s it, right? Well, no. Now you need others to notice your work and cite it in their own studies. You can court citations just as you court Twitter followers: by producing high-quality content worthy of a bigger audience.
That said, sometimes bias creeps in. For example, studies by superstar scientists are cited more often than those by their junior colleagues — no surprise there. But now, Stanford medical resident Alex Perino, MD; cardiologist Mintu Turakhia, MD, MAS; and colleagues have shown that studies documenting higher success rates of a certain procedure are more likely to be cited than studies of the same procedure with lower success rates.
“This is an indication that we as clinicians and investigators need to be mindful of how we present the data,” Turakhia told me.
In a study published yesterday in Circulation: Cardiovascular Quality and Outcomes, Perino, Turakhia and colleagues examined research papers on catheter ablation for atrial fibrillation, a treatment with widely varying reported success rates. Among the studies examined, the reported success rate of a single procedure ranged from 10 to 92 percent. That variation is perfectly understandable, Turakhia said. Atrial fibrillation, an irregular heart rhythm, can stem from a variety of underlying conditions and can vary in severity, he explained. The procedure itself, which uses energy to destroy tissue in key areas of the left atrium, can also vary, Turakhia said.
That’s why ablation for atrial fibrillation was an apt treatment to examine. The team analyzed 174 studies, covering 36,289 patients, published since 1990. They found that for every 10-point increase in a study’s reported success rate, the mean citation count rose by 18 percent. The citation bias remained significant even after accounting for time since publication, journal impact factor, sample size and study design.
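Because the 18 percent effect applies per 10-point step, it compounds across a wide gap in reported success rates. As a rough illustration only (the numbers below are hypothetical, not from the study), the multiplicative relationship can be sketched like this:

```python
# Hypothetical illustration of the reported relationship:
# mean citation count rises about 18% for every 10-point
# increase in a study's reported success rate, compounding
# multiplicatively across larger gaps.

def expected_citations(base_citations, success_low, success_high, bump=0.18):
    """Scale a baseline citation count across a success-rate gap."""
    steps = (success_high - success_low) / 10  # number of 10-point steps
    return base_citations * (1 + bump) ** steps

# A study reporting 90% success vs. an otherwise similar study
# reporting 50% success and a hypothetical baseline of 100 citations:
ratio = expected_citations(100, 50, 90) / 100
print(round(ratio, 2))  # roughly 1.94, i.e. nearly double the citations
```

In other words, all else equal, a 40-point gap in reported success could correspond to almost twice the citations, which is why the authors argue the bias matters.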
The bias is important when considering the efficacy of new and evolving treatments, Turakhia said: “We just wanted to make sure the totality of evidence is being presented fairly and completely to readers of the medical literature, which may be clinicians, scientists, insurance companies and policy makers. However, in this case, we found that ablation could be perceived to be more effective than the totality of evidence would suggest.”
Turakhia said he hopes this study prompts other researchers to examine bias in other treatments and specialties.
Previously:
Re-analyses of clinical trial results rare, but necessary, say Stanford researchers
John Ioannidis discusses the popularity of his paper examining the reliability of scientific research
A discussion on the reliability of scientific research
U.S. effect leads to publication of biased research, says Stanford’s John Ioannidis