There's an interesting interview (subscription required) in tomorrow's Science magazine with Deborah Zarin, MD, who directs the ClinicalTrials.gov database at the National Institutes of Health. I use the database frequently for research and also often refer members of the public to the site when they contact me to ask about participating in clinical research. But I confess I hadn't really thought much about its inception (a public database of clinical trials was mandated by Congress in 1997 as part of the FDA Modernization Act, apparently) or its requirements (as of 2007, all Phase II-IV trials of drugs and devices must be included, and as of 2008 a summary of results must be posted within one year of each trial's completion, regardless of whether the results have been published).
The interview, along with some quick background research I did before writing this post (check out this article about a visit Zarin made to Stanford's Center for Health Policy/Center for Primary Care and Outcomes Research for some good information), was eye-opening for me - both in terms of the complexity of her task and in how poorly some researchers comply. Zarin describes her immersion into the world of clinical trials this way:
I call it my introduction to the sausage factory. It appears that there are a number of practices in the world of clinical trials that I hadn't been aware of; it surprised a lot of people. For example, researchers might say, this is a trial of 400 subjects, 200 in each arm, and when they came to report results, they would be talking about 600 people. We would ask them to explain. They would say, “We are including 200 people from this other study because we had always intended to do that.” … There were a lot of—what would I call it?—nonrigorous practices.
And in answer to a question about how some trials are conducted, Zarin replies:
We are finding that in some cases, investigators cannot explain their trial, cannot explain their data. Many of them rely on the biostatistician, but some biostatisticians can't explain the trial design.
So there is a disturbing sense of some trials being done with no clear intellectual leader. That may be too strong a statement, but that's the feeling we are left with.
Frankly, I found Zarin's comments a little alarming. But progress is apparently being made. Zarin reports that the existence of the database is spurring many investigators to be far more careful in how they describe their trials and the outcomes they hope to measure. I should hope so. I know from experience how useful a clinical trials database can be, assuming it's done correctly. Fingers crossed that everyone - particularly those who design trials and enter the data - gets the same message.