Study results: Registering clinical trials can make positive findings disappear
The ClinicalTrials.gov registry, introduced in 2000, appears to have had a marked effect on the balance of positive and negative findings reported for heart disease treatments before and after that year, according to a recent PLoS ONE study.
A 1997 U.S. law mandated the registry’s creation, and from 2000 on researchers were required to record their trial methods and outcome measures before collecting data, a requirement that did not exist previously. In a sample of 55 large randomized controlled trials funded by the U.S. National Heart, Lung and Blood Institute (NHLBI), the researchers found that 57% of trials published before 2000 (17 of 30) reported positive treatment effects. Among trials published between 2000 and 2012, however, that figure fell to just 8% (2 of 25), with the majority of studies reporting null findings.
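The proportions reported in the study can be checked directly from the counts it gives; this minimal sketch simply recomputes the pre- and post-2000 positive-finding rates from those figures:

```python
# Reported counts from the PLoS ONE sample of 55 large NHLBI trials
pre_positive, pre_total = 17, 30    # trials published before 2000
post_positive, post_total = 2, 25   # trials published 2000-2012

pre_rate = pre_positive / pre_total     # 17/30 ≈ 0.567
post_rate = post_positive / post_total  # 2/25 = 0.08

print(f"Before 2000: {pre_rate:.0%} positive")   # Before 2000: 57% positive
print(f"2000-2012:   {post_rate:.0%} positive")  # 2000-2012:   8% positive
```

The drop from roughly 57% to 8% is the headline result the rest of the article seeks to explain.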
The dramatic shift suggests that declaring outcomes in advance and adopting the transparent reporting standards required by ClinicalTrials.gov “may have contributed to the trend toward null findings,” wrote Veronica L. Irvin, a health scientist at Oregon State University, and her co-author Robert Kaplan, chief science officer at the Agency for Healthcare Research and Quality in Rockville, Md. They are the authors of the PLoS ONE study, “Likelihood of Null Effects of Large NHLBI Clinical Trials Has Increased over Time.”
In examining the results of the 55 large trials, the authors found no change in the proportion of trials that compared treatment to placebo rather than to an active comparator. They also ruled out possible sponsorship effects, but they did observe a change after the registration requirement took effect in 2000.
“Industry co-sponsorship was unrelated to the probability of reporting a significant benefit,” they wrote. “Pre-registration in ClinicalTrials.gov was strongly associated with the trend toward null findings.”
The study also noted that industry co-sponsorship was not always reported before 2000, for trials dating back to 1970, because medical journals did not uniformly require disclosure. Where disclosures were available, a closer look revealed a financial consulting relationship between at least one author and industry in every case.
“Industry influence would produce a bias in favor of positive results, so connections between investigators and industry is not a likely explanation of the trend toward null results in recent years,” the study states.
The final explanation for the trend toward null reports is that none of the trials were prospectively registered prior to 2000. After that year, all large NHLBI trials were registered prospectively in ClinicalTrials.gov, which included statements about the primary and secondary outcome variables. Having to state their methods and measurements before launching the trials makes it more difficult for investigators to selectively report some outcomes and exclude others once the study is over.
Another possible explanation for the decreasing rate of positive results is that medical care and supportive therapy have improved since 2000, making it difficult to demonstrate treatment effects because new approaches must compete with higher-quality medical care.
“We do recognize that the quality of background cardiovascular care continues to improve, making it increasingly difficult to demonstrate the incremental value of new treatments,” the study states. “The improvement in usual cardiovascular care could serve as an alternative explanation for the trend toward null results in recent years.”
The study stressed that their analysis is limited to large NHLBI-funded trials and to studies in cardiovascular outcomes in adults, and does not allow causal inferences to other clinical trials.
One reaction to the study raised concerns that other, less rigorous analyses suffer from similar problems and contribute many false positives to the literature, a situation exacerbated by the current pressure to publish in academia.
“This study should be a wake-up call,” wrote Steven Novella, M.D., a neurologist at Yale University, in his NeuroLogica Blog. He called the study “encouraging” but also “a bit frightening” because it casts doubt on previous positive results.
“Loose scientific methods are leading to a massive false positive bias in the literature,” wrote Novella. “I do not go as far to say that science is broken. In the end it does work, it just takes a lot longer to get there than it should because we waste incredible resources and time chasing false positive outcomes.”
This article was reprinted from Volume 19, Issue 33, of CWWeekly, a leading clinical research industry newsletter providing expanded analysis on breaking news, study leads, trial results and more.