
CTTI report recommends risk-based monitoring
August 15, 2011
Almost 90% of those conducting clinical trials within the commercial drug industry always perform on-site visits to their study sites. In contrast, only about a third of academic, government or cooperative groups involved in research do. And yet neither monitoring approach is backed by evidence that it improves the quality of the data, because no such evidence yet exists.
That was one of the key findings of the first study undertaken by the Clinical Trials Transformation Initiative (CTTI), a group founded in 2008 by the FDA and Duke University Medical Center (DUMC) to identify practices that will improve the quality and efficiency of clinical trials.
Briggs Morrison, senior vice president of worldwide medical excellence at Pfizer and the study’s first author, said CTTI chose to focus on the vast differences in the way research entities interpret the concept of monitoring because monitoring accounts for about 25% of the cost of a clinical trial. “It was low-hanging fruit when looking for ways to improve efficiency,” he said.
The FDA isn’t exactly clear on how it wants researchers to conduct monitoring, and that ambiguity has given rise to divergent monitoring cultures. Morrison explained that academic, government and cooperative groups tend to identify very experienced investigator sites, work with them almost exclusively, and audit them only yearly. The rationale is that these sites are well known and trusted, so why send a monitor to them every six weeks? Pharmaceutical companies and CROs, by contrast, have over the years developed a style of hyper-vigilance, expressed in what some would call excessive monitoring: sending a monitor to every site, no matter how efficient or trusted, every six weeks to check the data.
And not just some of the data, but all of it. Judith Kramer, executive director of CTTI, said this style has evolved since the mid-1980s, when, she recalls, monitoring practices among all parties conducting research were much more aligned. “We’d have a very rational plan for checking those things that were most important—the primary data points,” she said. “Then we’d do random samples of the data on other endpoints.”
Since then, though, industry has shifted aggressively toward trying to check all data points. Kramer said this has been driven by fear of getting flagged by the FDA coupled with ambiguity about what exactly the FDA wants.
“When bad things would happen to other companies, the rest of the industry would think: ok, we’ve now got to check every data point,” said Kramer. “Pharma companies have so much at stake when they submit an application for approval, the last step of which is to have an audit by the regulatory agency, and if the results aren’t good, it has a tremendous downside, including delays and non-approvals. It’s not surprising that over time, industry would evolve to do what they perceive is right.”
And the results are mixed, said Morrison. “There are pros and cons to both styles. The upside to more frequent monitoring is that if there’s a problem, you’re going to catch it. But the academic model is much more efficient,” he said.
The CTTI report, titled “Effective and Efficient Monitoring as a Component of Quality in the Conduct of Clinical Trials” and published in the journal Clinical Trials, is based on data collected from 65 organizations that conduct or sponsor clinical trials (18 academic or government institutions, 11 contract research organizations and 36 organizations from the pharmaceutical and device industries). Each organization completed a survey to obtain information about methods for study oversight and quality assurance.
Another key finding from the survey was that electronic data capture (EDC) is not being used for central monitoring, though it could be. While 83% of respondents said they had adopted new technologies that make central monitoring possible, only 12% said they always or frequently use them for that purpose. Morrison said such technology could be very helpful for identifying sites that may have a problem, so that far more attention can be paid to those sites.
CTTI’s report recommends that all parties involved in research switch to risk-based monitoring: identifying the sites least likely to have a problem and skipping on-site visits there, while spending more time (and money) monitoring sites that are new or have been flagged as potentially problematic. The approach also involves identifying key endpoints and focusing on those far more intensely than on other data points.
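To make the idea concrete, here is a minimal sketch, in Python, of how a sponsor might combine routine EDC-derived metrics into a site risk score and route only higher-risk sites to on-site visits. This is not taken from the CTTI report or the Pfizer pilot; the metrics, weights, thresholds and site data are entirely hypothetical and purely illustrative.

# Hypothetical sketch of risk-based site triage; metrics and thresholds are illustrative only.

def risk_score(site):
    """Combine a few routine EDC-derived metrics into a single risk score."""
    score = 0.0
    score += 3.0 * site["missing_key_endpoint_rate"]   # missing primary-endpoint data weighs heaviest
    score += 2.0 * site["protocol_deviation_rate"]
    score += 1.0 * site["query_rate"]                  # data queries per case report form page
    if site["new_site"]:
        score += 1.0                                   # new sites start with extra scrutiny
    return score

def monitoring_plan(sites, onsite_threshold=2.0):
    """Assign each site to an on-site visit or central (remote) monitoring based on its score."""
    plan = {}
    for site in sites:
        plan[site["id"]] = "on-site visit" if risk_score(site) >= onsite_threshold else "central monitoring"
    return plan

# Example with made-up data
sites = [
    {"id": "site-001", "missing_key_endpoint_rate": 0.02, "protocol_deviation_rate": 0.01,
     "query_rate": 0.3, "new_site": False},
    {"id": "site-002", "missing_key_endpoint_rate": 0.20, "protocol_deviation_rate": 0.15,
     "query_rate": 1.2, "new_site": True},
]
print(monitoring_plan(sites))   # site-001 -> central monitoring, site-002 -> on-site visit

In practice, the weights and thresholds would be set per protocol, with the heaviest emphasis on the key endpoints the report describes, but the basic logic of concentrating on-site effort where the data signal trouble is the same.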
Morrison said Pfizer recently wrapped up a risk-based monitoring pilot project on one of its products, with the FDA working alongside the company to help identify the key endpoints. The two then agreed to focus primarily on those as the trial went forward, not getting caught up in “the small stuff,” he said. A report on that pilot project is due out soon and could be pivotal for the industry.
In addition, to help industry adopt risk-based monitoring, and in doing so potentially save millions on how it conducts trials while keeping the quality of those trials high, CTTI is sponsoring workshops in which sponsors who have used the method detail exactly how it’s done. “We’re trying to make sure people get concrete details so they don’t have to imagine how they would go about doing this, but instead can see and hear about others’ experiences,” said Kramer.
Suz Redfearn