Build Quality into Trials from the Beginning to Avoid Disaster
Quality principles should be incorporated into clinical trial design from the beginning, says one pharma executive, to save time and money and avoid disasters that could cripple a trial.
More than 16 percent of new drug submissions to the FDA are severely impacted by quality issues and 32 percent of first-cycle submissions fail due to quality-related issues, Sharon Reinhard, executive director of Merck Research Labs Quality Assurance, told attendees at the 14th Annual FDA Inspections Summit last week. Of that 32 percent, 53 percent never win approval and 47 percent experience a median delay of 14 months, she said.
Reinhard has seen her share of quality disasters, and told her audience how a quality by design approach could have averted those failures.
In what she called “Randomization Schedule Disaster #1,” Reinhard told the story of a data manager at a CRO that had been hired to create a randomization schedule for a pivotal trial. Mistaking the completed randomization schedule for another document, the manager accidentally emailed the schedule to the entire study team, which was supposed to be double-blinded.
Reinhard used the example to point to a failure in the CRO’s standard operating procedures — key documents were not required to be password protected or stored in a firewall-protected area.
Under a QBD approach, the randomization schedule would have been identified as a high-risk document requiring special handling, such as password protection, and the SOP could have required a second person to sign off before distribution.
In “Randomization Schedule Disaster #2,” a CRO’s staff was directed to use randomized codes from a prior study to serve as dummy codes while the system was being set up. Under time pressure to activate the system, someone forgot to replace the dummy codes with authentic codes and patients were not randomized properly.
While the dummy codes may have looked identical to the actual randomized codes, Reinhard said, they were not. The dummy codes, if used at all, should have been altered to be obviously different from the real codes, for example by adding leading digits.
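As a hypothetical illustration of that safeguard, dummy codes can be derived from real-looking codes with a conspicuous prefix, so that an automated check at go-live catches any placeholders left in the system. The “99” prefix, code format, and check are invented for illustration, not taken from the case Reinhard described.

```python
# Illustrative sketch (not from the article): marking dummy randomization
# codes so a pre-release check can detect any that remain at go-live.
# The "99" prefix and four-digit code format are hypothetical.

DUMMY_PREFIX = "99"

def make_dummy_code(real_code: str) -> str:
    """Derive an obviously fake placeholder from a real-looking code."""
    return DUMMY_PREFIX + real_code

def release_check(codes: list[str]) -> list[str]:
    """Return any placeholder codes still present in the schedule."""
    return [c for c in codes if c.startswith(DUMMY_PREFIX)]

schedule = ["1042", make_dummy_code("1043"), "1044"]
leftovers = release_check(schedule)
assert leftovers == ["991043"]  # a second sign-off would block release here
```

A checklist-driven release, as Reinhard suggests, would run a check like this as one gating step before the system is activated.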
The system itself could have been identified as high risk, she said, and the SOP should have required a two-person sign-off procedure and included a checklist for review and release of the system.
The “Global Document ‘Version’ Catastrophe” involved a CRO hired to run a trial in 30 countries. Sites in five of the countries did not receive a third protocol amendment or the latest two versions of the informed consent form, which contained significant new safety information.
In that case, the problem was caused by a staffing issue. A trial administrator responsible for the five countries had to take an unanticipated medical leave, and the CRO didn’t find a replacement for two months. There was also no centralized monitoring of protocol signature pages or IRB approvals to detect the breakdown of communication.
Trials tend to run into difficulty in the hand-off of materials, Reinhard said. In this case, centralized monitoring of protocol signature pages and IRB approvals would have caught the communication breakdown early.
Reinhard recommends collecting informed consent data electronically so that multiple versions of the informed consent form (ICF) can be tracked. Using real-time metrics on actual vs. anticipated ethics committee approvals also can help track risk at the country level.
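The country-level metric she describes could look something like the following sketch. The country codes, counts, and 75 percent threshold are invented for illustration; a real trial would draw these figures from its clinical trial management system.

```python
# Hypothetical sketch of a country-level risk metric: comparing actual
# vs. anticipated ethics committee approvals to flag countries that are
# falling behind. All data and the threshold are invented.

anticipated = {"DE": 6, "FR": 5, "JP": 4, "BR": 4, "ZA": 3}
actual      = {"DE": 6, "FR": 5, "JP": 2, "BR": 1, "ZA": 3}

def lagging_countries(anticipated, actual, threshold=0.75):
    """Flag countries whose approval rate falls below the threshold."""
    return sorted(
        c for c, n in anticipated.items()
        if n and actual.get(c, 0) / n < threshold
    )

print(lagging_countries(anticipated, actual))  # ['BR', 'JP']
```

Reviewed weekly, a report like this surfaces exactly the kind of country-level gap that went undetected for two months in the case above.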
In “The Case of the Delayed Interim Analysis,” Reinhard said, the planned interim analysis was delayed by two months because database reconciliation with the trial’s electronic systems did not occur on schedule, causing numerous queries to be issued and resolved. In addition, only eight of 20 investigators in the trial received training on the systems.
In that case, data management personnel had not adhered to the data management plan due to limited resources. The initial site activation checklist did not identify training requirements for investigators, and the pressure to go live and hit key checkpoints resulted in corners being cut.
Reinhard recommended identifying lack of data reconciliation and training as potential risks and requesting weekly and monthly status reports from all sources. She also suggested collecting metrics on the actual vs. anticipated number of investigators trained.
A vendor’s failure to report malfunctions led to trouble in “The Case of the ‘Missing’ Primary Endpoints.” The sponsor of a trial testing a pain measurement scale contracted an electronic patient-reported outcomes (ePRO) provider to collect post-surgical pain reports from 200 patients at specific time intervals. Site staff reported that the ePRO devices appeared to malfunction during use.
The ePRO provider, which did not track the number of actual vs. anticipated entries, disputed the reports without conducting further investigation. After the issue was escalated to the vendor’s management, it was revealed that 30 percent of the scores were missing, resulting in the need to increase enrollment.
The problem could have been avoided, Reinhard said, by designating the primary endpoint as a high-risk element of the trial, collecting real-time metrics on scores captured and performing periodic review of patient and staff queries about device function.
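A real-time completeness metric of the kind Reinhard recommends might be sketched as follows. The six timepoints, patient IDs, and entry counts are hypothetical, chosen only to show how the gap becomes visible long before it forces an enrollment increase.

```python
# Invented illustration of a real-time completeness metric for ePRO
# entries: each patient is expected to report pain scores at fixed
# post-surgical intervals, and the metric quantifies the shortfall.
# All numbers are hypothetical.

EXPECTED_TIMEPOINTS = 6  # e.g., 1h, 2h, 4h, 8h, 12h, 24h post-surgery

def missing_rate(entries_by_patient: dict[str, int]) -> float:
    """Fraction of expected ePRO entries never received."""
    expected = EXPECTED_TIMEPOINTS * len(entries_by_patient)
    received = sum(entries_by_patient.values())
    return 1 - received / expected if expected else 0.0

entries = {"P001": 6, "P002": 4, "P003": 2}  # 12 of 18 expected entries
rate = missing_rate(entries)
assert abs(rate - 1/3) < 1e-9  # a shortfall this size would trigger escalation
```

Had the vendor tracked even this simple ratio, the 30 percent gap in scores would have surfaced at the first review rather than after a management escalation.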
She also stressed that the trial’s managers should have proactively asked patients and sites about their experiences using the outsourced technology.
The lesson learned from all of these situations, Reinhard said, is that sites, CROs and sponsors should start their QBD efforts by performing a pre-trial risk analysis of processes and protocols to identify weak or overly complicated points. Once issues are spotted, fail-safe measures can be built in to avoid problems. Reinhard also recommended using metrics designed to pick up breakdowns in critical points early in the trial.
Most of all, she said, make sure that planning and kick-off meetings are more than a dog and pony show. Really dig into the details, she said, and “jump into how you will ‘operationalize’ the specific trial at hand.”