Avoid Deviations by Making Protocol Review a Team Effort
Sites should take an “all-hands-on-deck” approach to protocol review, engaging all study team members to ensure they take on trials that are well-designed, operationally feasible and free of as many pitfalls as possible, according to one expert.
Design flaws, overly tight eligibility criteria and inconsistencies in a protocol can all hamstring a site’s chances of running a successful trial. A site’s research coordinator, data manager, pharmacist and other staff can each weigh in on the aspects of a protocol that affect their areas, giving the site a chance to either go back to the sponsor with suggested changes or turn down a trial that isn’t feasible for it.
The sponsored clinical research team at the University of North Carolina’s Lineberger Comprehensive Cancer Center convenes an all-staff meeting to evaluate protocols before they are finalized, according to Kaitlin Morrison, the team’s director. The meeting lets the people who will actually do the work raise concerns and suggestions, helping to prevent substantial sponsor amendments and inconsistent data later on.
In addition to the principal investigator (PI), sub-investigators, CRCs and regulatory personnel, protocol review should involve any staff “that touch the protocol,” Morrison advised during MAGI’s Clinical Research Conference in Boston last week. This ensures the protocol is analyzed in its entirety from every angle, she said.
For example, Morrison’s site directs its CRCs to assess the protocol’s background section for clarity on the main points of the trial and determine if its objectives and endpoints are aligned with the timing and tasks required. CRCs are the ones seeing the patients, collecting data and entering them into case report forms, Morrison explained, so they are best positioned to spot discrepancies between the protocol and practical aspects of the trial.
“What we ask them to look at is if the protocol time and events table matches the study objectives and endpoints,” she said. “For example, if you’re doing some correlative analysis and you’re looking at persistence of CAR-T cells at one week, two weeks, three weeks as part of your endpoints, and they look at the table and you’re not collecting blood to evaluate that at week number two, they’re going to see the discrepancy there.”
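The cross-check Morrison describes — confirming that every endpoint timepoint has a matching collection in the time-and-events table — can be sketched as a small script. The study names, timepoints and the deliberately missing week-2 draw below are purely illustrative, not taken from any real protocol:

```python
# Hypothetical sketch: cross-check a protocol's time-and-events table
# against its endpoint timepoints, the way CRCs do by hand.
# All study data below is illustrative.

endpoint_timepoints = {"CAR-T persistence": ["week 1", "week 2", "week 3"]}

# Collections actually listed in the time-and-events table
# (week 2 is deliberately missing to show the kind of gap CRCs catch).
collection_visits = {"blood draw": ["week 1", "week 3"]}

def find_gaps(endpoints, collections):
    """Return endpoint timepoints with no matching collection visit."""
    collected = {tp for visits in collections.values() for tp in visits}
    return {
        name: [tp for tp in timepoints if tp not in collected]
        for name, timepoints in endpoints.items()
        if any(tp not in collected for tp in timepoints)
    }

print(find_gaps(endpoint_timepoints, collection_visits))
# → {'CAR-T persistence': ['week 2']}
```

A non-empty result flags exactly the discrepancy in Morrison’s example: an endpoint that promises a week-2 measurement with no week-2 blood draw scheduled to support it.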
It’s also advisable to involve a pharmacist in protocol reviews to look for issues with the investigational drug, such as dose modifications for combination therapies that no longer make sense once each sponsor’s individually developed dose-modification guidelines are combined in a single protocol. CRCs should also ask questions when these dosage issues are identified, Morrison said.
She also tasks CRCs with reviewing visit windows (the range of time by which a scheduled patient visit can deviate) set in the protocol. Trials without adequate — or any — defined visit windows can lead to troublesome deviations for a site.
“I can’t tell you how many studies I’ve seen from many different types of sponsors where they have an assessment and it doesn’t have a window. It can’t always be done at that moment in the clinic. Operationally, it doesn’t make sense,” she said. “Do you want a deviation all the time, or is a couple-minute window, a couple-day window OK?”
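The visit-window problem Morrison raises can be made concrete with a short, hypothetical check: given a scheduled date, an actual date and a window, decide whether the visit counts as a deviation. The dates and window sizes here are invented for illustration:

```python
# Hypothetical sketch: a protocol with no visit window (window_days=0)
# turns every small scheduling delay into a protocol deviation,
# while a modest window absorbs routine clinic realities.
from datetime import date

def is_deviation(scheduled, actual, window_days):
    """True if the actual visit falls outside scheduled +/- window_days."""
    return abs((actual - scheduled).days) > window_days

scheduled, actual = date(2024, 3, 10), date(2024, 3, 12)  # 2 days late

print(is_deviation(scheduled, actual, window_days=0))  # → True (deviation)
print(is_deviation(scheduled, actual, window_days=3))  # → False (compliant)
```

The same two-day delay is a reportable deviation under a zero-width window and a non-event under a ±3-day window — which is the trade-off Morrison is asking sponsors to consider.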
CRCs can offer helpful insight on patient follow-up by asking whether it can be done remotely or requires an onsite visit. And because they actually help conduct the research, CRCs can evaluate whether the timing of assessments is simply impossible for a site. For instance, if a patient ends up waiting several hours at a site before receiving the study treatment, a sample that must be collected 12 hours later would wind up scheduled for 1:00 or 2:00 a.m., which would not be feasible and would lead to protocol deviations.
Morrison’s site directs its data coordinators to focus their review on specific aspects of a protocol, such as checking that the stated trial objectives fully line up with the endpoints and are clearly defined so that they can be reported to ClinicalTrials.gov. Data coordinators also confirm that the endpoints are specific and measurable and that any scales employed in the trial are included and/or referenced within the protocol.
Review meetings are critical for making sure all site staff members understand how to conduct the trial. In one real-life example shared by Morrison, the protocol for a trial of two drugs, both with the potential for toxicities, required a 24-week course: drug A given for four cycles, followed by a research biopsy, then drug B given for four cycles. This confused the sites involved, which didn’t understand that the 24 weeks were to be extended whenever one of the drugs had to be held due to toxicity.
“It actually meant each patient should get the same number of doses of each drug, but that’s not how it was interpreted by sites. It led to the data being very inconsistent, depending on what site you were looking at, and kind of a disaster for the trial,” she said.
While sites have become more involved in reviewing protocols as part of their quality-by-design (QbD) approaches, there are other trial plans that should be reviewed just as closely, notes Crissy MacDonald, vice president of client delivery for WCG. For example, most activities related to new technologies and historical vendors, such as central labs, are described outside the protocol, MacDonald says. Logistics on implementing lab draws and shipments, setting up and returning technology and other tasks, for instance, are frequently outlined in the trial’s operational plans.
“These plans are often not reviewed by sites as part of the feasibility/design process,” she told CenterWatch Weekly, “and outside of the protocol assessments, these are plans that are dictating the how, why and when the site should perform activities related to patient treatment, data capture, etc.”
As part of a QbD approach, sponsors could involve sites and investigators in reviewing not only protocols but also executional plans, she said, “as well as allow the sites and sponsors to understand the resource burden that goes along with the trial.”