Eliminate Patient Subjectivity and Data Errors with Integrated Endpoints Approach
Sites, sponsors and CROs are increasingly using integrated endpoint approaches that examine electronic patient-reported outcomes (ePROs) in clinical trials to reduce the subjectivity of the data and root out errors. But as the use of ePROs in trials grows, the challenges of using them effectively to support endpoints are coming into focus.
Such ePRO systems “lead to more additional work than I think sponsors intended, and we often question if the value, in terms of study data, is worth the effort or cost,” said Christine Senn, chief implementation and operations officer for site network IACT Health.
One way to make patient-reported outcomes (PROs) more objective and reliable is to harness technology that improves the capture of data from patients rather than relying on existing self-reporting methods. Enhancing the precision of patient reporting, along with real-time monitoring and correction of that data, is essential to endpoint accuracy.
“Anything that is wearable or electronic in nature — and that relies less on a patient documenting and/or a site interpreting and entering [data] on behalf of the patient — will improve the overall collection of endpoints,” said Karri Venn, president for research at LMC Manna Research. “This shift is critical,” she said, adding that adopted approaches and their associated technology “must be easy for both the patient/subject and the site. This will improve data integrity and accuracy in general.”
Training and retraining of participants are key to getting ePRO data that can be used to show whether endpoints have been met, said Andrea Marraffino, executive director for clinical science at WCG Analgesic Solutions, which has developed a training approach backed by an automated system that helps sites correct inaccurately entered data on the fly.

“Having an automated system continuously checking for missing data points and PRO completion compliance,” Marraffino said, “in addition to many other variables, takes this onus off CRAs and site staff, ensures accurate monitoring of the data and implements consistent training in order to improve data quality.”

Monitoring ePRO entries manually would place an enormous burden on site staff and CRAs, Marraffino said. Both groups tend to be overworked already and may have limited resources for handling the task, she added.
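The kind of automated check Marraffino describes — scanning for missing data points and completion compliance — can be sketched in miniature. The daily diary schedule, record shape and 80 percent compliance threshold below are illustrative assumptions, not details of WCG's actual system:

```python
from datetime import date, timedelta

def check_compliance(entries, start, end, min_rate=0.8):
    """Flag missing diary dates and compute completion compliance.

    entries: set of dates on which the participant submitted an ePRO entry.
    Returns (missing_dates, compliance_rate, is_compliant).
    """
    # One entry expected per day over the reporting window (an assumption).
    expected = {start + timedelta(days=i)
                for i in range((end - start).days + 1)}
    missing = sorted(expected - entries)
    rate = 1 - len(missing) / len(expected)
    return missing, rate, rate >= min_rate

# Example: a participant misses one diary day out of seven.
entries = {date(2024, 1, d) for d in (1, 2, 3, 5, 6, 7)}
missing, rate, ok = check_compliance(entries, date(2024, 1, 1), date(2024, 1, 7))
```

A system running a check like this continuously could alert site staff to the missed day before it becomes a protocol deviation, rather than leaving CRAs to find it during monitoring visits.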
Automated monitoring of ePROs also can benefit participants, Senn pointed out. Having an ePRO system that checks for statistical anomalies could help participants avoid potentially devastating consequences, such as being removed from a trial for accidentally entering incorrect data. “Most systems will not allow a patient to correct their entries into the ePROs,” Senn said. “We all make mistakes sometimes, and to have someone want to enter a clinical trial and tell them they can’t because they clicked a wrong button is terrible. But when the trial offered them their only shot at getting a treatment for a condition they’re suffering with, that is absolutely heartbreaking. And even the [principal investigator] contacting the sponsor/CRO to explain the issue doesn’t fix it; their protocol is always to accept the electronic entry above all else.”
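One hypothetical way such a system might catch the wrong-button scenario Senn describes is to compare a new entry against the participant's own history and ask for confirmation instead of locking the value in. The 0-10 pain-score scale and z-score threshold here are assumptions for illustration, not features of any named vendor's product:

```python
from statistics import mean, stdev

def looks_like_mis_entry(history, new_value, z_threshold=3.0):
    """Return True if new_value deviates sharply from the participant's
    own prior entries, suggesting a likely slip of the finger."""
    if len(history) < 3:
        return False  # too little history to judge
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return new_value != mu
    return abs(new_value - mu) / sigma > z_threshold

history = [3, 4, 3, 4, 3, 4, 3]  # daily 0-10 pain scores
typical = looks_like_mis_entry(history, 4)    # consistent with history
suspect = looks_like_mis_entry(history, 10)   # sudden jump; ask to confirm
```

The point of the flag is not to reject the entry — a real spike may be clinically meaningful — but to prompt “confirm or correct?” before the value becomes immutable.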
Kalahn Taylor-Clark, vice president and global head for patient-centered outcomes and innovation at Sanofi, said the sponsor is focusing on measurement tools with those participant considerations in mind. Sanofi is already evaluating ePRO and electronic clinical outcome platforms that meet the needs of both patients and clinicians, she added.
One issue with ePROs is that the tools themselves are variable. “It’s not a lab test, it’s not precise, it’s not consistent,” Timothy Bailey, CEO of the trial site organization AMCR Institute, told CenterWatch Weekly. “What we’re trying to do in a large-scale trial with an imprecise tool is to try to tell the difference between Agent A, Agent B and placebo. But with imprecise instruments, it is very difficult to show differences. And the only way you can get that information is from patients — who don’t tell you everything, they’re not great communicators, they don’t know what’s important.”
And the lack of standardized ePRO tools makes it hard on sites whose sponsors provide them with different tools for different trials.
“It’s not like we have a choice of what tool we use,” Bailey said. “Could you imagine if we actually could use the same tool for every study? Instead, it’s usually a different tool for every study, from a vendor who hasn’t funded their technical support department enough.”
Senn concurred. Data entry is sometimes redundant, she said, with participants entering data into an ePRO system and site staff doing likewise into an electronic data capture (EDC) system. “It doesn’t make sense to me that when a sponsor is finding a vendor for ePRO and a vendor for EDC, that they wouldn’t ensure the two software systems integrated so that the middleman — manual data entry on the site level — would be completely removed.”
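The integration Senn asks for amounts to translating each ePRO record into the EDC's field names automatically, so site staff never retype it. The field names and mapping below are invented for illustration; in a real study they would come from the trial's data transfer specification:

```python
# Hypothetical mapping from an ePRO export to EDC case report form fields.
EPRO_TO_EDC = {
    "subject_id": "SUBJID",
    "entry_date": "VISITDT",
    "pain_score": "PAINSC",
}

def epro_to_edc(epro_record):
    """Translate one ePRO record into an EDC record with no manual re-entry."""
    return {edc_field: epro_record[epro_field]
            for epro_field, edc_field in EPRO_TO_EDC.items()}

record = {"subject_id": "001-0042", "entry_date": "2024-01-05", "pain_score": 6}
edc_record = epro_to_edc(record)
```

Even a mapping this simple removes the “middleman” step Senn describes, along with the transcription errors that come with it.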
Bailey said sites still have reasons to be skeptical about how well PRO tools will work in their trials. “The biggest problem, from my perspective, is training and support. Supporting EDC systems is hard enough; the PRO tool is just one more thing.”
That said, Bailey believes the right digital tools have the potential to be transformative, as long as they are easy for site staff and trial participants to use. “These tools have to be more intuitive because the current generation requires support and a lot of work from sites. The idea is to make them easy to use, even engaging and fun.”