
Sponsors Should Ensure Remotely Conducted Outcome Measures Are Still Valid and Reliable

April 20, 2020

As the COVID-19 pandemic pushes sponsors toward conducting remote outcome measures, it’s important for them to ensure that the data they gather, and the conclusions drawn from them, are still valid and reliable.
“Even when we’re in circumstances that require quick decisions and rapid movement, we still have an obligation to do what we can to assure ourselves, regulatory agencies and other stakeholders that the data we’re collecting is still valid, and that the conclusions we make based on that data are thereby valid,” said Mark Opler, chief research officer of WCG MedAvante-ProPhase, an electronic clinical outcomes assessment group.
The first step is a simple one — look over prior research for guidance. There may be research that has already been done on validity and reliability questions related to conducting outcome measures remotely, and many such questions have already been answered “to a degree,” Opler said at a WCG Clinical webinar last week on COVID-19.
Next, sponsors should think about conducting a nested validity study — a substudy within a trial that analyzes data and evaluates the extent to which it behaves like or resembles data obtained through more traditional methods. True nested validity studies compare the two methods of evaluation side-by-side, often on the same patient, and they are frequently small in size, carefully designed and “very analytically rich,” he said.
Opler said it is of the utmost importance that sponsors keep “a very close eye” on the quality of the data being collected, urging “as near to real-time as possible” quality evaluations to catch any potential anomalies.
“Detection of potential anomalies, changes, anything that looks out of the ordinary … if it were my study, I’d want to know about it soon, so that I could make … decisions about how to proceed,” he said.
Regulatory agencies have accepted data collected remotely prior to the coronavirus pandemic, Opler said, citing one of several examples: in 2019, the FDA approved a rapid-acting antidepressant based on remotely collected data — 24,000 separate evaluations done by telephone. “I think we can take heart that there is some precedent for this, and that some of the methods used in the past for remote evaluation may serve as a path forward.”
Nathaniel Katz, chief science officer of WCG Analgesic Solutions, expanded on outcome measures, listing criteria that should be evaluated to ensure a trial’s performance of outcome measures remains stable during the outbreak:
- Test-retest reliability should be evaluated by monitoring the stability of critical outcome measures across repeated administrations over time;
- Temporal consistency should be evaluated by determining a measure’s variability over time;
- Internal reliability of multi-item measures should be evaluated using Cronbach’s alpha or another measure of inter-item correlations; and
- Relationships between measures should be evaluated by simultaneously looking at the concordance of two measures of the same symptom, and by evaluating the relationship between two correlated measures, such as pain and physical function.
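Two of the checks above, test-retest reliability and Cronbach's alpha, can be sketched in plain Python. This is an illustrative implementation only; the function names and toy scores are hypothetical, not from the webinar, and a real analysis would use an established statistics package.

```python
from statistics import mean, pvariance

def pearson(x, y):
    """Pearson correlation between two administrations of the same
    measure -- a simple test-retest reliability estimate."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)
    return cov / ((pvariance(x) * pvariance(y)) ** 0.5)

def cronbach_alpha(items):
    """Cronbach's alpha for a multi-item measure.
    `items` holds one list of scores per item, aligned across respondents."""
    k = len(items)
    item_var_sum = sum(pvariance(item) for item in items)
    total_var = pvariance([sum(scores) for scores in zip(*items)])
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

# Toy data: retest scores that closely track the first administration,
# and three items that move together, should both land near 1.0.
retest_r = pearson([10, 12, 14, 18], [11, 12, 15, 19])
alpha = cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 5]])
```

Tracking these statistics over the course of the trial, rather than computing them once, is what lets a sponsor see whether the shift to remote administration changed a measure's behavior.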
When evaluating the impact of COVID-19 on study results, sponsors should consider certain factors. For example, they should consider whether their study results are positive or negative and whether the between-group difference (such as the difference between treatment and placebo) varies before and after the pandemic.
In addition, sponsors should analyze the characteristics and baseline symptom intensity of the study population to see if they varied in relation to the crisis, and consider if study results were “robust” to the pandemic.
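One way to probe that before-and-after question is to compute the between-group effect estimate separately for each period and compare the two. The sketch below is illustrative only, assuming hypothetical record fields (`arm`, `period`, `score`) and toy data; none of it comes from the webinar.

```python
from statistics import mean

def between_group_diff(records, period):
    """Mean treatment-minus-placebo score difference within one period."""
    def arm_scores(arm):
        return [r["score"] for r in records
                if r["period"] == period and r["arm"] == arm]
    return mean(arm_scores("treatment")) - mean(arm_scores("placebo"))

# Toy data: did the effect estimate shift across the pandemic cutoff?
records = [
    {"arm": "treatment", "period": "pre",  "score": 6},
    {"arm": "treatment", "period": "pre",  "score": 8},
    {"arm": "placebo",   "period": "pre",  "score": 3},
    {"arm": "placebo",   "period": "pre",  "score": 5},
    {"arm": "treatment", "period": "post", "score": 5},
    {"arm": "placebo",   "period": "post", "score": 4},
]
pre_effect = between_group_diff(records, "pre")    # effect before the cutoff
post_effect = between_group_diff(records, "post")  # effect after the cutoff
```

A large gap between the two estimates would be a signal to investigate whether the pandemic, rather than the treatment, is driving the result.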
Katz also suggested that sponsors use infrastructure and techniques already in place to prepare for accountability that will arise in the future when data is being analyzed. Sponsors should realign their central statistical monitoring (CSM) techniques to monitor for any changes in performance of critical assessments, he recommended.
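As one hedged illustration of what such a CSM check might look like, the sketch below flags sites whose mean assessment score drifts away from the pooled mean across all sites. The function name, threshold and data are hypothetical; production CSM systems use far richer statistics than this.

```python
from statistics import mean, pstdev

def flag_sites(site_scores, z_threshold=2.0):
    """Flag sites whose mean score deviates from the pooled mean
    by more than z_threshold pooled standard deviations.
    `site_scores` maps a site id to its list of assessment scores."""
    all_scores = [s for scores in site_scores.values() for s in scores]
    pooled_mean, pooled_sd = mean(all_scores), pstdev(all_scores)
    return [site for site, scores in site_scores.items()
            if abs(mean(scores) - pooled_mean) / pooled_sd > z_threshold]

# Toy data: four sites scoring near 10 and one clear outlier.
sites = {
    "A": [10, 11, 9], "B": [10, 10, 11], "C": [9, 10, 10],
    "D": [11, 10, 9], "E": [30, 29, 31],
}
outliers = flag_sites(sites, z_threshold=1.5)
```

Run against incoming data as it accrues, a check like this gives the near-real-time anomaly detection Opler described, and feeds the site-level decisions Katz raises below.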
“People will be starting to get data in different ways — telephone interviews, home assessments, videos, audio recordings,” he said. “It’s much better to get the data than to not get the data. Missing data is the worst possible situation to be in.”
With certain sites differing in their capabilities and training for remote data collection, CSM techniques can also be used to determine if a site should be shut down, he said. Interim analyses should also be considered to determine if a study really needs to go on. Studies nearing their end may already have demonstrated efficacy, or a futility that makes a positive result unlikely, and interim analyses that “are extremely useful” right now may help sponsors determine whether continuing is worthwhile, he said.
To listen to the webinar, click here: https://bit.ly/2xAKh5e.