Using real-world data (RWD) and real-world evidence (RWE) in clinical trials is growing in popularity among both industry and the FDA, and the two agree that all stakeholders need to be on the same page when it comes to definitions and ways of measuring results.
“There is a tremendous interest in making use of the vast amount of data that’s already been collected in healthcare systems to more efficiently generate evidence,” the FDA’s Robert Temple told participants at a joint FDA and Duke University workshop last week.
“What people mean by RWE, and the specific study designs to be considered, is not clear,” said Temple, who is deputy director for clinical science in the agency’s Center for Drug Evaluation and Research. “In fact, the specifics of the data generated by an RCT (randomized clinical trial) within a healthcare system could vary tremendously.”
In a draft guidance issued in May, the FDA provides the following definitions: RWD are data relating to patient health status and/or the delivery of healthcare that are routinely collected from a variety of sources, while RWE is clinical evidence about the usage and potential benefits or risks of a medical product derived from analysis of RWD.
But even these definitions may not entirely solve the problem, according to comments the agency has received on the draft guidance. While two major sponsors applaud the FDA's attempts to incorporate RWE into regulatory decisionmaking, their comments indicate more work is needed.
Pharma giants Novartis and Gilead believe the agency’s draft guidance on identifying drug submissions that use RWE is too limited in scope — while the Association of Clinical Research Organizations (ACRO) recommended adding standardized summary assessments of the quality of data underlying RWD submissions. The association also recommended adding a glossary of key terms to aid understanding.
The clinical evidence listed for RWE in the guidance is too limited, Novartis said, suggesting the agency expand its scope to include patient reported outcomes (PROs) and quality-of-life factors.
Novartis stressed that the guidance needs to clearly distinguish between RWD and RWE and asked the agency to clarify how RWE can be applied in single-arm trials that lack external/historical arms.
Gilead also recommended that the guidance be expanded. The company said the agency should include sNDAs and sBLAs in its list of submissions and allow lab data linked to medical claims or electronic medical records as a source of RWD. Additionally, it said, the different types of observational studies — cohort, case-control and cross-sectional studies — should be detailed.
Speakers at the FDA workshop struck the same note, saying the ability to accurately define and quantify the outcomes that matter most to decisionmakers (e.g., patients, clinicians, regulators, payers and caregivers) has remained elusive.
“An exploding amount of RWD, paired with new and increasingly sophisticated methods of turning that RWD to RWE, have demanded a call to make that evidence actionable for a wide range of clinical decisionmakers,” said workshop moderator Gregory Daniel of Duke University’s Margolis Center for Health Policy.
Prior to generating RWE from clinical trials, it’s imperative to develop a consensus on which specific outcomes are the most helpful.
Outcomes, whether objective or subjective, may not be easily identified, Temple explained. “Whether a person did or did not have an outcome of interest, and the severity of that outcome, is not always obvious, even for ‘hard endpoints,’” he said.
Regardless, there is a need to identify and capture outcomes that are relevant to patients and to define whether the endpoints are also relevant to stakeholders and regulators. Once these outcomes are determined and agreed upon by a multidisciplinary network, researchers can then identify which RWD elements are valid reflections of those outcomes.
The accuracy, validity and reproducibility of RWD used to generate RWE are causes for concern. Electronic health records, for instance, may be helpful for identifying the outcomes of prescribing practices but offer little insight into whether medications were actually filled or dispensed.
And in the process of measuring outcomes in RWD, some form of measurement error will inevitably occur, said Sean Tunis, founder and director of Stanford University’s Center for Medical Technology Policy. The key is to measure and account for that error. “Ignoring measurement error and reporting about error but not correcting for it is bad practice,” Tunis said.
The discussion of RWE's role in clinical trials is also gaining prominence in the UK. The National Institute for Health and Care Excellence (NICE) has issued its own call for comments on plans for using real-world data.
NICE is considering using data from audits of procedures, registries that collect data on how particular treatments are used, surveys of patients using services, and data on national trends, such as the number of people who have a condition. The deadline for submitting comments to NICE is Sept. 13.
Read the nine comments on the FDA draft guidance here: https://bit.ly/2XHjTxF.
Read NICE’s statement and submit comments here: https://bit.ly/2ZCwsMf.