FDA Looks for Relevance and Reliability of RWD Used in Clinical Trials, Experts Say
The FDA has signaled its support for the use of real-world data (RWD) — information gleaned from health records, patient registries and other sources — through pilot programs and guidances but has yet to set standards for the quality of that data, leaving sponsors, investigators and sites to find their own way to evaluate its relevance and reliability.
When designing or working with a protocol for a trial featuring RWD, it’s important to consider several factors, says a former FDA official, including standardizing across different data sources.
Many aspects of healthcare settings can impact the overall quality of data obtained from their records, says Jan Pierre, principal for IQVIA Quality Compliance Solutions and a long-time quality expert who has served in the past as an FDA investigator and senior regulatory/quality assurance advisor for HHS.
Differing data collection and storage processes as well as the lack of standardized database structures across healthcare entities are particularly problematic, Pierre told listeners during an IQVIA-sponsored webinar last week.
Source data, too, are not always of equal quality due to regional and global differences in standards, terminologies and data exchange formats. And a wide range of methods and algorithms are used to create aggregated datasets. All of this should be carefully considered when selecting potential data sources, Pierre said, because any RWD presented to the FDA will need to be mapped to the agency’s data submission standards.
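The standardization problem Pierre describes can be pictured with a minimal sketch: before RWD from multiple healthcare systems can be mapped to a submission standard, site-specific terminologies have to be reconciled to a shared vocabulary. The code lists, mapping table and field names below are purely illustrative assumptions, not actual FDA or CDISC content.

```python
# Hypothetical sketch: reconciling site-local diagnosis codes to one
# shared vocabulary before mapping RWD to a submission standard.
# All codes and labels here are illustrative assumptions.

# Two sites label the same concept differently; both map to one standard code
LOCAL_TO_STANDARD = {
    "DM2": "E11",   # site A's label for type 2 diabetes
    "T2DM": "E11",  # site B's label for the same condition
    "HTN": "I10",   # hypertension
}

def normalize_records(records):
    """Translate local codes to the standard vocabulary, collecting any
    codes with no mapping so they can be reviewed rather than dropped."""
    normalized, unmapped = [], []
    for rec in records:
        std = LOCAL_TO_STANDARD.get(rec["dx_code"])
        if std is None:
            unmapped.append(rec["dx_code"])
        else:
            normalized.append({**rec, "dx_code": std})
    return normalized, unmapped

records = [
    {"patient": "P001", "dx_code": "DM2"},
    {"patient": "P002", "dx_code": "T2DM"},
    {"patient": "P003", "dx_code": "XYZ"},  # unknown local code
]
clean, review_queue = normalize_records(records)
```

The point of the review queue is the documentation trail: unmapped codes surface the cross-system inconsistencies that, per Pierre, sponsors need to account for when defending a data source.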
To defend the choice of data sources to the FDA and other regulators, she said, sponsors should be able to provide the reasoning behind the selection, background information on the healthcare system producing the data (including any specified method of diagnosis for the disease and the degree to which information is gathered, if available) and a description of the healthcare system’s prescribing and use practices, if applicable.
Detail any previous experience with and use of the selected data source, she added, such as in safety studies. And for non-U.S. data sources, sponsors should explain factors that might impact the study results’ generalizability to U.S. patients.
“There are differences in the practice of medicine around the world and between healthcare systems, which may affect the relevance of the data source to the study question,” Pierre said.
Time intervals in reporting periods also are important to consider when evaluating the data’s interpretability, a component of relevance, said Kristin Zielinski Duggan and Sally Gu, partner and associate at Hogan Lovells, respectively. Patients in real-life settings may not be followed for as long as they would be in clinical research, they told attendees at a WCG FDAnews webinar. It’s important to know whether your data source has the length of follow-up the FDA will want to see. Data on confounding factors, such as medications patients are taking, are critical in RWD as well because this information would normally be collected in clinical trials, they said.
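The follow-up question Duggan and Gu raise is straightforward to screen for once a data source is in hand. The sketch below, with field names and a one-year threshold chosen purely as assumptions, shows the kind of check a sponsor might run to see how much of an RWD source meets a study's observation requirement.

```python
# Hypothetical sketch: screening an RWD source for adequate follow-up.
# Field names and the 365-day threshold are illustrative assumptions.
from datetime import date

REQUIRED_FOLLOWUP_DAYS = 365  # e.g., the study needs one year of observation

patients = [
    {"id": "P001", "first_seen": date(2021, 1, 10), "last_seen": date(2022, 6, 1)},
    {"id": "P002", "first_seen": date(2022, 3, 5),  "last_seen": date(2022, 9, 30)},
]

def followup_days(p):
    # Observed span between first and last recorded encounter
    return (p["last_seen"] - p["first_seen"]).days

adequate = [p["id"] for p in patients if followup_days(p) >= REQUIRED_FOLLOWUP_DAYS]
too_short = [p["id"] for p in patients if followup_days(p) < REQUIRED_FOLLOWUP_DAYS]
```

A high proportion of records landing in `too_short` would signal the interpretability gap the speakers warn about before any analysis begins.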
To demonstrate data source reliability, Pierre said, the protocol and analysis plan should both include the data’s provenance (documentation showing the origin and journey of the data) and an explanation of how data-handling procedures could impact data integrity and the study’s validity. The FDA will want to see how data were gathered and whether the people and processes behind the data collection and analysis offer enough assurance that errors were kept to a minimum and data quality and integrity are sufficient.
“Reliability … is how the data were collected, which [the FDA] refers to as data accrual, and then the quality and integrity of that data — what people and processes were in place during the time of collection and analysis that ensures there’s not errors [or] … bias,” Duggan explained.
When examining data, Pierre recommends assessing their completeness, conformance and plausibility, documenting the quality assurance/control plan and defining a set of procedures for ensuring data privacy and integrity. While the FDA does not officially support any particular guidelines or checklists, it’s important to check data against the original source, she says.
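The three dimensions Pierre names — completeness, conformance and plausibility — can be expressed as simple record-level checks. This is a minimal sketch under assumed field names, identifier formats and value ranges; it is not an FDA-endorsed checklist, which, as noted above, does not exist.

```python
# Hypothetical sketch of completeness, conformance and plausibility
# checks on a single record. Field names, the ID pattern and the age
# range are illustrative assumptions, not regulatory requirements.
import re

REQUIRED_FIELDS = {"patient_id", "dx_code", "age"}
ID_PATTERN = re.compile(r"^P\d{3}$")   # assumed identifier format
PLAUSIBLE_AGE = range(0, 121)          # plausible human age in years

def check_record(rec):
    issues = []
    # Completeness: are all required fields present and non-empty?
    missing = [f for f in REQUIRED_FIELDS if not rec.get(f)]
    if missing:
        issues.append(f"incomplete: missing {sorted(missing)}")
    # Conformance: do values follow the expected format?
    pid = rec.get("patient_id", "")
    if pid and not ID_PATTERN.match(pid):
        issues.append(f"nonconforming id: {pid!r}")
    # Plausibility: are values believable for a real patient?
    age = rec.get("age")
    if isinstance(age, int) and age not in PLAUSIBLE_AGE:
        issues.append(f"implausible age: {age}")
    return issues

good = check_record({"patient_id": "P001", "dx_code": "E11", "age": 54})
bad = check_record({"patient_id": "PX", "dx_code": "", "age": 212})
```

Logging each flagged issue, rather than silently discarding records, is what turns checks like these into the documented quality assurance/control plan Pierre recommends.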
Duggan echoed the importance of verifying data against original sources, although she noted this won’t be feasible for all RWD. RWD may not have been monitored or managed as they would be in a clinical trial, or they may be outdated. Some may have been inconsistently collected, for example, or be missing data on key variables, while historical data may no longer be accurate due to changes in standard of care.
The FDA doesn’t expect all RWD to be at the level of data gathered in a clinical trial, she said, although it does prefer clinical trial-like sources, such as registries and prospectively designed studies appraising medical records. “The closer the data [are] to a clinical study, the closer that the quality controls are to a clinical study, the better,” she said.
Pierre said that while the FDA has not yet settled on a definitive RWD quality framework, it has issued guidances for the use of RWD, and organizations in the U.S. and abroad have issued study design and execution tools and templates to use. She expects the agency to further its work on RWD guidance in the coming year.
“We’re embarking on territory that has yet to be laid out in a concrete, definitive regulatory manner,” she said. “We are developing standards that do not exist to date. Next year, I think, will be much more exciting and we’ll come into agreement on what this [RWD] quality management system program might look like.”