Stop Over-Reporting Adverse Events, FDA Officials Tell Sponsors
Sponsors and sites need to do a better job of analyzing suspected adverse events before sending reports to the FDA, said two senior agency officials, who warned that current over-reporting may cause important safety signals to be missed.
Too often, sponsors and sites file IND safety reports for every event, fearing they might miss reporting something important, said Robert Temple, deputy director of the Center for Drug Evaluation and Research (CDER).
But that attitude “flies in the face of what we’re really hoping for, which is serious analysis of the events that are happening to find the things that matter,” Temple told attendees at a WCG webinar on IND safety reporting last week. “If you just dump everything in, then you might miss something that’s important and that could be a disaster,” he said. “If there isn’t a serious analysis, you’re not really protecting the public or protecting patients the way you promise to.”
“We have specifically said that is not what we want you to do,” he added. “We do not want to hear about every serious event, especially something that happens in the population even without the drug. We want [sponsors] to analyze the rate of these events in the treated and the untreated group and then send it to us as the aggregate analysis. We quite clearly don’t want them if they don’t meet that test.”
The FDA revised its regulations and guidances on safety reporting in 2010, 2012 and 2015 to clarify what kinds of events the agency wants reported, added Jacqueline Corrigan-Curay, director of CDER’s Office of Medical Policy. The revised regulations introduced the phrase “suspected adverse reaction” to replace the broader concept of “associated with use.”
“A suspected adverse reaction is one for which there is a reasonable possibility that the product caused the response and reasonable possibility … that there is evidence to suggest the positive relationship,” Corrigan-Curay said. “This is changing from ‘a relationship cannot be ruled out,’ which could make almost anything reportable. You really need to think about, is there evidence and what is that evidence.”
Corrigan-Curay advised sponsors, investigators and sites to group events into two classes. The first consists of single occurrences of events that are uncommon and known to be strongly associated with drug exposure, as well as events that are not commonly associated with drug exposure but are otherwise uncommon in the study population.
“The larger group of events are those that you need to do a little bit of an aggregate analysis of, and that would be that you’re seeing something that occurs more frequently in the drug exposure group than in what your control is,” she said. “And that would be the evidence that it’s possibly related or it’s starting to occur at an increased rate above what was listed in the protocol or investigator brochure.”
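The aggregate analysis Corrigan-Curay describes amounts to comparing the rate of an event in the exposed arm against the control arm. As a minimal illustration only (the function name and the simple two-proportion z-test are my own simplification, not a method the FDA or the speakers prescribe; a real pharmacovigilance analysis would be pre-specified and far more rigorous), the comparison could be sketched like this:

```python
from math import erf, sqrt

def aggregate_rate_comparison(events_treated, n_treated, events_control, n_control):
    """Compare an adverse event's rate in the treated vs. control arm.

    Returns both rates, the rate ratio, and a two-sided p-value from a
    basic two-proportion z-test. Illustrative only.
    """
    p1 = events_treated / n_treated
    p2 = events_control / n_control
    # Pooled proportion under the null hypothesis of equal rates
    pooled = (events_treated + events_control) / (n_treated + n_control)
    se = sqrt(pooled * (1 - pooled) * (1 / n_treated + 1 / n_control))
    z = (p1 - p2) / se if se > 0 else 0.0
    # Two-sided p-value via the standard normal CDF, Phi(x) = (1 + erf(x/sqrt(2))) / 2
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    rate_ratio = p1 / p2 if p2 > 0 else float("inf")
    return p1, p2, rate_ratio, p_value

# Hypothetical example: 12 events among 400 treated subjects vs. 4 among 400 controls
p1, p2, ratio, pval = aggregate_rate_comparison(12, 400, 4, 400)
```

A threefold rate increase over control, as in the hypothetical numbers above, is the kind of imbalance that would prompt the evaluation Corrigan-Curay describes, rather than reporting each of the 12 events individually.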
Based on conversations with sponsors, academic research centers and other sites, Stephen Beales, WCG senior vice president of scientific and regulatory, estimates that as much as 90 percent of the adverse event reports sites receive from sponsors do not fit the FDA’s criteria for a reportable event.
Temple attributes this partly to sponsors’ anxiety that the FDA will cite them for noncompliance and partly to the amount of work it takes to analyze the causes of each event. “It’s not so simple to do that,” he said. “And if you’re a small company with no safety experts … you may find it hard to find a group to do it. But I think the answer is that you have some obligation to get a safety assessment group together so that you can.”
Beales concurred. Sponsors want to uphold their safety obligation, “but it’s only really the largest pharmas with the scientific expertise, large pharmacovigilance departments, who have been able to implement [FDA policy]. It’s proven more difficult for the small, emerging and mid-size ones.”
Sponsors may struggle to field the resources for the kind of analysis the FDA says it wants, but failing to weed out unreportable events burdens the sites, which must review all of their sponsors’ reports and determine which ones need to be passed on to the trial’s IRB. Some sites are pushing back, Beales noted: many of the largest medical centers in the U.S. now have a written policy that they will not report to their IRBs events they believe don’t fit the criteria of an FDA-reportable event.
Temple responded that creating this kind of policy does not violate FDA intentions because the institutions are doing the kind of evaluation sponsors should have done. “That’s not ignoring it,” he said. “That’s reaching a judgment about whether there’s a plausible causal relationship. If they look at the report and conclude that it’s not an adverse event and therefore don’t send it to their IRB, it’s hard to argue with that.”
Beales also noted a possible contradiction between the safety reporting guidelines of the International Council for Harmonisation (ICH) and the FDA’s regulations. ICH E6, the guideline on good clinical practice, says an event shall be reported if a “reasonable possibility [of causality] cannot be ruled out,” while FDA regulations are more specific, requiring “evidence to suggest a causal relationship.” Given that the FDA has adopted ICH E6 as official guidance, Beales asked, are sponsors in effect being asked to follow two separate standards?
“I think those two phrases are not equivalent,” Temple responded. “And we’ve said pretty clearly what we think the standards should be. What we mean is we want evidence to suggest [causality]. And we’ve given examples of what that means. Looking at the data that way is part of responsible behavior by a sponsor. And that’s how you find things. This is part of what our sponsor is supposed to be doing to protect patients: analyzing, looking at it.”
Asked whether the agency is planning any new guidances on the subject, Temple responded, “I think the first thing we have to do is get a better idea of just what we’re actually getting.”
“In the event that we get more details, I don’t think it’s out of the question that we would write a guidance on how to do this, how to do this better and what the typical errors are and why it’s very important to emphasize why doing it the wrong way not only makes work for people; it gets in the way of learning about the things that matter. And I think that’s our most important and most critical argument. And that’s the one that I think we should be prepared to talk about.”