Rules of Vaccine Approval May Be Changing, But Statistical Analysis Tools Remain Constant
In the past, drug trials have taken months or years to conduct with reams of data to analyze before a sponsor could go to a regulator for product approval. But under the emergency conditions caused by the pandemic, vaccine approvals will be based on early data analysis and much smaller data sets.
While interim trial data are not usually used as proof of an investigational product’s efficacy, the urgent need for a vaccine has led drugmakers to use the data for Emergency Use Authorization (EUA) submissions, an approach WCG Statistics Collaborative President Janet Wittes says is valid under the circumstances.
“Even though the data [gleaned from interim analyses] will be uncertain, even though there will be a lot of variability and uncertainty and inability to really chase out the effects, at least you’ll have something,” she says. “At least you’ll have a window into the efficacy of the specific treatment in various groups.”
And this is true of any trial, not just COVID-19 vaccine trials, Wittes says. She told attendees at a WCG Clinical webinar last week that researchers conducting late-stage trials for Moderna and the Pfizer-BioNTech partnership had expected a 60 percent efficacy rate for their vaccines based on early trial results, but the results from interim analyses by independent data monitoring committees (DMCs) were better than expected: 95 percent efficacy in both cases. In addition, the Pfizer-BioNTech vaccine proved 94.5 percent effective among participants over age 65, a highly vulnerable population.
For the Pfizer-BioNTech two-shot COVID-19 vaccine, an efficacy target of 60 percent meant that if 100 cases of the virus occurred in the placebo group of the 43,538-participant trial, only 40 cases would be expected in the vaccinated group. Wittes said the 60 percent target was “approximately equal to a flu vaccine in a good year.”
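The 60 percent figure follows from the standard efficacy calculation: under 1-to-1 randomization, efficacy is the relative reduction in cases in the vaccinated arm. A minimal sketch in Python (the function name is ours, purely illustrative):

```python
def vaccine_efficacy(vaccine_cases: int, placebo_cases: int) -> float:
    """Vaccine efficacy under 1-to-1 randomization: the relative
    reduction in cases among vaccinated participants."""
    return 1 - vaccine_cases / placebo_cases

# The article's example: 100 expected placebo cases vs. 40 vaccinated cases
print(vaccine_efficacy(40, 100))  # -> 0.6, i.e. 60 percent efficacy
```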
However, the design of the Pfizer-BioNTech trial projected 164 statistical events, with an event defined as a COVID-19 infection occurring more than seven days after the second dose. According to Wittes, participants had to be symptomatic with a positive polymerase chain reaction test to be recorded as an event; cases in all other situations were not counted.
Pfizer and BioNTech originally planned to conduct four interim analyses, with the first taking place once there were 32 confirmed cases of the disease. Wittes said the companies skipped the first look for unknown reasons but found 94 cases when they took their second look. Of those, fewer than nine of the COVID-19 cases occurred in participants who actually received two doses of the vaccine. A total of 38,955 participants received two shots, with half getting the vaccine and the other half placebo.
Meanwhile, Moderna reported last week that it hit 95 percent efficacy in its two-shot vaccine trial, which had 30,000 participants. Moderna had originally planned two interim analyses and said the trial would be declared finished once 151 cases were recorded — in this case, the primary endpoint was the number of COVID-19 cases confirmed and reviewed by a panel of independent experts starting two weeks after the second dose of the vaccine.
The company said it had expected 53 cases at the first interim analysis. Wittes said it was unclear whether Moderna’s announcement that it found 95 cases overall, 90 in the placebo group and five in the vaccine group, came from the first or the second look, but added that a 95 percent efficacy rate is comparable to a measles vaccine. “That’s really high efficacy,” she said. “Very dramatic results.”
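Moderna’s reported case split reproduces the headline number directly; a quick check with the same relative-reduction formula (an illustrative sketch, not Moderna’s actual analysis):

```python
placebo_cases, vaccine_cases = 90, 5  # Moderna's reported interim split
efficacy = 1 - vaccine_cases / placebo_cases
print(f"{efficacy:.1%}")  # -> 94.4%, close to the roughly 95 percent reported
```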
Both trials used 1-to-1 randomization and required the lower bound of the confidence interval (CI) on efficacy to sit above 30 percent, meaning researchers working on both vaccines could be highly confident that the true efficacy rate of the medicines exceeded 30 percent. CIs provide researchers with a plausible range of values for how effective a drug will be in the entire population, Wittes says.
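One simple way to see how such an interval arises (a sketch using a normal approximation, not the sponsors’ actual statistical method): under 1-to-1 randomization, the share of cases falling in the vaccine arm is roughly binomial, and an interval on that share converts into an interval on efficacy.

```python
import math

def efficacy_ci(vaccine_cases: int, placebo_cases: int, z: float = 1.96):
    """Approximate 95% CI on vaccine efficacy under 1:1 randomization.
    A simplified normal-approximation sketch; real trials use exact
    or Bayesian interval methods."""
    n = vaccine_cases + placebo_cases
    p_hat = vaccine_cases / n                  # share of cases in vaccine arm
    se = math.sqrt(p_hat * (1 - p_hat) / n)
    lo_p = max(p_hat - z * se, 0.0)
    hi_p = min(p_hat + z * se, 1.0)
    # Efficacy = 1 - p / (1 - p): more vaccine-arm cases means lower efficacy
    return 1 - hi_p / (1 - hi_p), 1 - lo_p / (1 - lo_p)

lo, hi = efficacy_ci(5, 90)  # Moderna's reported split, for illustration
print(f"95% CI on efficacy: ({lo:.1%}, {hi:.1%})")  # lower bound well above 30%
```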
In randomized trials, the assumption is that while the total trial population may include participants of different races, ages, socioeconomic groups, etc., randomizing participants results in subgroups of participants who have the same basic characteristics as the whole, says Wittes.
“Of course, that’s a simplifying assumption,” Wittes says. But it’s an assumption that allows studies to extrapolate data from one subgroup and apply it to the whole. The idea, which applies to any trial, is that the effects of the drug being studied are constant across the trial population because all of the subgroups have been randomized so their participant characteristics mirror the composition of the trial population as a whole.
“The basic assumption in randomized trials is that the effect of treatment … is approximately constant across the group,” Wittes said.
But, she cautions, “you don’t want to stop and declare efficacy unless the results are so overwhelming … that you know that it is extremely unlikely that the rest of the data will change the trajectory so much that there’s no benefit.”
“In the case of these two vaccine studies, where everybody was already randomized and either got their vaccine or their placebo, the DMCs said they saw overwhelming efficacy, but they didn’t say stop the studies,” Wittes said. “They said to keep following these patients, so we know what’s going to happen in the long run.”
While DMCs can recommend stopping a study if it’s deemed futile or has overwhelming efficacy, Wittes said it would be ridiculous to assume all population groups have the same degree of efficacy in any particular study. “We know that immune response is less robust in the elderly,” she said. “We may look at other age groups, racial and ethnic groups or gender. These are scientific questions that need to be thought through scientifically, and usually there’s not enough data in a trial to answer all the questions we would like to ask about the various subgroups.”
DMCs, rather than sponsors and investigators, are charged with reviewing interim trial data, because sponsors and investigators run the risk of being influenced to make changes if they see data too early in the trial process, according to Wittes. “In order to keep the integrity of the trial, you need to make sure that people who are involved in the trial don’t know the results, so they feel comfortable and ethically responsible continuing the trial as it is.” But she said it was possible for a single randomized clinical trial to yield several different outcomes.
Most randomized trials have two primary goals, Wittes said: researchers must not falsely declare that a drug or treatment has a benefit, and must not falsely declare that it has none. She said the consensus approach among researchers is to assume at the start of a trial that a placebo and an investigational product will have the same effect. In a conventional two-arm vaccine trial, researchers would declare the vaccine the better option only if the probability of reaching that conclusion falsely were held below 2.5 percent.
Wittes explained that researchers need to adjust their analysis of any trial, including its interim looks, so that the overall probability of falsely declaring a benefit from the drug being studied remains below that 2.5 percent threshold.
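That 2.5 percent error rate can be made concrete with an exact binomial calculation: if the vaccine did nothing, each confirmed case would be equally likely to fall in either arm of a 1-to-1 trial, so a lopsided split is strong evidence of benefit. A hedged sketch (the function name and numbers are illustrative):

```python
from math import comb

def one_sided_p_value(vaccine_cases: int, total_cases: int) -> float:
    """Probability of seeing this few (or fewer) vaccine-arm cases if the
    vaccine had no effect: each case then lands in either arm with
    probability 1/2 under 1-to-1 randomization."""
    return sum(comb(total_cases, k)
               for k in range(vaccine_cases + 1)) / 2 ** total_cases

# A split like 5 vaccine-arm cases out of 95 total is astronomically
# unlikely under the no-benefit hypothesis, far below the 2.5% threshold
print(one_sided_p_value(5, 95) < 0.025)  # -> True
```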
“So we can ask questions but we need to specify which questions we are asking in a way that preserves the probability and which ones are really exploratory and don’t preserve the probability,” she says.