Putting a LID on Risk Assessment

October 22, 2018

Researchers hoping to get a handle on their clinical trials’ risk assessments ought to put a LID on it.

The Metrics Champion Consortium (MCC) says it’s worried that drug sponsors and investigators are overwhelmed with risk assessment. MCC has come up with a scoring system that it hopes will help researchers prioritize their risk analysis and abatement.

LID is a system in which researchers or sponsors rate each risk on a three-point scale in three categories: the likelihood a bad event will occur, the impact of the event and the detectability of the event. The three scores are then multiplied together to produce a risk score; the higher the score, the greater the potential risk.
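The scoring arithmetic described above can be sketched in a few lines. The function name, the 1-to-3 validation and the example risks below are illustrative assumptions, not anything published by MCC; the scale direction for detectability (higher meaning harder to detect) is likewise an assumption that follows from "the higher the score, the stronger the potential risk."

```python
def lid_score(likelihood, impact, detectability):
    """Combine the three LID ratings into a single risk score.

    Each rating is on a 1-3 scale (1 = low, 3 = high). For
    detectability, a higher rating is assumed to mean the event is
    harder to detect, so that higher always means riskier.
    The product ranges from 1 to 27.
    """
    for name, value in (("likelihood", likelihood),
                        ("impact", impact),
                        ("detectability", detectability)):
        if value not in (1, 2, 3):
            raise ValueError(f"{name} must be 1, 2 or 3, got {value}")
    return likelihood * impact * detectability


# Hypothetical risks, scored (likelihood, impact, detectability),
# ranked so the team can focus mitigation on the top of the list.
risks = {
    "lost source documents": (2, 3, 2),
    "site staff turnover": (3, 2, 1),
    "late data entry": (2, 1, 1),
}
ranked = sorted(risks, key=lambda r: lid_score(*risks[r]), reverse=True)
```

Sorting by the product rather than scoring every conceivable risk matches Dorricott's point: the score is a prioritization aid, not a substitute for critical thinking.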

“It’s not about going through every single risk and scoring every one. It’s really about critical thinking,” says Keith Dorricott, director, Dorricott Metrics & Process Improvement Ltd., who walked an audience of trial professionals through the process during a WCG webinar last week.

The beauty of the system is its simplicity, says Dorricott, an MCC ambassador. It not only helps sponsors or researchers make sense of where their risk priorities are but also helps them focus on mitigating those risks in advance.

If a risk score is high, sponsors or researchers can try to preempt trouble through advance training or other preventive measures. A high impact score, for instance, might be lowered or headed off by creating backup paper copies of computer records that might otherwise be lost, Dorricott says.

For more information, listen to the webinar here:

Researchers Claim New Opening to Crowdsourced Drug Data
MIT researchers say they’ve developed a way for drug sponsors and researchers to share information without compromising private patient details or intellectual property.

There have been several efforts to create open-source programs that give researchers a searchable database for drug information and speed up clinical trials. But researchers have been caught in a paradox: programs sophisticated enough to process the massive amounts of data needed for something like drug interaction analysis couldn't be safely encrypted, and programs that could be safely encrypted couldn't handle the massive datasets.

Now, researchers Brian Hie, Hyunghoon Cho and Bonnie Berger say they've solved the riddle by building a neural network that relies on a technique cryptographers call "secret sharing." It's a complex process, but it boils down to the system generating random numbers to stand in for any piece of data a company or researcher marks as confidential.

Instead of trying to run all the data through one large algorithm, which can get clogged by non-numeric fields ordinarily marked "confidential" or "X," the system runs several simple, blinded operations (basic division or multiplication) at once and then recombines the results.
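The basic secret-sharing idea behind this can be shown in a toy example. The real protocol in the paper is far more elaborate; this sketch only illustrates the core trick the article describes: a confidential value is replaced by random-looking shares, parties compute on the shares separately, and recombining the results recovers the answer without any party seeing the original value.

```python
import random

PRIME = 2**61 - 1  # all arithmetic is done modulo this prime field


def share(secret, n_parties=2):
    """Split a secret integer into n additive shares.

    Each share on its own is a uniformly random number; only the
    sum of all shares (mod PRIME) recovers the secret.
    """
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    last = (secret - sum(shares)) % PRIME
    return shares + [last]


def reconstruct(shares):
    """Recombine shares to recover the secret."""
    return sum(shares) % PRIME


# Blinded addition: each party adds its two shares locally, never
# seeing the other party's data; recombining yields the true sum.
a_shares = share(12)
b_shares = share(30)
sum_shares = [(x + y) % PRIME for x, y in zip(a_shares, b_shares)]
assert reconstruct(sum_shares) == 12 + 30
```

Because each share is statistically independent of the secret, a party holding one share learns nothing, which is what lets sponsors keep data blinded while the joint analysis still produces correct results.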

It allows sponsors to blind confidential data while still analyzing potential drug interactions, the researchers report in Science.

“To do a problem the size we’re working, it ordinarily requires tons of machinery, which can take years or petabytes of data communication to compute,” Berger tells CenterWatch. “We’ve found a way to scale that into millions in days.”

The researchers have only run models — they see their work here as a “proof-of-concept” effort — but they’ve already successfully run confidential drug interaction analyses based on publicly available information, Berger says. She recognizes, though, that she and her colleagues may have some work to do to convince drug sponsors that their newfangled contraption works as advertised.

“I hope people get how cool this is,” she says. “I feel we’ve solved the scientific problem and hopefully that will open the door to solving the cultural problem. We know it works and we’ve demonstrated it works and we’re hoping that it’ll be enough to convince people to crowd-source their data.”

Drugmakers Balk at Safety Plan Proposal for Expanded Trials
Leading drugmakers are pushing back against an FDA proposal to require independent safety committees for expansion cohorts in clinical trials.

In August, the agency issued a draft guidance that would require first-in-human trials that use multiple expansion cohorts — essentially, open-ended trials that don’t follow the rigid Phase I-II-III format — to hire an independent safety or data monitoring committee to do “real time review of all serious adverse events.”

Four drugmakers criticized the proposal in public comments.

AstraZeneca said the proposed rule would be "burdensome" but could work in some cases once the Phase II doses for a given drug have been established. The safety requirement would represent "a significant departure from existing appropriate practice across the industry," and the mandate for real-time review "presents significant challenges," said Caleb Briggs, the company's director of regulatory affairs.

Pfizer and Regeneron each said there was some room for independent safety committees, but Carol Haley, Pfizer’s U.S. regulatory policy executive, suggested exceptions for trials with small sample sizes. Regeneron said that safety panels should be optional but that sponsors who don’t use them should explain why in their drug applications.

Anne-Marie McNicol, GlaxoSmithKline’s manager for global regulatory affairs, asked the FDA to be more specific in its proposal requiring IRBs to meet “more frequently than on an annual basis.”

Some commenters pushed back on proposals that exclude sponsors or investigators from serving on safety committees. “Wouldn’t committees including the investigators and sponsor representatives and one independent person be enough?” IQ Consortium, the biopharmaceutical trade association, said in its comments.

The Association of Clinical Research Organizations said that it wasn’t opposed to safety panels but that sponsors or investigators should be allowed to serve on them if roles and responsibilities were “defined and documented before the member takes up their duties.”

Read the full comments here:

Dataset May Kickstart Trials
Researchers have released a massive dataset that maps the molecular makeup of a particularly nasty form of leukemia and that organizers hope will spur new rounds of clinical trials.

Three years and $8 million in the making, the new dataset, released by the Howard Hughes Medical Institute in Oregon, contains advanced analysis of nearly 700 samples from patients suffering from acute myeloid leukemia, taken from academic centers around the country.

“My lab’s view was that we had to do something different,” says Brian Druker, who oversaw the project.

The database is free and open to all researchers.

“They can now do a computational experiment in a couple of hours,” Druker says, “but the real proof that this works is going to be when we test it in clinical trials.”

About 20,000 patients are diagnosed with AML every year. The prognosis is grim — the five-year survival rate for older patients is less than 10 percent, Druker says. Researchers have identified 11 genetic classes of AML and found thousands of different mutations among patients’ cancer cells.

Most of the funding for the project came from the Leukemia and Lymphoma Society, Druker says.