Risk-Based Monitoring: Widespread Implementation is Underway, but Still Messy
The industry has been abuzz about risk-based monitoring (RBM) for much of the last decade, with the FDA indicating that it would like to see more sponsors give it a try.
But in practice, many companies are only beginning to grasp the ramifications of moving away from having monitors visit sites every six weeks or so to check all data, and toward more centralized electronic monitoring of patient data based on triggers that suggest there may be problems at a site.
That was the thrust of the DIA session “The Risk Assessment is Done: Now What? A Guide to Setting Up a Centralized Monitoring Plan.”
“Organizations are really trying to grasp: what is this critical data, these critical processes, how are we going to do that?” said Linda Sullivan, executive director of Metrics Champion Consortium (MCC), an industry association that focuses on the adoption of standardized, consensus-based performance metrics.
MCC formed work groups around risk-based monitoring in 2015, and this summer will release survey results (about 80 people in key positions responded to questions about the particulars of RBM implementation at their companies) as well as a guidance document amid what is still a sea of confusion and anxiety about RBM — even about its very nature and definition.
“We realized people still define risk-based monitoring in different ways,” said Sullivan.
At the CRO level in particular, there’s much work to do in sorting out the hows of RBM, said those on the panel during the session.
“Though we’ve done close to 600 central monitoring reviews and 200 RBM studies, there are still lots of challenges with implementation; it’s moving and evolving,” said Olgica Klindworth, associate director of data analytics for PPD.
One of the chief roadblocks, said Klindworth, has been terminology. For instance, “key trigger point” is a common term, but does it mean the same thing to everyone using it? Not necessarily, she said. Some use “key risk indicator,” or KRI, and others use something altogether different. In fact, not everyone even calls the process “risk-based monitoring.” Some call it “centralized monitoring.” Some call it something else.
For CROs in particular, it’s important to bring standardization to the process as they work with so many different sponsors. If one sponsor is using one set of terms and another sponsor uses a different one, and the CRO uses yet another, that will cause confusion and possible errors, said Klindworth.
Beyond terminology, the industry also needs to decide on the trigger points themselves, said Nurcan Coşkun, global risk-based monitoring and technology solutions program manager at Medtronic.
Though every trial will of course have its own trigger points based on the nature of the protocol, Coşkun said it’s useful to have standards and KRIs in place for each trial. She shared the standard triggers Medtronic has generated:
- AE/SAE rates (under or over reporting)
- Enrollment rates (high or low enrollers)
- Screen failure rates (high rate)
- Protocol deviation rate (high rate of non-compliance)
- Overdue query rate (greater than 30 days)
- eCRF entry cycle time (long delays in data entry)
- CRA flag — PI oversight (CRA indicates “no” to “PI has adequate oversight of study.”)
- CRA flag — Met with PI (CRA indicates “no” to “Met with PI during IMV.”)
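Triggers like those above amount to a rule set that a central monitoring system evaluates against per-site metrics. The following is a minimal sketch of that idea; the field names, threshold values, and site data are illustrative assumptions, not Medtronic’s actual standards.

```python
# Hypothetical per-site metrics a central monitor might aggregate from
# EDC/CTMS feeds. All numbers are invented for illustration.
SITE_METRICS = {
    "site_101": {"sae_rate": 0.02, "screen_fail_rate": 0.15,
                 "overdue_query_days": 12, "ecrf_entry_days": 4,
                 "pi_oversight_ok": True},
    "site_202": {"sae_rate": 0.00, "screen_fail_rate": 0.55,
                 "overdue_query_days": 41, "ecrf_entry_days": 18,
                 "pi_oversight_ok": False},
}

# Illustrative trigger rules: (metric field, predicate, description).
# Thresholds are assumptions standing in for study-specific limits.
TRIGGERS = [
    ("sae_rate", lambda v: v == 0.0, "possible AE/SAE under-reporting"),
    ("screen_fail_rate", lambda v: v > 0.40, "high screen-failure rate"),
    ("overdue_query_days", lambda v: v > 30, "queries overdue > 30 days"),
    ("ecrf_entry_days", lambda v: v > 14, "long eCRF entry cycle time"),
    ("pi_oversight_ok", lambda v: not v, "CRA flagged inadequate PI oversight"),
]

def flag_sites(metrics):
    """Return {site: [trigger descriptions]} for sites breaching any rule."""
    flags = {}
    for site, m in metrics.items():
        hits = [desc for field, pred, desc in TRIGGERS if pred(m[field])]
        if hits:
            flags[site] = hits
    return flags

if __name__ == "__main__":
    for site, hits in flag_sites(SITE_METRICS).items():
        print(site, "->", "; ".join(hits))
```

In this toy run, site_202 would surface on every rule while site_101 stays quiet — which is the point of centralized monitoring: the reviewer’s attention goes to the outlier site rather than to a fixed visit schedule.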
Preliminary results from MCC’s industrywide survey, due out later this summer, show that 82 percent of respondents are now preparing a central monitoring plan.
When asked whether they now have a set of KRIs they use for each study, 46 percent of respondents said yes, and 41 percent indicated that they add study-specific KRIs based on information from the risk assessment.
Another 32 percent said they are still developing their approach to KRIs.
“Critical thinking” is a concerning area during this protracted birth of RBM, said Keith Dorricott, director at Dorricott Metrics & Process Improvement in the UK and formerly senior director of site start-up and regulatory for INC Research, who worked closely with MCC on its survey and guidance document. The worry is that so many tight processes and SOPs will be put in place to make RBM “work” that those whose hands are on the process each day will cease to think critically about changes that may need to be made midstream.
Said Dorricott, “There is a lot of discussion around critical thinking — the need for people to have the skills to be able to assess the information they are seeing in doing centralized monitoring, and [for them] to be able to dig down further to even know if they should be digging down further, and what actions they should take and [at] what point they should escalate.”
After all, critical thinking is at the very core of RBM, he said.
Checklists seem like an innocuous tool meant only to help, but for those intent on keeping critical thinking robust in the RBM process, they are a worry.
“Checklists have downsides,” said Dorricott. “They can encourage people to just go down the list one by one without really thinking about what they’re doing.”
These issues are discussed in-depth in the guidance document MCC will release later this summer, said Dorricott.
Another area of concern as the industry looks at fully adopting RBM? Duplication of effort. Who does what?
“More often than not, if you have multiple vendors and companies participating in a process, you will end up with some duplication and overlap,” said PPD’s Klindworth.
Despite the vital questions that remain about the very nature of the RBM process, Klindworth said it’s a favorable time to be involved in shaping a process that has the potential to streamline the industry, making data cleaner while making trials cheaper to conduct.
“It’s a great time for all of us, especially for CROs to have an opportunity to work with so many flavors of central monitoring so that we can learn and improve what we do and be better prepared,” she said. “The key is for all of us to remain flexible and nimble in this complex landscape.”