Vendor Metrics Should Look at Data from Different Angles, Expert Says
When developing a plan to gather and analyze vendor performance metrics, the first question to ask is how wide to cast the net for collecting data.
Metrics can be gathered and analyzed in multiple ways, but a reliable approach is to look at four levels: portfolio, study, country and investigative site, says Metrics Champion Consortium (MCC) Executive Director Linda Sullivan.
Sullivan says an organization can start developing a vendor metrics plan by holding an internal discussion about the specific key performance questions (KPQs) it is seeking to answer. What MCC is trying to get at is “what is that critical success factor, that outcome that you’re trying to achieve,” says Sullivan.
In the newest MCC toolkit on vendor oversight metrics, Sullivan says, aggregating metrics at the portfolio level, across all studies regardless of type, is especially useful for operational matters such as CRO oversight, site contracting and risk management.
The next level involves aggregating data across all sites in a single study, which provides a deeper dive into metrics collected at the portfolio level, Sullivan says.
For example, what if an organization finds that 85 percent of studies in its portfolio are meeting startup targets? The next logical question, she says, is which studies in the remaining 15 percent are not meeting targets and why they are behind. “So you have to be able to drop down to the study level,” she says. “And then within the study you might be saying, ‘are there certain sites that are causing this problem?’”
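Translated into code, that drill-down might look like the following minimal sketch, assuming a hypothetical table of site-level startup records; the column names and the rule that a study meets its target only when every site does are assumptions for illustration, not MCC definitions.

```python
# Minimal drill-down sketch: portfolio -> study -> site.
# Data and column names are invented for illustration.
import pandas as pd

records = pd.DataFrame({
    "study":     ["A", "A", "A", "B", "B", "C", "C"],
    "site":      [101, 102, 201, 103, 202, 203, 204],
    "days_late": [0, 0, 0, 12, 30, 0, 5],  # days past the startup target
})

# Study level: assume a study meets its startup target only if every site did.
study_met = records.groupby("study")["days_late"].max().le(0)

# Portfolio level: the single aggregated number (e.g., "85 percent").
print(f"Studies meeting startup targets: {study_met.mean():.0%}")

# Drop down: which studies missed, and which of their sites are behind.
lagging_studies = study_met[~study_met].index
lagging_sites = records[records["study"].isin(lagging_studies)
                        & records["days_late"].gt(0)]
print(lagging_sites)
```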
As with the study-level metrics plan, looking at metrics across all studies conducted in a single country gives a broader view that can surface country-specific issues.
No matter what level an organization chooses, the next step in metrics development focuses on defining the factors considered critical to success and then identifying KPQs that will help determine whether an organization is meeting its goals.
Metrics for working with vendors should consider four dimensions, Sullivan says: time, cost, quality and relationship. Critical success factors could include staying on schedule (time), timely payment of investigators (cost), minimal number and appropriate handling of significant issues (quality), and effective decision-making and communication (relationship).
KPQs should be developed for each critical success factor. For vendor oversight, MCC has developed 29 questions covering nine critical success factors. For example, for the critical success factor of minimal significant issues, questions include:
- Are significant issues escalated per the applicable plan or SOP?
- Are action plans developed in a timely manner?
- How many action plans have overdue actions?
- What percentage of action plans are found to be effective?
The next step is deciding what to measure to get the answers to these questions and how to aggregate the resulting data. Looking at the data from several different perspectives can provide a more complete view, Sullivan says.
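To make that translation step concrete, the two action-plan questions above could be measured from something like the following minimal sketch, assuming a hypothetical action-plan table; the columns and values are invented for the example, not MCC’s definitions.

```python
# Minimal sketch: computing two KPQ answers from an invented action-plan table.
import pandas as pd

plans = pd.DataFrame({
    "plan_id":         [1, 2, 3, 4],
    "overdue_actions": [0, 3, 1, 0],              # open actions past due date
    "effective":       [True, False, True, True], # outcome of effectiveness review
})

# "How many action plans have overdue actions?"
print("Plans with overdue actions:", plans["overdue_actions"].gt(0).sum())

# "What percentage of action plans are found to be effective?"
print(f"Effective action plans: {plans['effective'].mean():.0%}")
```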
For instance, looking at milestones achieved across all active studies will provide a different picture than aggregating the same data for only a three-month period or by study phase, she says.
“If you’re looking at this big aggregated percentage, it may take a long time before the number’s going to move because you’re averaging,” she says. “You could have a real emerging problem and you don’t realize it. Or you may be improving things, but you don’t see it because of all the bad performance in the past. And so at that three-month mark, you can start to see if there’s movement one way or another.”
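Her point about averaging can be illustrated with a minimal sketch; the monthly on-time rates below are invented for the example.

```python
# Minimal sketch: cumulative (all-time) average vs. a trailing three-month
# window. The monthly on-time milestone rates are invented for illustration.
import pandas as pd

monthly = pd.Series(
    [0.90, 0.88, 0.91, 0.89, 0.70, 0.65, 0.60],
    index=pd.period_range("2019-01", periods=7, freq="M"),
)

cumulative = monthly.expanding().mean()  # the "big aggregated percentage"
trailing = monthly.rolling(3).mean()     # the three-month view

print(pd.DataFrame({"cumulative": cumulative, "trailing_3mo": trailing}).round(2))
```

In the final month the cumulative average still sits near 79 percent, while the trailing three-month figure has already fallen to 65 percent, flagging the emerging problem months sooner.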
“To look at performance across studies is not a slam dunk,” she says.
For more information on MCC toolkits, click here: https://bit.ly/2UNBKFs.