Rater Success Begins with Open Communication, Targeted Training
Raters are integral to clinical trials that rely on subjective patient data. At the end of the day, sponsors need high-quality, reliable data from their trials, and raters bear direct responsibility for delivering it, especially when they’re assessing the primary outcome. But how do sites make that happen?
Sites must ensure raters have adequate support, a strong line of communication and the opportunity to be involved in the trial process early on, as well as sufficient time and resources to focus on training and their assessments.
Rating scales “literally make or break the success of being able to demonstrate the study endpoints” and variability can occur even among raters trained the same way at the same time, Suzanne Kincaid, chief operating officer for Aperio Clinical Outcomes, told CenterWatch Weekly. She recommends that sites assign two raters to each trial who can consistently be available during patient visit hours and are able to back each other up.
Sites need to characterize and standardize how pain, mobility and other types of scales will be implemented in a trial, even if raters are experienced using the scales in other trials or in clinical practice, adds Anne Blanchard, relationship site manager with Eidos Therapeutics and a clinical research executive consultant. It’s also helpful for sponsors to designate a specific point of contact for raters, someone who can quickly answer questions and address specific situations in line with rater training and instructions, she advises. Sites should be clear with sponsors that they need this designated person for their raters; the person may be provided by a vendor, or sponsors may designate a senior-level person specialized in the trial’s rating scales, such as a clinical operations director, senior psychiatrist or medical director.
Additionally, it may be helpful for sites to put together reference examples raters can keep on hand. These should lay out expectations for evaluating responses and conducting assessments, along with common mistakes made when using the scales at hand. A reference document might also provide a question-and-answer section featuring approved and validated responses the study team is using across all sites, Blanchard says.
In a pain trial, for example, rater consistency may be improved when raters have written examples of what each number on a pain scale means, says Heather Addington, clinical trial manager at Vanderbilt-Ingram Cancer Center. Doing this will help ensure that raters are using the same descriptions when they’re asking participants to rate their pain.
“If you want consistency, you have to spell it out somewhere, otherwise you won’t get it,” she says.
Raters are most often members of site staff but can be provided by third-party vendors when blinding, for example, is needed. It’s helpful to bring raters into the fold of a trial early on, Kincaid says.
To this end, allow raters to be part of the site qualification process and the pretrial visits conducted by the sponsor/CRO. This lets raters learn what’s expected of them well in advance, rather than seeing the final protocol shortly before the trial is set to begin.
High-quality raters share several attributes. Chief among them is strong experience with the trial population at hand, the presentation of the disorder’s core symptoms and their variations, and the assessment scales or tools used to evaluate trial participants, says Mark Opler, chief research officer for WCG Clinical Endpoint Solutions. Raters on brain disease trials, for instance, will need a deep understanding of this area, as patients with brain diseases can and do present very differently from one another.
To ensure raters devote proper time and focus to their duties, sites should step back and compare the flow of patients at the site against the availability of their raters, Kincaid advises. Doing this, a site might conclude that it should limit the days of the week patients come in for their study visits, if that’s feasible, giving raters more time and breathing room.
“Maybe it’s just two days a week that are ideal for their rater staff schedules to ensure they have the same rater and aren’t burning out their staff on top of their already busy workdays,” she says. “If you are going to do research, you have to carve out time for research. It might mean sacrificing some other billable time in your normal practice.”
Although it’s essential that raters are well-trained, some raters say they’re still forced to repeat training on the same scales or tools within short time periods. This burden can be especially heavy for those working on trials across different sponsors, Opler notes. If possible, rater training should apply across all of the trial programs a rater is working on. Eliminating redundant training frees up raters to put more time toward their primary duties, he says. It can also help cut down on performance-related issues, such as human error.
To reduce rater burden further, sites should have multiple raters trained on conducting key assessments, a strong track record of training participants to accurately report their symptoms, and up-to-date, inspection-ready training records.
Having qualified raters “can be seen as just another function, a necessary service that has to be provided … or it can be seen as a differentiating factor. From the perspective of someone who interacts with lots and lots of raters all across the world, a site with high-quality raters who are given the appropriate support, status and consideration is an absolute delight to work with,” Opler says.