Survey: Investigators find difficulties in paper source documents
Worldwide Clinical Trials has announced findings from a recent global survey that evaluated preferences of investigators who participate in central nervous system (CNS) studies that employ rater training and surveillance methodologies.
Created and conducted by Worldwide, this unique survey sampled the opinions of more than 1,400 principal investigators, sub-investigators and raters responsible for patient assessments. The survey was designed to understand their preferences, frustrations, and perceived benefits related to rater training and rater surveillance in CNS clinical studies, as well as the potential impact of various surveillance methods on rater engagement and data quality.
The findings demonstrate clear preferences for certain methods of rater training, with video demonstrations accompanied by a practice quiz, and certification videos, being the most preferred. The mock interview, in which site raters demonstrate their ability to conduct the assessment in front of an expert rater, was the least preferred method, as respondents perceived the process to be intimidating and imposing. While source document review was the favored method of rater surveillance, respondents noted that submitting paper source documents was also one of the more frustrating aspects of surveillance, along with setting up audio/visual equipment and logging into multiple systems to enter data.
Douglas Lytle, Ph.D., Worldwide’s executive director of Clinical Assessment Technologies, said, “Understanding site rater preferences is essential to enriching the current rater training and data monitoring methods that are known to directly impact study outcomes, and our study revealed some interesting insights on this topic that the industry can use to its advantage.
“Importantly, the preferences observed should not impact the rigorous training and surveillance procedures that are proven to increase the reliability of outcomes. For example, raters reported that they do not like audio/video recording of the assessments that they perform. Rather than concluding that this method should not be performed, the frustrations identified should be used to refine procedures and educate investigators on why these methods are critical to the detection of efficacy signals from the study,” said Lytle.