Researchers at Cincinnati Children’s Hospital have come up with a novel approach to the ever-challenging problem of patient recruitment for clinical trials: a mathematical algorithm that enables computers to predict whether a patient will accept an invitation to enroll in a trial.
A type of machine learning technology, the algorithm essentially teaches computers to make predictions about patient behavior by analyzing and interpreting patient demographic and socioeconomic data, while factoring in characteristics of the trial itself, such as length. The technique has been widely used for a variety of clinical tasks, such as identifying signs and symptoms of specific diseases.
Although the use of demographics and other data to help identify potential trial participants is not new, the concept of automating the process to predict a patient’s attitudes toward joining is an innovation. “It’s the first time I’ve seen this type of technology for patient recruitment and it is something more sites should take note of,” said Matt Kibby, principal at BBK Worldwide.
One of the upsides of automation is saving the time required to manually process data in busy clinical environments. “About 30% of the time involved in patient recruitment is spent reviewing patient profiles,” said Yizhao Ni, Ph.D., an instructor in the Division of Biomedical Informatics at Cincinnati Children’s Hospital Medical Center, where the algorithm was developed. “There is a critical need for automated methods to analyze influential factors on patients’ decision-making to support patient-directed precision recruitment.”
To that end, Ni and his colleagues recently published the results of their first major test of the algorithm in the Journal of the American Medical Informatics Association. For the study, the researchers evaluated their algorithm at Cincinnati Children’s Emergency Department—an urban trauma center with some 70,000 patient visits annually. From 2010 to 2012, the algorithm was used to predict responses to 3,345 invitations to 18 different clinical trials.
The computer analyzed more than 40 variables including patient race, pain score, time spent in the emergency department and the number of follow-up visits required by the trial. The computer’s forecasts were compared with a “random-response-prediction program” developed to simulate the department’s current manual, per-patient-visit recruitment method.
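The study does not specify which model the team used, but the general approach—predicting accept/decline from patient and trial features—can be sketched with a simple logistic regression. Everything below is illustrative: the three features (pain score, hours in the emergency department, follow-up visits required) are drawn from the variables mentioned above, while the data values and training details are invented for demonstration.

```python
# A minimal sketch (not the authors' actual model): logistic regression
# trained by batch gradient descent to predict whether a patient will
# accept a trial invitation. Pure stdlib, no external dependencies.
import math

def sigmoid(z):
    """Numerically stable logistic function."""
    if z >= 0:
        return 1.0 / (1.0 + math.exp(-z))
    ez = math.exp(z)
    return ez / (1.0 + ez)

def train_logistic(X, y, lr=0.1, epochs=2000):
    """Fit weights and bias by simple batch gradient descent."""
    n_features = len(X[0])
    w = [0.0] * n_features
    b = 0.0
    for _ in range(epochs):
        grad_w = [0.0] * n_features
        grad_b = 0.0
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi  # gradient of log-loss w.r.t. the logit
            for j in range(n_features):
                grad_w[j] += err * xi[j]
            grad_b += err
        for j in range(n_features):
            w[j] -= lr * grad_w[j] / len(X)
        b -= lr * grad_b / len(X)
    return w, b

# Hypothetical training data. Features per patient:
# [pain score (0-10), hours spent in ED, follow-up visits the trial requires]
X = [[2, 1.0, 0], [8, 4.0, 3], [5, 2.0, 1],
     [9, 5.0, 4], [1, 0.5, 0], [6, 3.0, 2]]
y = [1, 0, 1, 0, 1, 0]  # 1 = accepted the invitation, 0 = declined

w, b = train_logistic(X, y)

# Predicted probability that a new patient (low burden) accepts.
new_patient = [3, 1.5, 0]
p = sigmoid(sum(wj * xj for wj, xj in zip(w, new_patient)) + b)
```

In practice a model like this would be trained on the 40-plus variables the study describes and on thousands of historical invitations; the learned weights also indicate which factors push a prediction toward accept or decline.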
Compared with the simulation program, the machine learning algorithm was significantly better at predicting a patient’s response to an enrollment invitation: only 60% of the patients chosen the traditional way accepted, versus about 70% of those identified via the algorithm. “Further refinements are still required to improve algorithm accuracy,” said Ni, whose team aims to predict acceptance rates with even greater precision. Although the algorithm was tested with pediatric patients, it can also be applied to adult populations, according to Ni.
“This is a great approach to validating what we intuitively know about patient recruitment,” said Kibby. The algorithm shows promise for harnessing big data and computing power to significantly accelerate recruitment.
Although promising, the growing use of big data and machine learning has sparked widespread concern about its potential for perpetuating and even introducing bias into the healthcare system, including the clinical trial process. Misgivings have reached the upper echelons of the White House.
“Our challenge is to support growth in the beneficial use of big data, while ensuring that it does not create unintended discriminatory consequences,” stated a new report from the Obama Administration’s Big Data Working Group called Big Data: A Report on Algorithmic Systems, Opportunity, and Civil Rights. “As improvements in the uses of big data and machine learning continue, it will remain important not to place too much reliance on these new systems without questioning and continuously testing the inputs and mechanics behind them and the results they produce. ‘Data fundamentalism’—the belief that numbers cannot lie and always represent objective truth—can present serious and obfuscated bias problems.”
But used judiciously, data-driven innovations may have the opposite effect. Efforts to measure and understand patients’ motivations for enrolling in—or passing on—clinical trials can be instrumental in reaching underserved and underrepresented patient populations, said Kibby.
“This is an unprecedented opportunity to roll back bias in healthcare,” echoed Joshua New, policy analyst for the Center for Data Innovation.
Historically, racial and ethnic minorities, as well as women, have been underrepresented in clinical trials. Hispanics, for example, represent approximately 16% of the U.S. population but only 1% of clinical trial participants, according to the Center for Data Innovation. Some drugs that have only been tested in men have even been pulled from the market due to unexpected side effects occurring in women.
“A potential solution to concerns about bias during the recruitment process is to focus on the influential variables behind the decision to accept or decline an invitation, and then overcome them,” said Ni. For example, if the algorithm suggests that African American patients are likely to steer clear of a particular type of trial, researchers can work to understand the barriers and find better ways to engage with the patients.
“Big data does not create the bias, but it can reveal bias that has always been there thanks to humans running the show,” said New. “With innovations like machine learning algorithms, we have an unprecedented ability to measure and scrutinize processes such as clinical trials, and figure out ways to combat this bias.”
This article was reprinted from Volume 20, Issue 19, of CWWeekly, a leading clinical research industry newsletter providing expanded analysis on breaking news, study leads, trial results and more.