With federal research budgets declining, researchers who typically raise money from outside sources to pay for clinical research have an alternative: charge patients to participate in clinical trials that otherwise would not get funded.
A group of bioethicists, however, recently studied the idea and overwhelmingly agreed that patient-funded trials may do more harm than good. They concluded that, rather than accelerating biomedical progress, such trials may delay innovation by diverting resources. Moreover, they may hurt the very people they are intended to benefit.
Those are the findings of Ezekiel J. Emanuel, M.D., Ph.D., the former White House policy advisor who now chairs the Department of Medical Ethics and Health Policy at the Perelman School of Medicine at the University of Pennsylvania. Writing in Science Translational Medicine as lead author with four bioethicist co-authors, Emanuel outlined the arguments for and against the concept of “pay-to-play” research and concluded that self-financed participation compromises the overall integrity of clinical research.
Their paper was prompted by a call from a group of academic investigators asking Emanuel to review the legality and ethics of charging for enrollment in early-phase clinical trials. They were reportedly frustrated over shrinking budgets and wanted to charge research participants as a way to fund research that otherwise wouldn’t move forward. The strongest argument for having research participants directly fund the project, to have them “pay to play,” is that the study would not move forward without their financial support, and that patients should be free to do what they want with their money as long as it harms no one else.
The authors, however, cited several concerns.
Emanuel and his team’s views were shared by others, including another ethicist who cited how “pay to play” threatens clinical trial design, fair subject selection, and scientific rigor and relevance.
“I just don’t think people will find this idea of patients funding their own trials as attractive, in part because the idea of financial incentives for vulnerable, sick people does not make much sense—it is morally suspect,” said Arthur Caplan, Ph.D., director of the Division of Medical Ethics in the Department of Population Health at New York University’s Langone Medical Center. “Research is about knowledge. People jump into these kinds of studies thinking of getting a cure, which never happens. It also reinforces the therapeutic misconception that they are getting a proven treatment.”
Caplan and others have acknowledged one possible exception: when Institutional Review Boards (IRBs) are involved and all the risks, payments, protocols, adverse events and required binding signatures are addressed.
The authors acknowledge that in pay-to-play arrangements, the IRB would have to determine whether the potential direct benefits and knowledge gains from the research justify the financial losses participants incur by enrolling in the trial, and whether any risks would be justified if the study might fail to recruit enough paying participants and have to be abandoned.
“[Emanuel and his team of ethicists] made some interesting points in their discussion about IRB concerns and IRB reviews,” said Lindsay McNair, M.D., chief medical officer and president of consulting services at the WIRB-Copernicus Group (WCG). “They captured what an IRB looks for, covering a lot of the key issues, but [WCG is] wary of seeing rules about what is always acceptable, as clinical trials can vary. It is important that IRBs, like us, look at each study individually and carefully assess the pros and cons.”
This article was reprinted from Volume 19, Issue 32, of CWWeekly, a leading clinical research industry newsletter providing expanded analysis on breaking news, study leads, trial results and more.