NIH official sees need to evaluate effectiveness of IRBs in protecting study volunteers
Institutional Review Boards (IRBs) are central to the system of protecting human subjects who participate in clinical research, yet concerns have been raised that there is no way to measure whether IRBs actually do their job of protecting study volunteers from unnecessary risk of harm.
Christine Grady, R.N., Ph.D., acting chief of the National Institutes of Health (NIH) Clinical Center’s Department of Bioethics, said “serious efforts” are needed to determine the effectiveness of the IRB system in protecting research participants. Grady has called for regulatory guidance to help clarify the goals and responsibilities of IRBs and for the establishment of an objective system to measure whether IRBs meet those goals.
Grady made her appeal in a commentary published last month in the Journal of the American Medical Association (JAMA). In the article, Grady noted that IRBs are often criticized for using inconsistent and ineffective practices and for focusing on paperwork and bureaucratic compliance rather than on protecting research participants. Yet she said there are no published studies that evaluate how effective IRBs are in protecting clinical trial subjects and, without these data, it is unclear whether IRBs meet their goals.
“I decided, with a colleague, to do a systematic review of the empirical literature that has evaluated IRBs,” Grady said. “We were struck by the fact that even though there were 45 studies during the past 20 to 30 years, none even attempted to measure how effective IRBs were in meeting their goal of protecting human subjects.”
IRBs originally were created to safeguard the rights of trial subjects and to ensure the ethical conduct of research. Yet, as the clinical research industry has grown into a complex, global venture, the scope of IRB responsibilities also has increased. At the same time, each IRB has developed its own practices for reviewing research. For example, in addition to ensuring compliance with human subjects regulations, some IRBs also are responsible for scientific review, for collecting conflict-of-interest information on investigators, and for compliance with privacy regulations. “There are a lot of things that have crept into the realm of IRBs,” Grady said. “I think it would be very helpful to have some clarification on what the IRB is supposed to do.”
Grady said the lack of data evaluating the effectiveness of IRBs is “complicated” by the fact that there are no agreed-upon metrics for measuring the success of these boards. For example, she said protecting patients from unnecessary or excessive risk is an important measure of IRB effectiveness, yet there is no system-wide collection of data on research risks or procedures for aggregating risks across studies. “The IRB that I sit on spends a considerable amount of time at every meeting talking about adverse events that have been reported, but there isn’t any central way to put those all together,” Grady said. “We don’t really know the kinds of risks that people experience in research across the board. We don’t know whether or not IRBs are reducing those risks in any way.”
The JAMA commentary has received widespread support from IRBs and others in the research community. Raffaella Hart, director of Biomedical Research Alliance of New York (BRANY) IRB, said many IRB professionals agree that empirical evidence might help IRB review become more meaningful and eliminate tasks that are not adding to the protection of human subjects.
“Those involved in IRB work recognize that the current system is not a one-size-fits-all solution for overseeing all types of research, but the current system was formed in large part as a response to research atrocities that occurred when no oversight mechanism was in place,” said Hart. “The movement to accredit IRBs speaks to the fact that the IRB community recognizes the need for more consistency in IRB review processes, and while consistency is not a direct measure of IRB effectiveness, at least it levels the playing field.”