The Tufts Center for the Study of Drug Development (CSDD) sparked a public debate in 2014 with its $2.6 billion estimate of the cost to develop a new drug. Tufts CSDD has now published a peer-reviewed article in the Journal of Health Economics that defends the study’s methodology and details the components of the drug development process included in its calculation of overall R&D costs.
The study, which used the same methodology as earlier R&D cost assessments, found that the cost to develop and gain marketing approval for a new drug has increased at an annual rate of 8.5% since 2003. When post-approval R&D costs, which include long-term safety monitoring, are added, the full life-cycle cost of an approved drug rises to $2.9 billion.
The article, available online, contains more detailed analysis and discussion of the study than was provided in November 2014 when results of the study were first announced, and includes a 55-page supplement with additional information on secondary results, sensitivity analysis and validation methods.
Assumptions used to calculate the R&D cost figures, however, have been met with skepticism by organizations including Doctors Without Borders/Médecins Sans Frontières and the patient advocacy group Union for Affordable Cancer Treatment (UACT). Both groups have publicly called the figures misleading and the methodology flawed. Yet Tufts researchers don’t expect the peer-reviewed article to change the viewpoints of these organizations and others who have condemned the study in the media.
The debate over the Tufts CSDD study results highlights the importance of drug development costs to the industry and the public. The analysis was based, in part, on information provided by 10 pharmaceutical companies on 106 randomly selected drugs that were first tested in human subjects from 1995 to 2007.
Much of the controversy around the Tufts CSDD’s $2.6 billion cost estimate for developing a new medicine has focused on concerns that drug companies will use the number to justify high prices for new drugs.
“The argument for high (drug) prices is that companies spend a lot on R&D,” said UACT member and cancer patient Manon Ress, Ph.D., who believes the Tufts CSDD study inflates R&D costs. “They should not be able to exaggerate what they spend and then make us pay super high prices on numbers that are not accurate or relevant for cancer drugs.”
However, John LaMattina, the former president of Pfizer Global R&D and now a contributor to Forbes, argues that the price of new drugs and R&D costs are unrelated; in his view, new drugs are priced based on the value they bring to the healthcare system. Gilead could initially charge $1,000 a pill for its hepatitis C drug Sovaldi, for example, because the drug reduced the burden of the disease for patients and saved healthcare systems money, LaMattina said.
“New drugs have to bring value. Period,” said LaMattina. “The [final] costs have little to do with the costs needed to bring these breakthrough medicines to patients. Unfortunately, the pharmaceutical industry’s reputation has been in the dumps for a while and people will automatically assume the worst and latch onto things that support their views.”
Among the other main points of controversy is the methodology, which takes into account time, or opportunity, costs for developing a new drug. The $2.6 billion figure for an approved drug is based on average out-of-pocket costs of $1.4 billion to bring a new drug to market, plus another $1.2 billion for the return on capital foregone by investors while a drug is in development. Some critics, including Harvard Medical School professor Jerry Avorn, M.D., argued that the 10.5% cost of capital used in the study was too high. The Tufts CSDD authors said the discount rate represented funding requirements actually experienced by drug developers in the period analyzed. Other critics said time or opportunity costs shouldn’t be included in the R&D cost analysis at all.
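The mechanics of the time-cost component can be illustrated with a short sketch. The yearly spending profile below is hypothetical (the study's actual cash-flow data are not public); it simply shows how compounding out-of-pocket outlays forward to the approval date at a cost of capital, here 10.5%, produces a capitalized total well above the cash actually spent.

```python
# Illustrative sketch only -- the annual outlays are invented placeholders,
# not the Tufts CSDD study's internal cash-flow figures.

def capitalized_cost(spending_by_year, r):
    """Compound each year's outlay forward to the approval date at rate r.

    spending_by_year: outlays in order; index 0 is the earliest year,
    the last entry is the year of approval.
    """
    n = len(spending_by_year)
    return sum(s * (1 + r) ** (n - 1 - i) for i, s in enumerate(spending_by_year))

# Hypothetical profile: $140M per year over 10 years = $1.4B out of pocket
spend = [140_000_000] * 10
out_of_pocket = sum(spend)                       # $1.4 billion in cash
capitalized = capitalized_cost(spend, 0.105)     # same outlays, compounded at 10.5%
time_cost = capitalized - out_of_pocket          # the "return foregone" component
```

With this flat (hypothetical) spending profile, the time cost alone approaches $0.9 billion, showing why the capitalized figure is so sensitive to the discount-rate assumption the critics dispute.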
“There are some people who just don’t get this notion of time or opportunity cost so they reject the whole exercise out of hand. They don’t want any number applied to it. But there is no rational economic basis for ignoring those kinds of costs,” said Joseph A. DiMasi, director of economic analysis at Tufts CSDD and principal investigator for the study.
Drug failures are another key contributor to the Tufts CSDD’s calculation of development costs. The study estimates that only 11.8% of drugs that enter clinical testing are approved, compared with 21.5% in the 2003 study. DiMasi said the figure was based on information publicly available from commercial pipeline databases and included nearly 1,500 molecules that met survey-inclusion criteria from a broad range of companies. The 88.2% failure rate for drugs that enter clinical testing is consistent with results from other studies.
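The way failures feed into the per-approval figure is simple arithmetic: each approved drug must, on average, absorb the spending on the candidates that failed. A minimal sketch, using the study's 11.8% success rate but a hypothetical per-candidate cost:

```python
# Sketch of the failure-rate adjustment -- cost_per_candidate is an
# invented illustration, not a figure from the study.
success_rate = 0.118            # study estimate: 11.8% of clinical candidates approved
cost_per_candidate = 100e6      # hypothetical average clinical spend per candidate

attempts_per_approval = 1 / success_rate                # ~8.5 candidates per approval
cost_per_approval = cost_per_candidate / success_rate   # failures' costs folded in
```

At an 11.8% success rate, roughly 8.5 candidates must enter the clinic for each approval, so the per-approval cost is about 8.5 times the per-candidate cost, whatever that cost actually is.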
Another criticism was that the Tufts CSDD study didn’t take into account money the National Institutes of Health and other organizations spend on basic and clinical research, which leads to targets sponsor companies can develop. DiMasi said the study was designed to measure only the amounts private developers actually spent on development and that linking scientific contributions from government-funded or nonprofit sources to specific new therapies would be difficult.
“That would be interesting to know, but it would be a daunting task,” said DiMasi.
DiMasi said the Tufts CSDD report focused its study on drug development costs, rather than drug prices. Although R&D costs, combined with the price environment, can help determine incentives for sponsor companies to invest in new drug innovation, he said the Tufts CSDD study didn’t connect R&D costs to the prices charged for individual drugs.
“When it comes down to the price of a specific product, past R&D costs are actually irrelevant because they are sunk costs,” said DiMasi. “Drug prices are determined by other factors such as the value that the new drug brings to patients and payers, the competitive landscape and what policies or reimbursement strategies they face with respect to payers.
“Economists haven’t criticized this study,” said DiMasi. “The methodology has been well-vetted a number of times in high-quality, peer-reviewed journals. But the prime-movers behind the criticisms won’t change their position.”
Karyn Korieth has been covering the clinical trials industry for CenterWatch since 2003. Her 30-year journalism career includes work in local news, the healthcare industry and national magazines. Karyn holds a Master of Science degree from the Columbia University Graduate School of Journalism. Email firstname.lastname@example.org.
This article was reprinted from Volume 20, Issue 11, of CWWeekly, a leading clinical research industry newsletter providing expanded analysis on breaking news, study leads, trial results and more.