
Saturday, March 18, 2017

The Streetlight Effect and 505(b)(2) approvals

It is a surprisingly common peril among analysts: we don’t have the data to answer the question we’re interested in, so we answer a related question where we do have data. Unfortunately, the new answer turns out to shed no light on the original interesting question.

This is sometimes referred to as the Streetlight Effect – a phenomenon aptly illustrated by Mutt and Jeff over half a century ago:


This is the situation that the Tufts Center for the Study of Drug Development seems to have gotten itself into in its latest "Impact Report". It’s worth walking through how an interesting question can end up producing an uninteresting answer.

So, here’s an interesting question:
My company owns a drug that may be approvable through FDA’s 505(b)(2) pathway. What is the estimated time and cost difference between pursuing 505(b)(2) approval and conventional approval?
That’s "interesting", I suppose I should add, for a certain subset of folks working in drug development and commercialization. It’s only interesting to that peculiar niche, but for those people I suspect it’s extremely interesting - because it is a real situation that a drug company may find itself in, and there are concrete consequences to the decision.

Unfortunately, this is also a really difficult question to answer. As phrased, you'd almost need a randomized trial to answer it. Let’s create a version which is less interesting but easier to answer:
What are the overall development time and cost differences between drugs seeking approval via 505(b)(2) and conventional pathways?
This is much easier to answer, as pharmaceutical companies could look back on the development times and costs of all their compounds and directly compare the different types. It is, however, a much less useful question. Many new drugs are simply not eligible for 505(b)(2) approval, and if those drugs are substantially different in any way (riskier, more novel, etc.), then they will skew the comparison in highly non-useful ways. In 2014, only 1 drug classified as a New Molecular Entity (NME) went through 505(b)(2) approval, versus 32 that went through conventional approval, and there are many other qualities that set 505(b)(2) drugs apart.

Extreme qualitative differences of 505(b)(2) drugs.
Source: Thomson Reuters analysis via RAPS

So we’re likely to get a lot of confounding factors in our comparison, and it’s unclear how the answer would (or should) guide us if we were truly trying to decide which route to take for a particular new drug. It might help us if we were trying to evaluate a large-scale shift to prioritizing 505(b)(2) eligible drugs, however.

Unfortunately, even this question is apparently too difficult to answer. Instead, the Tufts CSDD chose to ask and answer yet another variant:
What is the difference in the time FDA’s internal review process takes for 505(b)(2) versus conventionally-approved drugs?
This question has the supreme virtue of being answerable. In fact, I believe that all of the data you’d need is contained in the approval letters that FDA publishes for each newly approved drug.
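To illustrate how mechanical that comparison is, here is a minimal sketch of the calculation (the column names, drugs, and dates are hypothetical, not the CSDD’s actual data):

```python
import pandas as pd

# Hypothetical dataset: one row per approved drug, with NDA submission and
# approval dates of the kind found in FDA's posted approval letters.
approvals = pd.DataFrame({
    "drug":      ["A", "B", "C", "D"],
    "pathway":   ["505(b)(2)", "505(b)(1)", "505(b)(2)", "505(b)(1)"],
    "submitted": pd.to_datetime(["2013-02-01", "2013-05-15", "2014-01-10", "2014-03-30"]),
    "approved":  pd.to_datetime(["2014-04-01", "2014-03-01", "2015-06-20", "2015-01-15"]),
})

# FDA review time = approval date minus submission date, expressed in months.
approvals["review_months"] = (approvals["approved"] - approvals["submitted"]).dt.days / 30.44

# Average review time by approval pathway -- which is the entire scope of the question.
print(approvals.groupby("pathway")["review_months"].mean().round(1))
```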

But at the same time, it isn’t a particularly interesting question anymore. The promise of the 505(b)(2) pathway is that it should reduce total development time and cost, but on both those dimensions, the report appears to fall flat.
  • Cost: This analysis says nothing about reduced costs – those savings would mostly come in the form of fewer clinical trials, and this focuses entirely on the FDA review process.
  • Time: FDA review and approval is only a fraction of a drug’s journey from patent to market. In fact, it often takes up less than 10% of the time from initial IND to approval. So any differences in approval times will likely be overshadowed by differences in time spent in development (see the sketch after this list).
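A rough back-of-the-envelope comparison, with entirely hypothetical durations, shows why the development phase dominates:

```python
# Back-of-the-envelope comparison (all durations hypothetical): a modest saving in
# development time swamps even a sizable penalty in FDA review time.
dev_conventional = 96       # months from initial IND to NDA submission, conventional route
dev_505b2 = 72              # assume 505(b)(2) avoids some duplicative trials
review_conventional = 10    # months of FDA review, conventional approval
review_505b2 = 13           # months of FDA review, 505(b)(2) (assume it is slower)

total_conventional = dev_conventional + review_conventional   # 106 months
total_505b2 = dev_505b2 + review_505b2                        # 85 months

print(f"505(b)(2): {total_conventional - total_505b2} months faster overall, "
      f"despite a {review_505b2 - review_conventional}-month longer FDA review.")
# -> 505(b)(2): 21 months faster overall, despite a 3-month longer FDA review.
```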
But even more fundamentally, the problem here is that this study gives the appearance of providing an answer to our original question, but in fact is entirely uninformative in this regard. The accompanying press release states:
The 505(b)(2) approval pathway for new drug applications in the United States, aimed at avoiding unnecessary duplication of studies performed on a previously approved drug, has not led to shorter approval times.
This is more than a bit misleading. The 505(b)(2) statute does not in any way address approval timelines – that’s not its intent. So showing that it hasn’t led to shorter approval times is less of an insight than it is a natural consequence of the law as written.

Most importantly, showing that 505(b)(2) drugs had a longer average approval time than conventionally-approved drugs in no way should be interpreted as adding any evidence to the idea that those drugs were slowed down by the 505(b)(2) process itself. Because 505(b)(2) drugs are qualitatively different from other new molecules, this study can’t claim that they would have been developed faster had their owners initially chosen to go the route of conventional approval. In fact, such a decision might have resulted in both increased time in trials and increased approval time.

This study simply is not designed to provide an answer to the truly interesting underlying question.

[Disclosure: the above review is based entirely on a CSDD press release and summary page. The actual report costs $125, which is well in excess of this blog’s expense limit. It is entirely possible that the report itself contains more-informative insights, and I’ll happily update this post if that should come to my attention.]

Wednesday, February 22, 2017

Establishing efficacy - without humans?

The decade following passage of FDAAA has been one of easing standards for drug approvals in the US, most notably with the advent of “breakthrough” designation created by FDASIA in 2012 and the 21st Century Cures Act in 2016.

Although, as of this writing, there is no nominee for FDA Commissioner, it appears to be safe to say that the current administration intends to accelerate the pace of deregulation, mostly through further lowering of approval requirements. In fact, some of the leading contenders for the position are on record as supporting a return to pre-Kefauver-Harris days, when drug efficacy was not even considered for approval.
Build a better mouse model, and pharma will
beat a path to your door - no laws needed.

In this context, it is at least refreshing to read a proposal to increase efficacy standards. This comes from two bioethicists at McGill University, who make the somewhat-startling case for a higher degree of efficacy evaluation before a drug begins any testing in humans.
We contend that a lack of emphasis on evidence for the efficacy of drug candidates is all too common in decisions about whether an experimental medicine can be tested in humans. We call for infrastructure, resources and better methods to rigorously evaluate the clinical promise of new interventions before testing them on humans for the first time.
The authors propose some sort of centralized clearinghouse to evaluate efficacy more rigorously, though it is unclear what standards they envision this new multispecialty review body applying when deciding whether to green-light a drug to enter human testing. Instead, they propose three questions:
  • What is the likelihood that the drug will prove clinically useful?
  • Assume the drug works in humans. What is the likelihood of observing the preclinical results?
  • Assume the drug does not work in humans. What is the likelihood of observing the preclinical results?
These seem like reasonable questions, I suppose – and are likely questions that are already being asked of preclinical data. They certainly do not rise to the level of providing a clear standard for regulatory approval, though perhaps it’s a reasonable place to start.
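Taken together, the second and third questions amount to asking for a likelihood ratio for the preclinical evidence, which could then update whatever prior you place on the first question. A minimal sketch of that arithmetic, with entirely hypothetical numbers that do not come from the Nature comment:

```python
# Hypothetical worked example: the reviewers' three questions as a Bayesian update.
# None of these numbers come from the Nature comment.
prior_works = 0.10        # Q1: prior probability the drug will prove clinically useful
p_data_if_works = 0.60    # Q2: P(observed preclinical results | drug works in humans)
p_data_if_not = 0.30      # Q3: P(observed preclinical results | drug does not work)

likelihood_ratio = p_data_if_works / p_data_if_not   # 2.0

# Bayes' rule: probability the drug works, given the preclinical data.
posterior_works = (p_data_if_works * prior_works) / (
    p_data_if_works * prior_works + p_data_if_not * (1 - prior_works)
)
print(f"Likelihood ratio: {likelihood_ratio:.1f}")
print(f"P(works | preclinical data): {posterior_works:.2f}")   # about 0.18
```

The arithmetic is trivial; the hard part, as the rest of this post argues, is that nobody can currently estimate those two likelihoods with much accuracy.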

The most obvious counterargument here is one that the authors curiously don’t pick up on at all: if we had the ability to accurately (or even semiaccurately) predict efficacy preclinically, pharma sponsors would already be doing it. The comment notes: “More-thorough assessments of clinical potential before trials begin could lower failure rates and drug-development costs.” And it’s hard not to agree: every pharmaceutical company would love to have even an incrementally-better sense of whether their early pipeline drugs will be shown to work as hoped.

The authors note
Commercial interests cannot be trusted to ensure that human trials are launched only when the case for clinical potential is robust. We believe that many FIH studies are launched on the basis of flimsy, underscrutinized evidence.
However, they do not produce any evidence that industry is in any way deliberately underperforming their preclinical work, merely that preclinical efficacy is often difficult to reproduce and is poorly correlated with drug performance in humans.

Pharmaceutical companies have many times more candidate compounds than they can possibly afford to put into clinical trials. Figuring out how to lower failure rates – or at least the total cost of failure - is a prominent industry obsession, and efficacy remains the largest source of late-stage trial failure. This quest to “fail faster” has resulted in larger and more expensive phase 2 trials, and even to increased efficacy testing in some phase 1 trials. And we do this not because of regulatory pressure, but because of hopes that these efforts will save overall costs. So it seems beyond probable that companies would immediately invest more in preclinical efficacy testing, if such testing could be shown to have any real predictive power. But generally speaking, it does not.

As a general rule, we don’t need regulations that are firmly aligned with market incentives; we need regulations if and when we think those incentives might run counter to the general good. In this case, there are already incredibly strong market incentives to improve preclinical assessments. Where companies have already attempted this with only limited success, it would seem quixotic to think that regulatory fiat will accomplish more.

(One further point. The authors try to link the need for preclinical efficacy testing to the 2016 Bial tragedy. This seems incredibly tenuous: the authors speculate that perhaps trial participants would not have been harmed and killed if Bial had been required to produce more evidence of BIA102474’s clinical efficacy before embarking on its phase 1 trials. But that would have been entirely coincidental in this case: even if the drug had shown more evidence of therapeutic promise, the tragedy would still have happened, because it had nothing at all to do with the drug’s efficacy.

This is to some extent a minor nitpick, since the argument in favor of earlier efficacy testing does not depend on a link to Bial. However, I bring it up because a) the authors dedicate the first four paragraphs of their comment to the link, and b) there appears to be a minor trend of using the death and injuries of that trial to justify an array of otherwise-unrelated initiatives. This seems like a trend we should discourage.)

[Update 2/23: I posted this last night, not realizing that only a few hours earlier, John LaMattina had published on this same article. His take is similar to mine, in that he is suspicious of the idea that pharmaceutical companies would knowingly push ineffective drugs up their pipeline.]

Kimmelman, J., & Federico, C. (2017). Consider drug efficacy before first-in-human trials. Nature, 542(7639), 25-27. DOI: 10.1038/542025a

Tuesday, September 25, 2012

What We Can Anticipate from TransCelerate


TransCelerate: Pharma's great kumbaya moment?
Last week, 10 of the largest pharmaceutical companies caused quite a hullaballoo in the research world with their announcement that they were anteing up to form a new nonprofit entity “to identify and solve common drug development challenges with the end goals of improving the quality of clinical studies and bringing new medicines to patients faster”. The somewhat-awkwardly-named TransCelerate BioPharma immediately got an enthusiastic reception from industry watchers and participants, mainly due to the perception that it was well poised to attack some of the systemic causes of delays and cost overruns that plague clinical trials today.

I myself was caught up in the breathless excitement of the moment, immediately tweeting after reading the initial report:

 Over the past few days, though, I've had time to re-read and think more about the launch announcement, and dial down my enthusiasm considerably.  I still think it’s a worthwhile effort, but it’s probably not fair to expect anything that fundamentally changes much in the way of current trial execution.

Mostly, I’m surprised by the specific goals selected, which seem for the most part either tangential to the real issues in modern drug development or stepping into areas where an all-big-pharma committee isn’t the best tool for the job. I’m also very concerned that a consortium like this would launch without a clearly-articulated vision of how it fits in with, and adds to, the ongoing work of other key players – the press release is loaded with positive, but extremely vague, wording about how TransCelerate will work with, but be different from, groups such as the CTTI and CDISC. The new organization also appears to have no formal relationship with any CROs. Given the crucial and deeply embedded nature of CROs in today’s research, this is not a detail to be worked out later; establishing those relationships is a vital necessity if any worthwhile progress is to be made.

Regarding the group’s goals, here is what their PR had to say:
Five projects have been selected by the group for funding and development, including: development of a shared user interface for investigator site portals, mutual recognition of study site qualification and training, development of risk-based site monitoring approach and standards, development of clinical data standards, and establishment of a comparator drug supply model.
Let’s take these five projects one by one, to try to get a better picture of TransCelerate’s potential impact:

1. Development of a shared user interface for investigator site portals

Depending on how it’s implemented, the impact of this could range from “mildly useful” to “mildly irksome”. Sure, I hear investigators and coordinators complain frequently about all the different accounts they have to keep track of, so having a single front door to multiple sponsor sites would be a relief. However, I don’t think that the problem of too many usernames cracks anyone’s “top 20 things wrong with clinical trial execution” list – it’s a trivial detail. Aggravating, but trivial.

Worse, if you do it wrong and develop a clunky interface, you’ll get a lot more grumbling about making life harder at the research site. And I think there’s a high risk of that, given that this is in effect software development by committee – and the committee is a bunch of companies that do not actually specialize in software development.

In reality, the best answer to this is probably a lot simpler than we imagine: if we had a neutral, independent body (such as the ACRP) set up a single sign-on (SSO) registry for investigators and coordinators, then all sponsors, CROs, and IVRS/IWRS/CDMS vendors could simply set themselves up as service providers. (This works in the same way that many people today can log into disparate websites using their existing Google or Facebook accounts.) TransCelerate might do better sponsoring and promoting an external standard than trying to develop an entirely new platform of its own.
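To make that concrete, here is a minimal sketch of what the service-provider side could look like if a sponsor portal simply validated signed identity tokens issued by such a registry (the registry, key file, claim names, and audience value are all invented for illustration; no such ACRP service exists today):

```python
import jwt  # PyJWT

# Public signing key published by the hypothetical central investigator registry.
REGISTRY_PUBLIC_KEY = open("registry_public_key.pem").read()

def authenticate_site_user(id_token: str) -> dict:
    """Validate a signed identity token issued by the central SSO registry.

    Each sponsor or CRO portal acts as a 'service provider': it never stores
    investigator passwords, it only checks the registry's signature and reads
    the identity claims from the token.
    """
    claims = jwt.decode(
        id_token,
        REGISTRY_PUBLIC_KEY,
        algorithms=["RS256"],
        audience="sponsor-portal",  # hypothetical audience value for this portal
    )
    # Claim names would be defined by the registry; these are invented examples.
    return {"investigator_id": claims["sub"], "site_number": claims.get("site")}
```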

2. Mutual recognition of study site qualification and training

This is an excellent step forward. It’s also squarely in the realm of “ideas so obvious we could have done them 10 years ago”. Forcing site personnel to attend multiple iterations of the same training seminars simply to ensure that you’ve collected enough binders full of completion certificates is a sad CYA exercise with no practical benefit to anyone.

This will hopefully re-establish some goodwill with investigators. However, it’s important to note that it’s pretty much a symbolic act in terms of efficiency and cost savings. Nothing wrong with that – heaven knows we need some relationship wins with our increasingly-disillusioned sites – but let’s not go crazy thinking that this represents a real cause of wasted time or money. In fact, it’s pretty clear that one of the reasons we’ve lived with the current site-unfriendly system for so long is that it didn’t really cost us anything to do so.

(It’s also worth pointing out that more than a few biotechs have already figured out, usually with CRO help, how to ensure that site personnel are properly trained and qualified without subjecting them to additional rounds of training.)

3. Development of risk-based site monitoring approach and standards

The consensus belief and hope is that risk-based monitoring is the future of clinical trials. Ever since FDA’s draft guidance on the topic hit the street last year, it’s been front and center at every industry event. It will, unquestionably, lead to cost savings (although some of those savings will hopefully be reinvested into more extensive centralized monitoring). It will not necessarily shave a significant amount of time off the trials, since in many trials getting monitors out to sites to do source data verification (SDV) is not a rate-limiting factor, but it should still at the very least result in better data at lower cost, and that’s clearly a good thing.
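For what it’s worth, the core mechanics are simple enough to sketch: compute a few risk indicators per site from centrally available data, and concentrate on-site monitoring where the indicators look worst. The metrics and their weighting below are invented for illustration:

```python
import pandas as pd

# Hypothetical site-level metrics pulled centrally from the EDC / CTMS systems.
sites = pd.DataFrame({
    "site":          ["001", "002", "003", "004"],
    "query_rate":    [0.02, 0.09, 0.03, 0.15],  # data queries per data point entered
    "ae_rate":       [0.30, 0.05, 0.28, 0.32],  # adverse events reported per patient
    "days_to_entry": [3, 21, 5, 4],             # median days from visit to data entry
})

# Standardize each indicator against the study-wide distribution (z-scores).
metrics = ["query_rate", "days_to_entry"]
z = (sites[metrics] - sites[metrics].mean()) / sites[metrics].std()
# An unusually *low* AE rate is also a risk signal (possible under-reporting).
z["ae_underreporting"] = -(sites["ae_rate"] - sites["ae_rate"].mean()) / sites["ae_rate"].std()

# Sites with the highest combined score get the on-site monitoring visits.
sites["risk_score"] = z.sum(axis=1)
print(sites.sort_values("risk_score", ascending=False)[["site", "risk_score"]])
```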

So, the big question for me is: if we’re all moving in this direction already, do we need a new, pharma-only consortium to develop an “approach” to risk-based monitoring?

 First and foremost, this is a senseless conversation to have without the active involvement and leadership of CROs: in many cases, they understand the front-line issues in data verification and management far better than their pharma clients.  The fact that TransCelerate launched without a clear relationship with CROs and database management vendors is a troubling sign that it isn’t poised to make a true contribution to this area.

In a worst-case scenario, TransCelerate may actually delay adoption of risk-based monitoring among its member companies, as they may decide to hold off on implementation until standards have been drafted, circulated, vetted, re-drafted, and (presumably, eventually) approved by all 10 companies. And it will probably turn out that the approaches used will need to vary by patient risk and therapeutic area anyway, making a common, generic approach less than useful.

Finally, the notion that monitoring approaches require some kind of industry-wide “standardization” is extremely debatable. Normally, we work to standardize processes when we run into a lot of practical interoperability issues – that’s why we all have the same electric outlets in our homes, but not necessarily the same AC adaptors for our small devices. It would be nice if all cell phone manufacturers could agree on a common standard plug, but the total savings from that standard would be small compared to the costs of defining and implementing it. That’s the same with monitoring: each sponsor and each CRO has a slightly different flavor of monitoring, but the costs of adapting to any one approach for any given trial are really quite small.

Risk-based monitoring is great. If TransCelerate gets some of the credit for its eventual adoption, that’s fine, but I think the adoption is happening anyway, and TransCelerate may not be much help in reality.

4. Development of clinical data standards

This is by far the most baffling inclusion in this list. What happened to CDISC? What is CDISC not doing right that TransCelerate could possibly improve?

In an interview with Matthew Herper at Forbes, TransCelerate’s Interim CEO expands a bit on this point:
“Why do some [companies] record that male is a 0 and female is a 1, and others use 1 and 0, and others use M and F. Where is there any competitive advantage to doing that?” says Neil. “We do 38% of the clinical trials but 70% of the [spending on them]. If we were to come together and try to define some of these standards it would be an enabler for efficiencies for everyone.”
It’s really worth noting that the first part of that quote has nothing to do with the second part. If I could wave a magic wand and instantly standardize all companies’ gender reporting, I would not have reduced clinical trial expenditures by 0.01%. Even if we extend this to lots of other data elements, we’re still not talking about a significant source of costs or time.
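Just to underline how small that particular problem is, the entire "fix" for the gender-coding example fits in a few lines of lookup code (the sponsor names and their coding conventions below are invented):

```python
# Harmonizing heterogeneous sex codes into one convention (here, CDISC SDTM-style
# "M"/"F"/"U"). The sponsor names and their conventions are invented examples.
SEX_MAPS = {
    "sponsor_a": {"0": "M", "1": "F"},   # records male as 0, female as 1
    "sponsor_b": {"1": "M", "0": "F"},   # the opposite convention
    "sponsor_c": {"M": "M", "F": "F"},   # already character-coded
}

def harmonize_sex(sponsor: str, raw_value) -> str:
    """Map a raw sex code to the standard codes, or 'U' (unknown) if unrecognized."""
    return SEX_MAPS.get(sponsor, {}).get(str(raw_value).strip().upper(), "U")

print(harmonize_sex("sponsor_a", 0),    # -> M
      harmonize_sex("sponsor_b", 0),    # -> F
      harmonize_sex("sponsor_c", "f"))  # -> F
```

Harmonization at this level is cheap, which is exactly why it cannot be where the missing efficiency lives.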

Here’s another way of looking at it: those companies that are conducting the other 62% of trials but are only responsible for 30% of the spending – how did they do it, since they certainly haven’t gotten together to agree on a standard format for gender coding?

But the main problem here is that TransCelerate is encroaching on the work of a respected, popular, and useful initiative – CDISC – without clearly explaining how it will complement and assist that initiative. Neil’s quote almost seems to suggest that he plans on supplanting CDISC altogether.  I don’t think that was the intent, but there’s no rational reason to expect TransCelerate to offer substantive improvement in this area, either.

5. Establishment of a comparator drug supply model

This is an area that I don’t have much direct experience in, so it’s difficult to estimate what impact TransCelerate will have. I can say, anecdotally, that over the past 10 years, exactly zero clinical trials I’ve been involved with have had significant issues with comparator drug supply. But, admittedly, that’s quite possibly a very unrepresentative sample of pharmaceutical clinical trials.

I would certainly be curious to hear some opinions about this project. I assume it’s a somewhat larger problem in Europe than in the US, given both their multiple jurisdictions and their stronger aversion to placebo control. I really can’t imagine that inefficiencies in acquiring comparator drugs (most of which are generic, and so not directly produced by TransCelerate’s members) represent a major opportunity to save time and money.

Conclusion

It’s important to note that everything above is based on very limited information at this point. The transcelerate.com website is still “under construction”, so I am only reacting to the press release and accompanying quotes. However, it is difficult to imagine at this point that TransCelerate’s current agenda will have more than an extremely modest impact on current clinical trials.  At best, it appears that it may identify some areas to cut some costs, though this is mostly through the adoption of risk-based monitoring, which should happen whether TransCelerate exists or not.

I’ll remain a fan of TransCelerate, and will follow its progress with great interest in the hopes that it outperforms my expectations. However, it would do us all well to recognize that TransCelerate probably isn’t going to change things very dramatically -- the many systemic problems that add to the time and cost of clinical trials today will still be with us, and we need to continue to work hard to find better paths forward.

[Update 10-Oct-2012: Wayne Kubick, the CTO of CDISC, has posted a response with some additional details around cooperation between TransCelerate and CDISC around point 4 above.]

Mayday! Mayday! Photo credit: "Wheatley Maypole Dance 2008" from Flickr user net_efekt.