
Saturday, March 18, 2017

The Streetlight Effect and 505(b)(2) approvals

It is a surprisingly common peril among analysts: we don’t have the data to answer the question we’re interested in, so we answer a related question where we do have data. Unfortunately, the new answer turns out to shed no light on the original interesting question.

This is sometimes referred to as the Streetlight Effect – a phenomenon aptly illustrated by Mutt and Jeff over half a century ago:

[Mutt and Jeff comic strip]

This is the situation the Tufts Center for the Study of Drug Development seems to have gotten itself into with its latest "Impact Report". It’s worth walking through how an interesting question ends up as an uninteresting answer.

So, here’s an interesting question:
My company owns a drug that may be approvable through FDA’s 505(b)(2) pathway. What is the estimated time and cost difference between pursuing 505(b)(2) approval and conventional approval?
That’s "interesting", I suppose I should add, for a certain subset of folks working in drug development and commercialization. It’s only interesting to that peculiar niche, but for those people I suspect it’s extremely interesting – because it is a real situation a drug company may find itself in, and there are concrete consequences to the decision.

Unfortunately, this is also a really difficult question to answer – as phrased, you’d almost need a randomized trial. Let’s create a version that is less interesting but easier to answer:
What are the overall development time and cost differences between drugs seeking approval via 505(b)(2) and conventional pathways?
This is much easier to answer, as pharmaceutical companies could look back on the development times and costs of all their compounds and directly compare the two pathways. It is, however, a much less useful question. Many new drugs are simply not eligible for 505(b)(2) approval, and if 505(b)(2) drugs are substantially different in any way (riskier, more novel, etc.), they will skew the comparison in highly non-useful ways. In 2014, only 1 drug classified as a New Molecular Entity (NME) went through 505(b)(2) approval, versus 32 that went through conventional approval – and there are many other qualities that set 505(b)(2) drugs apart.

[Chart: Extreme qualitative differences of 505(b)(2) drugs. Source: Thomson Reuters analysis via RAPS]

So we’re likely to get a lot of confounding factors in our comparison, and it’s unclear how the answer would (or should) guide us if we were truly trying to decide which route to take for a particular new drug. It might, however, help us evaluate a large-scale shift toward prioritizing 505(b)(2)-eligible drugs.

Unfortunately, even this question is apparently too difficult to answer. Instead, the Tufts CSDD chose to ask and answer yet another variant:
What is the difference in FDA internal review time between 505(b)(2) and conventionally-approved drugs?
This question has the supreme virtue of being answerable. In fact, I believe that all of the data you’d need is contained within the approval letter that FDA publishes for each newly approved drug.

But at the same time, it isn’t a particularly interesting question anymore. The promise of the 505(b)(2) pathway is that it should reduce total development time and cost, but on both those dimensions, the report appears to fall flat.
  • Cost: This analysis says nothing about reduced costs – those savings would mostly come in the form of fewer clinical trials, and the study focuses entirely on the FDA review process.
  • Time: FDA review and approval is only a fraction of a drug’s journey from patent to market. In fact, it often takes up less than 10% of the time from initial IND to approval, so any difference in approval times can easily be overshadowed by differences in time spent in development – as the quick sketch below illustrates.
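
To put rough numbers on that second point, here’s a minimal sketch – every figure in it is my own illustrative assumption, not data from the Tufts report:

```python
# Illustrative, made-up numbers -- not figures from the Tufts report.
# Suppose a program spends ~8 years in clinical development from IND
# to submission, and FDA review takes ~12 months under one pathway
# versus ~15 months under another.

development_months = 8 * 12   # hypothetical time in clinical development
review_a = 12                 # hypothetical review time, pathway A (months)
review_b = 15                 # hypothetical review time, pathway B (months)

total_a = development_months + review_a
total_b = development_months + review_b

diff = review_b - review_a
print(f"Review-time difference: {diff} months")
print(f"As a share of the longer total timeline: {diff / total_b:.1%}")
```

Under those assumptions, a three-month gap in review amounts to less than 3% of the overall timeline – small enough that even a modest difference in development time would swamp it.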
But even more fundamentally, the problem here is that this study gives the appearance of providing an answer to our original question, but in fact is entirely uninformative in this regard. The accompanying press release states:
The 505(b)(2) approval pathway for new drug applications in the United States, aimed at avoiding unnecessary duplication of studies performed on a previously approved drug, has not led to shorter approval times.
This is more than a bit misleading. The 505(b)(2) statute does not in any way address approval timelines – that’s not its intent. So showing that it hasn’t led to shorter approval times is less an insight than a natural consequence of the law as written.

Most importantly, showing that 505(b)(2) drugs had a longer average approval time than conventionally-approved drugs should in no way be interpreted as evidence that those drugs were slowed down by the 505(b)(2) process itself. Because 505(b)(2) drugs are qualitatively different from other new molecules, this study can’t claim that they would have been developed faster had their owners initially chosen the conventional approval route. In fact, such a decision might have resulted in both more time in trials and a longer approval time.

This study simply is not designed to provide an answer to the truly interesting underlying question.

[Disclosure: the above review is based entirely on a CSDD press release and summary page. The actual report costs $125, which is well in excess of this blog’s expense limit. It is entirely possible that the report itself contains more informative insights, and I’ll happily update this post if they come to my attention.]

Friday, January 25, 2013

Less than Jaw-Dropping: Half of Sites Are Below Average


Last week, the Tufts Center for the Study of Drug Development unleashed the latest in their occasional series of dire pronouncements about the state of pharmaceutical clinical trials.

[Photo caption: Shocking performance stat: 57% of these racers won't medal!]

One particular factoid from the CSDD "study" caught my attention:
  • 11% of sites in a given trial typically fail to enroll a single patient, 37% under-enroll, 39% meet their enrollment targets, and 13% exceed their targets.
Many industry reporters uncritically recycled those numbers. Pharmalot noted:
Now, the bad news – 48 percent of the trial sites miss enrollment targets and study timelines often slip, causing extensions that are nearly double the original duration in order to meeting enrollment levels for all therapeutic areas.
(Fierce Biotech and Pharma Times also picked up the same themes and quotes from the Tufts PR.)

There are two serious problems with the data as reported.

One: no one – neither CSDD nor the journalists who loyally recycle its press releases – seems to remember this CSDD release from less than two years ago, which made the even direr claim that
According to Tufts CSDD, two-thirds of investigative sites fail to meet the patient enrollment requirements for a given clinical trial.
If you believe both Tufts numbers, then the share of under-performing sites has dropped by almost 20 percentage points in just 21 months – from two-thirds (67%) in April 2011 to 48% (the 11% that fail to enroll plus the 37% that under-enroll) in January 2013. For an industry as hidebound and slow-moving as drug development, this ought to be hailed as a startling and amazing improvement!

Maybe at the end of the day, 48% isn't a great number, but surely this would appear to indicate we're on the right track, right? Why would no one mention this?

Which leads me to problem two: I suspect that no one is connecting the two data points because no one is sure what it is we’re even supposed to be measuring here.

In a clinical trial, a site's "enrollment target" is not an objectively-defined number. Different sponsors will have different ways of setting targets – in fact, the method for setting targets may vary from team to team within a single pharma company.

The simplest way to set a target is to divide the total number of expected patients by the number of sites. If you have 50 sites and want to enroll 500 patients, then voilà ... everyone’s got a "target" of 10 patients! But as soon as some sites start exceeding their target, others will, by definition, fall short. That’s not necessarily a sign of underperformance – in fact, if a trial finishes enrollment dramatically ahead of schedule, there will almost certainly be a large number of "under target" sites.
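
To see why, here’s a toy simulation – the enrollment model and every number in it are my own assumptions for illustration, not anything drawn from the CSDD data:

```python
import random

# Toy model: 50 sites share a 500-patient goal, every site gets the
# same "target" of 10, but sites enroll at different underlying rates.
random.seed(42)
N_SITES, TARGET, GOAL = 50, 10, 500

def run_trial():
    enrolled = [0] * N_SITES
    # Each site's true enrollment rate varies (some fast, some slow).
    rates = [random.uniform(0.5, 2.0) for _ in range(N_SITES)]
    while sum(enrolled) < GOAL:
        # One "month": each site enrolls a patient with probability
        # proportional to its rate.
        for i, r in enumerate(rates):
            if random.random() < r / 2:
                enrolled[i] += 1
    return enrolled

under_counts = [sum(1 for e in run_trial() if e < TARGET)
                for _ in range(200)]
share_under = sum(under_counts) / len(under_counts) / N_SITES
print(f"Average share of sites 'under target': {share_under:.0%}")
```

Every simulated trial hits its overall enrollment goal, yet roughly half the sites come out "under target" – not because anyone underperformed, but because identical targets were imposed on non-identical sites.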

Some sponsors and CROs get tricky about setting individual targets for each site. How do they set those? The short answer is: pretty arbitrarily. Targets are only partially based upon data from previous, similar (but not identical) trials, but are also shifted up or down by the (real or perceived) commercial urgency of the trial. They can also be influenced by a variety of subjective beliefs about the study protocol and an individual study manager's guesses about how the sites will perform.

If a trial ends with 0% of sites meeting their targets, the next trial in that indication will have a lower, more achievable target. The same will happen in the other direction: too-easy targets will be ratcheted up. The benchmark will jump around quite a bit over time.

As a result, "Percentage of trial sites meeting enrollment target" is, to put it bluntly, completely worthless as an aggregate performance metric. Not only will it change greatly based upon which set of sponsors and studies you happen to look at, but even data from the same sponsors will wobble heavily over time.

Why does this matter?

There is a consensus that clinical development is much too slow – we need to be striving to shorten clinical trial timelines and get drugs to market sooner. If we are going to make any headway in this effort, we need to accurately assess the forces that help or hinder the pace of development, and we absolutely must rigorously benchmark and test our work. The adoption of, and attention paid to, unhelpful metrics will only confuse and delay our effort to improve the quality and speed of drug development.

[Photo of "underperforming" swimmers courtesy Boston Public Library on Flickr.]