Showing posts with label Friday Flailing. Show all posts

Friday, February 8, 2013

The FDA’s Magic Meeting


Can you shed three years of pipeline flab with this one simple trick?

"There’s no trick to it ... it’s just a simple trick!" -Brad Goodman

Getting a drug to market is hard. It is hard in every way a thing can be hard: it takes a long time, it's expensive, it involves a process that is opaque and frustrating, and failure is a much more likely outcome than success. Boston pioneers pointing their wagons west in 1820 had far better prospects for seeing the Pacific Ocean than a new drug, freshly launched into human trials, will ever have for earning a single dollar in sales.

Exact numbers are hard to come by, but the semi-official industry estimates are: about 6-8 years, a couple billion dollars, and more than 80% chance of ultimate failure.

[Image caption: Is there a secret handshake? Should we bring doughnuts? (We should probably bring doughnuts.)]
Finding ways to reduce any of those numbers is one of the premier obsessions of the pharma R&D world. We explore new technologies and standards, consider moving our trials to sites in other countries, consider skipping the sites altogether and going straight to the patient, and hire patient recruitment firms* to speed up trial enrollment. We even invent words to describe our latest and awesomest attempts at making development faster, better, and cheaper.

But perhaps all we needed was another meeting.

A recent blog post from Anne Pariser, an Associate Director at FDA's Center for Drug Evaluation and Research, suggests that attending a pre-IND meeting can shave a whopping 3 years off your clinical development timeline:
For instance, for all new drugs approved between 2010 and 2012, the average clinical development time was more than 3 years faster when a pre-IND meeting was held than it was for drugs approved without a pre-IND meeting. 
For orphan drugs used to treat rare diseases, the development time for products with a pre-IND meeting was 6 years shorter on average or about half of what it was for those orphan drugs that did not have such a meeting.
That's it? A meeting? Cancel the massive CTMS integration – all we need are a couple tickets to DC?

Pariser's post appears to be an extension of an FDA presentation made at a joint NORD/DIA meeting last October. As far as I can tell, that presentation is not public, but it was covered by the Pink Sheet's Derrick Gingery on November 1. That presentation covered just 2010 and 2011, and actually showed a 5-year benefit for drugs with pre-IND meetings (Pariser references 2010-2012).

Consider the fact that one VC-funded vendor** was recently spotted aggressively hyping the fact that its software reduced one trial’s timeline by 6 weeks. And here the FDA is telling us that a single sit-down saves an additional 150 weeks.

In addition, a second meeting – the End of Phase II meeting – saves another year, according to the NORD presentation.  Pariser does not include EOP2 data in her blog post.

So, time to charter a bus, load up the clinical and regulatory teams, and hit the road to Silver Spring?

Well, maybe. It probably couldn't hurt, and I'm sure it would be a great bonding experience, but there are some reasons not to take the numbers at face value.
  • We’re dealing with really small numbers here. The NORD presentation covers 54 drugs, and Pariser's appears to add 39 to that total. That the time-savings estimate shifted so dramatically – from 5 years to 3 – suggests a lot of variance in the data. We also have no idea how many pre-IND meetings there were, so we don't know the relative sizes of the comparison groups.
  • It's a survivor-only data set. It doesn't include drugs that were terminated or rejected. FDA would never approve a clinical trial that only looked at patients who responded, then retroactively determined differences between them.  That approach is clearly susceptible to survivorship bias.
  • It reports means. This is especially a problem given the small numbers being studied. It's entirely plausible that just one or two drugs that took a really long time are badly skewing the results. Medians with quartile ranges would have been a lot more enlightening here.
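To make the mean-versus-median point concrete, here's a toy sketch in Python. The development times below are invented for illustration; they are not taken from the FDA data:

```python
from statistics import mean, median

# Hypothetical development times (in years) for a small cohort of
# approved drugs, with one long-delayed outlier program.
times = [4.5, 5.0, 5.5, 6.0, 6.5, 7.0, 18.0]

print(mean(times))    # 7.5 -- dragged upward by the single 18-year program
print(median(times))  # 6.0 -- barely notices the outlier
```

With only a handful of drugs in each comparison group, a single outlier like this can move the reported average by more than a year, which is exactly why medians with quartile ranges would be more informative.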
All of the above make me question how big an impact this one meeting can really have. I'm sure it's a good thing, but it can't be quite this amazing, can it?

However, it would be great to see more of these metrics, produced in more detail, by the FDA. The agency does a pretty good job of reporting on its own performance – the PDUFA performance reports are a worthwhile read – but it doesn't publish much in the way of sponsor metrics. Given the constant clamor for new pathways and concessions from the FDA, it would be truly enlightening to see how well the industry is actually taking advantage of the tools it currently has.

As Gingery wrote in his article, "Data showing that the existing FDA processes, if used, can reduce development time is interesting given the strong effort by industry to create new methods to streamline the approval process." Gingery also notes that two new official sponsor-FDA meeting points have been added in the recently-passed FDASIA, so it would seem extremely worthwhile to have some ongoing, rigorous measurement of the usage of, and benefit from, these meetings.

Of course, even if these meetings are strongly associated with faster pipeline times, don’t be so sure that simply adding the meeting will cut your development so dramatically. Goodhart's Law tells us that performance metrics, when turned into targets, have a tendency to fail: in this case, whatever it was about the drug, or the drug company leadership, that prevented the meeting from happening in the first place may still prove to be the real factor in the delay.

I suppose the ultimate lesson here might be: If your drug doesn't have a pre-IND meeting because your executive management has the hubris to believe it doesn't need FDA input, then you probably need new executives more than you need a meeting.

[Image: Meeting pictured may not contain actual magic. Photo from FDA's Flickr stream.]

*  Disclosure: the author works for one of those.
** Under the theory that there is no such thing as bad publicity, no link will be provided.



Friday, January 25, 2013

Less than Jaw-Dropping: Half of Sites Are Below Average


Last week, the Tufts Center for the Study of Drug Development unleashed the latest in their occasional series of dire pronouncements about the state of pharmaceutical clinical trials.

[Image caption: Shocking performance stat: 57% of these racers won't medal!]

One particular factoid from the CSDD "study" caught my attention:
* 11% of sites in a given trial typically fail to enroll a single patient, 37% under-enroll, 39% meet their enrollment targets, and 13% exceed their targets.
Many industry reporters uncritically recycled those numbers. Pharmalot noted:
Now, the bad news – 48 percent of the trial sites miss enrollment targets and study timelines often slip, causing extensions that are nearly double the original duration in order to meeting enrollment levels for all therapeutic areas.
(Fierce Biotech and Pharma Times also picked up the same themes and quotes from the Tufts PR.)

There are two serious problems with the data as reported.

One: no one – neither CSDD nor the journalists who loyally recycle its press releases – seems to remember this CSDD release from less than two years ago. It made the even-direr claim that
According to Tufts CSDD, two-thirds of investigative sites fail to meet the patient enrollment requirements for a given clinical trial.
If you believe both Tufts numbers, then it would appear that the number of under-performing sites has dropped by nearly 20 percentage points in just 20 months – from 67% in April 2011 to 48% in January 2013. For an industry as hidebound and slow-moving as drug development, this ought to be hailed as a startling and amazing improvement!

Maybe at the end of the day, 48% isn't a great number, but surely this would appear to indicate we're on the right track, right? Why would no one mention this?

Which leads me to problem two: I suspect that no one is connecting the two data points because no one is sure what it is we're even supposed to be measuring here.

In a clinical trial, a site's "enrollment target" is not an objectively-defined number. Different sponsors will have different ways of setting targets – in fact, the method for setting targets may vary from team to team within a single pharma company.

The simplest way to set a target is to divide the total number of expected patients by the number of sites. If you have 50 sites and want to enroll 500 patients, then voilà ... everyone's got a "target" of 10 patients! But as soon as some sites start exceeding their target, others will, by definition, fall short. That’s not necessarily a sign of underperformance – in fact, if a trial finishes enrollment dramatically ahead of schedule, there will almost certainly be a large number of "under target" sites.
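That arithmetic can be sketched in a few lines of Python. The per-site enrollment numbers here are invented for illustration:

```python
# Toy illustration: 50 sites share a 500-patient goal, so each gets a
# "target" of 10. Even when the trial over-enrolls overall, fast sites
# mechanically push slower ones "below target."
total_goal, n_sites = 500, 50
target = total_goal // n_sites  # 10 patients per site

# Simulated enrollment: 20 fast sites enroll 15 each, 30 enroll 7 each.
enrollment = [15] * 20 + [7] * 30

print(sum(enrollment))  # 510 -- the trial beat its overall goal
under = sum(1 for e in enrollment if e < target)
print(f"{under / n_sites:.0%} of sites 'missed' their target")  # 60%
```

A trial that exceeded its total enrollment goal still shows 60% of sites "missing target" under this allocation scheme, which is why the headline percentage says so little about actual site performance.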

Some sponsors and CROs get tricky about setting individual targets for each site. How do they set those? The short answer is: pretty arbitrarily. Targets are only partially based upon data from previous, similar (but not identical) trials, but are also shifted up or down by the (real or perceived) commercial urgency of the trial. They can also be influenced by a variety of subjective beliefs about the study protocol and an individual study manager's guesses about how the sites will perform.

If a trial ends with 0% of sites meeting their targets, the next trial in that indication will have a lower, more achievable target. The same will happen in the other direction: too-easy targets will be ratcheted up. The benchmark will jump around quite a bit over time.

As a result, "Percentage of trial sites meeting enrollment target" is, to put it bluntly, completely worthless as an aggregate performance metric. Not only will it change greatly based upon which set of sponsors and studies you happen to look at, but even data from the same sponsors will wobble heavily over time.

Why does this matter?

There is a consensus that clinical development is much too slow -- we need to be striving to shorten clinical trial timelines and get drugs to market sooner. If we are going to make any headway in this effort, we need to accurately assess the forces that help or hinder the pace of development, and we absolutely must rigorously benchmark and test our work. The adoption of, and attention paid to, unhelpful metrics will only confuse and delay our effort to improve the quality and speed of drug development.

[Photo of "underperforming" swimmers courtesy Boston Public Library on Flickr.]

Friday, July 13, 2012

Friday Flailing: Medical Gasses and the Law of the Excluded Middle

[Image caption: Aristotle never actually said "principium tertii exclusi", mostly because he didn't speak Latin.]

Buried under the mass of attention and conversations surrounding the ACA last week, the FDA Safety and Innovation Act (FDASIA) contained a number of provisions that will have lasting effects in the pharmaceutical industry. What little notice it did get tended to focus on PDUFA reauthorization and the establishment of fees for new generic drug applications (GDUFA).

(Tangent: other parts of the act are well worth looking into: NORD is happy about new incentives for rare disease research, the SWHR is happy about expanded reporting on sex and race in clinical trials, and antibiotic drug makers are happy about extra periods of market exclusivity.  A very good summary is available on the FDA Law Blog.)

So no one’s paid any attention to the Medical Gasses Safety Act, which formally defines medical gasses and even gives them their own Advisory Committee and user fees (I guess those will be MGUFAs?).

The Act’s opening definition is a bit of an eyebrow-raiser:
(2) The term ‘medical gas’ means a drug that is--
‘(A) manufactured or stored in a liquefied, non-liquefied, or cryogenic state; and
‘(B) is administered as a gas.
I’m clearly missing something here, because as far as I can tell, everything is either liquefied or non-liquefied.   This doesn’t seem to lend a lot of clarity to the definition.  And then, what to make of the third option?  How can there be a third option?  It’s been years since my college logic class, but I still remember the Law of the Excluded Middle – everything is either P or not-P. 
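For what it's worth, the Law of the Excluded Middle survives a brute-force check. A toy sketch in Python (the function name is my own invention, not anything from the Act):

```python
# The Law of the Excluded Middle: for any proposition P, (P or not P)
# always holds. Applied to the Act's definition: every substance is
# either liquefied or non-liquefied, so a third "cryogenic" branch
# adds no new information about liquefaction state.
def excluded_middle(p: bool) -> bool:
    return p or not p

# Exhaustive check over both truth values -- the disjunction never fails.
assert all(excluded_middle(p) for p in (True, False))
print("P or not-P holds for every P")
```

Which is to say: whatever "cryogenic" is meant to add, it can't be a third answer to the liquefied/non-liquefied question.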

I was going to send an inquiry through to Congressman Leonard Lance (R-NJ), the bill’s original author, but his website regrets to inform me that he is “unable to reply to any email from constituents outside of the district.”

So I will remain trapped in Logical Limbo.  Enjoy your weekend.