Friday, March 25, 2011

Mind the Gap

Modern clinical trials in the pharmaceutical industry are monuments of rigorous analysis. Trial designs are critically examined and debated extensively during the planning phase – we strain to locate possible sources of bias in advance, and adjust to minimize or compensate for them. We collect enormous quantities of efficacy and safety data using standardized, pre-validated techniques. Finally, a team of statisticians parses the data (adhering, of course, to an already-set-in-stone Statistical Analysis Plan to avoid the perils of post-hoc analysis) … then we turn both the data and the analysis over in their entirety to regulatory authorities, who in turn do all they can to verify that the results are accurate, correctly interpreted, and clinically relevant.

It is ironic, then, that our management of these trials is so casual and driven by personal opinion. We all like to talk a good game about metrics, but after the conversation we lapse back into our old, distinctly un-rigorous, habits. Examples of this are everywhere once you start to look for them; one that just caught my eye is from a recent CenterWatch article:

Survey: Large sites winning more trials than small

Are large sites—hospitals, academic medical centers—getting all the trials, while smaller sites continue to fight for the work that’s left over?

That’s what results of a recent survey by Clinical Research Site Training (CRST) seem to indicate. The nearly 20-year-old site-training firm surveyed 500 U.S. sites in December 2010, finding that 66% of large sites say they have won more trials in the last three years. Smaller sites weren’t asked specifically, but anecdotally many small and medium-sized sites reported fewer trials in recent years.
Let me repeat that last part, with emphasis: “Smaller sites weren’t asked specifically, but anecdotally…”

At this point, the conversation should stop. Nothing to see here, folks -- we don’t actually have evidence of anything, only a survey data point juxtaposed with someone’s personal impression -- move along.

So what are we to do then? I think there are two clear areas where we collectively need to improve:

1. Know what we don’t know. The sad and simple fact is that there are a lot of things we just don’t have good data on. We need to resist the urge to grasp at straws to fill those knowledge gaps – it leaves the false impression that we’ve learned something.

2. Learn from our own backyard. As I mentioned earlier, good analytic practices are pervasive on the executional side of trials. We need to think more rigorously about our data needs, earlier in the process.

The good news is that we have everything we need to make this a reality – we just need to have a bit of courage to admit the gap (or, occasionally, chasm) of our ignorance on a number of critical issues and develop a thoughtful plan forward.

Thursday, March 24, 2011

People Who Disagree with Me Tend to End Up Being Investigated by the Federal Government

I don’t think this qualifies yet as a trend, but two disturbing announcements came right back to back last week:

First: As you’ve probably heard, KV Pharmaceutical caused quite a stir when they announced the pricing for their old-yet-new drug Makena. In response, Senators Sherrod Brown (D-OH) and Amy Klobuchar (D-MN) sent a letter to the FTC demanding they “initiate a formal investigation into any potential anticompetitive conduct” by KV. In explaining his call for the investigation, Brown notes:

Since KV Pharmaceuticals announced the intended price hike, I called on KV Pharmaceuticals to immediately reconsider their decision, but to this date the company continues to defend this astronomical price increase.

Second: One week after an FDA Advisory Committee voted 13 to 4 to recommend approving Novartis’s COPD drug indacaterol, Public Citizen wrote a letter to the US Office of Human Research Protections requesting that Novartis be investigated for conducting the very trials that supplied the evidence for that vote. The reason? Despite the fact that the FDA requested the trials be placebo controlled, Public Citizen feels that Novartis should not have allowed patients to be on placebo. The letter shows no apparent consideration for the idea that a large number of thoughtful, well-informed people considered the design of these trials and concluded that they were ethical (not only the FDA, but also the independent Institutional Review Boards and Ethics Committees that oversaw each trial). Instead, Public Citizen blithely “look[s] forward to OHRP’s thorough and careful investigation of our allegations.”

The upshot of these two announcements seems to be: “we don’t like what you’re doing, and since we can’t get you to stop, we’ll try to initiate a federal investigation.” Even if neither of these efforts succeeds, they will still cause the companies involved to spend a significant amount of time and money defending themselves. In fact, maybe that’s the point: neither effort seems like a serious claim that actual laws were broken, but rather an attempt at intimidation.

Tuesday, March 22, 2011

Go Green, Recycle your Patients

Euthymics, a small Massachusetts-based biotech, recently announced the start of the TRIADE trial, which they describe as “Phase 2b/3a”. I am guessing that this somewhat-rare designation means they’re hoping the trial will count as pivotal, but have not yet received formal agreement from the FDA on that point. Part of this may be due to the trial’s design – per the press release, they’re using a Sequential Parallel Comparison Design (SPCD).

This is an intriguing trial design because it retains one of the benefits of traditional crossover designs – increased statistical power from “reusing” patients across multiple treatments – while avoiding many of the problems, most notably concerns about the persistence of treatment effect. That is because only a select but key subset of patients – those who were in the control arm but showed no response – is re-randomized to both arms. This group clearly has no treatment effect to persist, which makes it an excellent population for further testing. (It’s important to note that all patients continue on study treatment in the second stage in order to preserve blinding.)
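To make the mechanics concrete, here is a minimal simulation of the SPCD’s two stages. The response rates and sample size are invented for illustration (they are not from the TRIADE trial), and the pooled estimate uses a simple equal-weight average of the two stage effects rather than the weighted test statistic used in published SPCD analyses.

```python
import random

random.seed(0)

def response(on_drug):
    """Did this patient respond? Placebo response is deliberately high
    here -- that inflated placebo response is exactly the problem SPCD
    tries to mitigate. (Illustrative probabilities, not real data.)"""
    p = 0.45 if on_drug else 0.30
    return random.random() < p

N = 400

# Stage 1: randomize everyone 1:1, drug vs. placebo
stage1 = [(i, i % 2 == 0) for i in range(N)]  # (patient_id, on_drug)
results1 = {i: response(on_drug) for i, on_drug in stage1}

# Stage 2: ONLY placebo non-responders are re-randomized; all other
# patients stay on their stage-1 assignment, preserving the blind.
placebo_nonresponders = [i for i, on_drug in stage1
                         if not on_drug and not results1[i]]
stage2 = [(i, j % 2 == 0) for j, i in enumerate(placebo_nonresponders)]
results2 = {i: response(on_drug) for i, on_drug in stage2}

def effect(results, assignment):
    """Difference in response rates: drug arm minus placebo arm."""
    drug = [results[i] for i, on_drug in assignment if on_drug]
    plac = [results[i] for i, on_drug in assignment if not on_drug]
    return sum(drug) / len(drug) - sum(plac) / len(plac)

# Pool the two stages (equal weights for simplicity)
pooled = (effect(results1, stage1) + effect(results2, stage2)) / 2
print(f"stage 1 effect:  {effect(results1, stage1):+.2f}")
print(f"stage 2 effect:  {effect(results2, stage2):+.2f}")
print(f"pooled estimate: {pooled:+.2f}")
```

The key point the sketch captures is that the stage-2 population consists entirely of demonstrated placebo non-responders, so the stage-2 comparison is largely free of the placebo-response noise that dilutes stage 1.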

In essence, we have a placebo run-in phase embedded within a traditional trial. It seems worth asking how this trial design compares against a simpler trial that includes such a run-in – I do not see any information on the website to help answer that.

And that points to the major drawback of the SPCD: it’s patented, and therefore not freely available to study and use. As far as I can tell, the design has not yet been through an FDA Special Protocol Assessment, which would certainly be a critical rite of passage toward greater acceptance. While I can appreciate the inventors’ desire to be rewarded for their creative breakthrough in devising the SPCD (and wish them nothing but good fortune for it), keeping the design proprietary may well slow down efforts to validate and promote its use.