Friday, March 25, 2011

Mind the Gap

Modern clinical trials in the pharmaceutical industry are monuments of rigorous analysis. Trial designs are critically examined and debated extensively during the planning phase – we strain to locate possible sources of bias in advance, and adjust to minimize or compensate for them. We collect enormous quantities of efficacy and safety data using standardized, pre-validated techniques. Finally, a team of statisticians parses the data (adhering, of course, to an already-set-in-stone Statistical Analysis Plan to avoid the perils of post-hoc analysis) … then we turn both the data and the analysis over in their entirety to regulatory authorities, who in turn do all they can to verify that the results are accurate, correctly interpreted, and clinically relevant.

It is ironic, then, that our management of these trials is so casual and driven by personal opinions. We all like to talk a good game about metrics, but after the conversation we lapse back into our old, distinctly un-rigorous, habits. Examples of this are everywhere once you start to look for them: one that just caught my eye is from a recent CenterWatch article:

Survey: Large sites winning more trials than small

Are large sites—hospitals, academic medical centers—getting all the trials, while smaller sites continue to fight for the work that’s left over?

That’s what results of a recent survey by Clinical Research Site Training (CRST) seem to indicate. The nearly 20-year-old site-training firm surveyed 500 U.S. sites in December 2010, finding that 66% of large sites say they have won more trials in the last three years. Smaller sites weren’t asked specifically, but anecdotally many small and medium-sized sites reported fewer trials in recent years.
Let me repeat that last part, with emphasis: “Smaller sites weren’t asked specifically, but anecdotally…”

At this point, the conversation should stop. Nothing to see here, folks -- we don’t actually have evidence of anything, only a survey data point juxtaposed with someone’s personal impression -- move along.

So what are we to do then? I think there are two clear areas where we collectively need to improve:

1. Know what we don’t know. The sad and simple fact is that there are a lot of things we just don’t have good data on. We need to resist the urge to grasp at straws to fill those knowledge gaps – it leaves the false impression that we’ve learned something.

2. Learn from our own backyard. As I mentioned earlier, good analytic practices are pervasive on the executional side of trials. We need to think more rigorously about our data needs, earlier in the process.

The good news is that we have everything we need to make this a reality – we just need to have a bit of courage to admit the gap (or, occasionally, chasm) of our ignorance on a number of critical issues and develop a thoughtful plan forward.

Thursday, March 24, 2011

People Who Disagree with Me Tend to End Up Being Investigated by the Federal Government

I don’t think this qualifies yet as a trend, but two disturbing announcements came right back to back last week:

First: As you’ve probably heard, KV Pharmaceutical caused quite a stir when they announced the pricing for their old-yet-new drug Makena. In response, Senators Sherrod Brown (D-OH) and Amy Klobuchar (D-MN) sent a letter to the FTC demanding they “initiate a formal investigation into any potential anticompetitive conduct” by KV. In explaining his call for the investigation, Brown notes:

Since KV Pharmaceuticals announced the intended price hike, I called on KV Pharmaceuticals to immediately reconsider their decision, but to this date the company continues to defend this astronomical price increase.

Second: One week after an FDA Advisory Committee voted 13 to 4 to recommend approving Novartis’s COPD drug indacaterol, Public Citizen wrote a letter to the US Office of Human Research Protections requesting that Novartis be investigated for conducting the very trials that supplied the evidence for that vote. The reason? Despite the fact that the FDA requested the trials be placebo controlled, Public Citizen feels that Novartis should not have allowed patients to be on placebo. The letter shows no apparent consideration for the idea that a large number of thoughtful, well-informed people considered the design of these trials and came to the conclusion that they were ethical (not only the FDA, but the independent Institutional Review Boards and Ethics Committees that oversaw each trial). Instead, Public Citizen blithely “look[s] forward to OHRP’s thorough and careful investigation of our allegations.”

The upshot of these two announcements seems to be: “we don’t like what you’re doing, and since we can’t get you to stop, we’ll try to initiate a federal investigation.” Even if neither of these efforts succeeds, they will still cause the companies involved to spend a significant amount of time and money defending themselves. In fact, maybe that’s the point: neither effort seems like a serious claim that actual laws were broken, but rather just an attempt at intimidation.

Tuesday, March 22, 2011

Go Green, Recycle your Patients

Euthymics, a small Massachusetts-based biotech, recently announced the start of the TRIADE trial, which they describe as “Phase 2b/3a”. I am guessing that that somewhat-rare designation means they’re hoping that this will count as a pivotal trial, but have not yet had formal agreement from FDA on that topic. Part of this may be due to the trial’s design – per the press release, they’re using a Sequential Parallel Comparison Design (SPCD).

This is an intriguing trial design because it takes one of the benefits of traditional crossover designs – increasing statistical power by “reusing” patients in multiple treatments – while avoiding many of the problems, most notably any concerns about the persistence of treatment effect. This is because only a select but key subset of patients – those who were in the control arm but showed no response – are re-randomized to both arms. This group clearly has no treatment effect to persist, so they make an excellent population to further test with. (It’s important to note that all patients are continued on treatment in order to preserve blinding.)
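The re-randomization scheme described above can be sketched in a toy simulation. This is a minimal illustration of the SPCD logic only, not an implementation of any actual analysis plan: the response rates, sample size, and stage weighting below are all hypothetical numbers chosen for the example.

```python
import random

def simulate_spcd(n=400, p_drug=0.40, p_placebo=0.25, seed=42):
    """Toy simulation of a Sequential Parallel Comparison Design.

    Stage 1: randomize all patients 1:1 to drug or placebo.
    Stage 2: re-randomize only the placebo NON-responders 1:1, since
    they have no treatment effect to carry over. (In the real design,
    all patients continue on treatment to preserve blinding; only the
    placebo non-responders contribute to the stage-2 comparison.)
    Response probabilities here are hypothetical, not from any trial.
    """
    rng = random.Random(seed)

    def respond(p):
        return rng.random() < p

    # --- Stage 1: full population ---
    stage1_drug = [respond(p_drug) for _ in range(n // 2)]
    stage1_placebo = [respond(p_placebo) for _ in range(n // 2)]
    effect1 = (sum(stage1_drug) / len(stage1_drug)
               - sum(stage1_placebo) / len(stage1_placebo))

    # --- Stage 2: placebo non-responders only, re-randomized 1:1 ---
    nonresponders = [r for r in stage1_placebo if not r]
    half = len(nonresponders) // 2
    stage2_drug = [respond(p_drug) for _ in range(half)]
    stage2_placebo = [respond(p_placebo) for _ in range(half)]
    effect2 = (sum(stage2_drug) / max(half, 1)
               - sum(stage2_placebo) / max(half, 1))

    # SPCD analyses combine the two stage estimates with a
    # pre-specified weight; the 0.6/0.4 split here is arbitrary.
    w = 0.6
    return w * effect1 + (1 - w) * effect2

print(simulate_spcd())
```

The "power" benefit comes from the second stage squeezing an extra drug-versus-placebo comparison out of patients who would otherwise simply be recorded as placebo failures.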

In essence, we have a placebo run-in phase embedded within a traditional trial. It seems worth asking how this trial design compares against a simpler trial that includes such a run-in – I do not see any information on the website to help answer that.

And that points to the major drawback of the SPCD: it’s patented, and therefore not freely available to study and use. As far as I can tell, the design has not been through an FDA Special Protocol Assessment yet, which would certainly be a critical rite of passage towards greater acceptance. While I can appreciate the inventors’ desire to be rewarded for their creative breakthrough in devising the SPCD (and wish them nothing but good fortune for it), it appears that keeping the design proprietary may slow down efforts to validate and promote its use.

Monday, March 21, 2011

From Russia with (3 to 20 times more) Love

“Russia’s Clinical Trials are a Thriving Business”, trumpeted the news release that came to my inbox the other day. Inside was a rather startling – and ever-so-slightly odd – claim:
NPR Marketplace Health Desk Reporter Gregory Warner uncovers the truths about clinical trials in Russia; namely, the ability for biopharmaceutical companies to enroll patients 3 to 20 times faster than in the more established regions of North America and Western Europe.
Of course, as you might expect, the NPR reporter does not “uncover” that – rather, the 3 to 20 times faster “truth” is simply a verbatim statement from the CEO of ClinStar, a CRO specializing in running trials in Russia and Eastern Europe. There is no explanation of the 3-to-20 number, or why there is such a wide confidence interval (if that’s what that is).

The full NPR story goes on to hint that the business of Russian clinical trials may be a bit on the ethically cloudy side by associating it with past practices of lavishing gifts and attention on leading physicians (no direct tie is made – the reporter however not so subtly notes the fact that one person who used to work in Russia as a drug rep now works in clinical trials). I think the implication here is that Russia gets results by any means necessary, and the pharma industry is excitedly queuing up to get its trials done faster.

However, this speed factor is coupled with the extremely modest claim that clinical trial business in Russia is “growing at 15% a year.” While this is certainly not a bad rate of growth, it’s hardly explosive. It’s in fact comparable to the revenue growth of the overall CRO market for the few years preceding the current downturn, estimated at 12.2%, and dwarfed by the estimated 34% annual growth of the industry in India.

From my perspective, the industry seems very hesitant to put too many eggs in Eastern Europe’s basket just yet. We need faster trials, certainly, but we need reliable and clean data even more. Recent troubling research experience with Russia -- most notably the dimebon fiasco, where overwhelmingly positive data from Russian phase 2 trials turned out to be completely irreproducible in larger Western trials -- has left the industry wary about the region. And wink-and-nod publicity about incredible speed gains will probably ultimately hurt wider acceptance of Eastern European trials more than it will help.

Sunday, March 20, 2011

1st-Person Accounts of Trial Participation

Two intriguing articles on participation in clinical trials were published this week. Both happen to be about breast cancer, but both touch squarely on some universal points:

ABC News features patient Haralee Weintraub, who has enrolled in 5 trials in the past 10 years. While she is unusual for having participated in so many studies, Weintraub offers great insights into the barriers and benefits of trial participation, including the fact that many benefits – such as close follow-up and attention from the treatment team – are not obvious at first.

Meanwhile, the New York Times’ recurring column from Dr Peter Bach on his wife’s breast cancer offers a moving description of her consent into a trial. His essay focuses mainly on the incremental, slow pace of cancer research (“this arduous slog”) and how it is both incredibly frustrating and absolutely necessary for long-term improvements in treatment.

Wednesday, March 16, 2011

Realistic Optimism in Clinical Trials

The concept of “unrealistic optimism” among clinical trial participants has gotten a fair bit of press lately, mostly due to a small study published in IRB: Ethics and Human Research. (I should stress the smallness of the study: it was a survey given to 72 blood cancer patients. This is worth noting in light of the slightly-bizarre Medscape headline that optimism “plagues” clinical trials.)

I was therefore happy to see this article reporting out of the Society for Surgical Oncology. In looking at breast cancer outcomes between surgical oncologists and general surgeons, the authors appear to have found that most of the beneficial outcomes among patients treated by surgical oncologists can be ascribed to clinical trial participation. Some major findings:
  • 56% of patients treated by a surgical oncologist participated in a trial, versus only 7% of those treated by a general surgeon
  • Clinical trial patients had significantly longer median follow-up than non-participants (44.6 months vs. 38.5 months)
  • Most importantly, clinical trial patients had significantly improved overall survival at 5 years than non-participants (31% vs. 26%)

Of course, the study reported on in the IRB article did not compare non-trial participants’ attitudes, so these aren’t necessarily contradictory results. However, I suspect that the message of “clinical trial participation” entails “better follow-up” entails “improved outcomes” will not get the same eye-catching headline in Medscape. Which is a shame, since we already have enough negative press about clinical trials out there.

Tuesday, March 1, 2011

What is the Optimal Rate of Clinical Trial Participation?

The authors of EDICT's white paper, in their executive summary, take a bleak view of the current state of clinical trial accrual:

Of critical concern is the fact that despite numerous years of discussion and the implementation of new federal and state policies, very few Americans actually take part in clinical trials, especially those at greatest risk for disease. Of the estimated 80,000 clinical trials that are conducted every year in the U.S., only 2.3 million Americans take part in these research studies -- or less than one percent of the entire U.S. population.
The paper goes on to discuss the underrepresentation of minority populations in clinical trials, and does not return to this point. And while it's certainly not central to the paper's thesis (in fact, in some ways it works against it), it is a perception that certainly appears to be a common one among those involved in clinical research.

When we say that "only" 2.3 million Americans take part in clinical research, we rely directly on an assumption that more than 2.3 million Americans should take part.

This leads immediately to the question: how many more?

If we are trying to increase participation rates, the magnitude of the desired improvement is one of the first and most central facts we need. Do we want a 10% increase, or a 10-fold increase? The steps required to achieve these will be radically different, so it would seem important to know.

It should also be pointed out: in some very real sense, the ideal rate of clinical trial participation, at least for pre-marketing trials, is 0%. Participating in these trials by definition means being potentially exposed to a treatment that the FDA believes has insufficient evidence of safety and/or efficacy. In an ideal world, we would not expose any patient to that risk. Even in today's non-ideal world, we have already decided not to expose any patients to medications that have not produced some preliminary evidence of safety and efficacy in animals. That is, we have already established one threshold below which we believe human involvement is unacceptably risky -- in a better world, with more information, we would raise that threshold much higher than the current criteria for IND approval.

This is not just a hypothetical concern. Where we set our threshold for acceptable risk should drive much of our thinking about how much we want to encourage (or discourage) people from shouldering that risk. Landmine detection, for example, is a noble but risky profession: we may agree that it is acceptable for rational adults to choose to enter into that field, and we may certainly applaud their heroism. However, that does not mean that we will unanimously agree on how many adults should be urged to join their ranks, nor does it mean that we will not strive and hope for the day that no human is exposed to that risk.

So, we're not talking about the ideal rate of participation, we're talking about the optimal rate. How many people should get involved, given a) the risks involved in being exposed to investigational treatment, against b) the potential benefit to the participant and/or mankind? For how many will the expected potential benefit outweigh the expected total cost? I have not seen any systematic attempt to answer this question.

The first thing that should be obvious here is that the optimal rate of participation should vary based upon the severity of the disease and the available, approved medications to treat it. In nonserious conditions (eg, keratosis pilaris), and/or conditions with a very good recovery rate (eg, veisalgia), we should expect participation rates to be low, and in some cases close to zero in the absence of major potential benefit. Conversely, we should desire higher participation rates in fatal conditions with few if any legitimate treatment alternatives (eg, late-stage metastatic cancers). In fact, if we surveyed actual participation rates by disease severity and prognosis, I think we would find that this relationship generally holds true already.
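The severity relationship above can be made concrete with a toy expected-value calculation. To be clear, every probability and utility below is hypothetical and on an arbitrary scale -- this is a way of structuring the question, not an answer to it.

```python
def expected_net_benefit(p_benefit, benefit, p_harm, harm, societal_value=0.0):
    """Toy expected-value framing of the trial-participation decision.

    A rational participation decision weighs the chance and size of
    personal benefit against the chance and size of harm, plus any
    value the participant places on contributing to medical knowledge.
    All inputs are illustrative utilities on an arbitrary scale.
    """
    return p_benefit * benefit - p_harm * harm + societal_value

# A severe disease with no good treatment alternatives: even a modest
# chance of benefit can outweigh substantial risk.
severe = expected_net_benefit(p_benefit=0.20, benefit=100,
                              p_harm=0.30, harm=40, societal_value=5)

# A non-serious, self-limiting condition: the same risk profile is
# much harder to justify, because the potential upside is small.
mild = expected_net_benefit(p_benefit=0.20, benefit=5,
                            p_harm=0.30, harm=40, societal_value=5)

print(severe, mild)  # severe is positive, mild is negative
```

Under these (made-up) numbers, participation makes sense in the severe case and not in the mild one, which is exactly the pattern we should expect actual participation rates to follow.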

I should qualify the above by noting that it really doesn't apply to a number of clinical trial designs, most notably observational trials and phase 1 studies in healthy volunteers. Of course, most of the discussion around clinical trial participation does not apply to these types of trials, either, as they are mostly focused on access to novel treatments.