Tuesday, June 28, 2011

DDMAC to weigh in on trial design?

The FDA Law Blog has an incredibly interesting entry regarding last week's Untitled Letter from the FDA's Division of Drug Marketing, Advertising, and Communications (DDMAC) to Novartis.

The letter, regarding a detail aid for Novartis's Focalin XR, accuses Novartis of making "unsubstantiated superiority claims" for Focalin XR in comparison to Concerta. What is interesting -- and completely new as far as I can tell -- is that the claims DDMAC is taking exception to are, in fact, primary endpoints of two controlled clinical trials:


Treatment for ADHD consists of symptom relief over an extended time period; thus, ADHD medications must control disease symptoms over the entire treatment course. However, the referenced clinical studies only focused on one specific time point (2 hours post-dose) as the primary efficacy measure in the treatment course of Focalin XR and Concerta. By focusing on the 2 hour post-dose time point, the studies did not account for the different pharmacokinetic profiles and subsequent efficacy profiles associated with Focalin XR and Concerta over the entire treatment course.

So, in essence, DDMAC appears to be taking exception to the trial design, not to Novartis's interpretation of the trial results. This would seem to be a dramatic change in scope.

I am not familiar with the trials in question -- I will post an update with more information shortly. I would be especially interested to understand: Were these pivotal trials that played a role in Focalin XR's approval? If so, did the FDA review them in a Special Protocol Assessment (in which case two distinct branches of the FDA are offering divergent opinions on these endpoints)?

Sunday, April 24, 2011

Social Networking for Clinical Research

No matter what, negative clinical trial results are sad. We can appreciate, intellectually, that clinical equipoise is important and that negative results are a natural consequence of conducting ethical trials, but it is impossible not to feel disappointed when yet another promising therapy fails to hold up.

However, the negative results published today in Nature Biotechnology on a groundbreaking trial in ALS deserve to be celebrated. The trial was conducted exclusively through PatientsLikeMe, the online medical social network that serves as a forum for patients in all disease areas to “share real-world health experiences.”

According to a very nice write-up in the Wall Street Journal, the trial was conceived and initiated by ALS patients who were part of the PatientsLikeMe ALS site:


Jamie Heywood, chairman and co-founder of PatientsLikeMe, said the idea for the new study came from patients. After the 2008 paper reporting lithium slowed down the disease in 16 ALS patients, some members of the site suggested posting their experiences with the drug in an online spreadsheet to figure out if it was working. PatientsLikeMe offered instead to run a more rigorous observational study with members of the network to increase chances of getting a valid result.

The study included standardized symptom reporting from 596 patients (149 taking lithium and 447 matched controls). After 9 months, the patients taking lithium showed almost no difference in ALS symptoms compared to their controls, and preliminary (negative) results were released in late 2008. Although the trial was not randomized and not blinded – significant methodological issues, to be sure – it is still exciting for a number of reasons.
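The paper's actual matching algorithm is considerably more sophisticated, but a toy sketch may help illustrate the core idea of a matched observational comparison: pair each lithium patient with untreated patients on a similar pre-treatment trajectory, then compare subsequent symptom change. All column and function names below are my own invention, not from the study.

import pandas as pd

def matched_lithium_comparison(df, n_controls=3):
    """df has one row per patient, with hypothetical columns:
    'on_lithium' (bool), 'baseline_slope' (pre-treatment symptom
    decline per month), 'change_9mo' (symptom-score change at 9 months)."""
    treated = df[df['on_lithium']]
    pool = df[~df['on_lithium']]
    effects = []
    for _, patient in treated.iterrows():
        # Match on pre-treatment progression rate: pick the controls
        # most similar to this patient before lithium was started.
        distance = (pool['baseline_slope'] - patient['baseline_slope']).abs()
        controls = pool.loc[distance.nsmallest(n_controls).index]
        effects.append(patient['change_9mo'] - controls['change_9mo'].mean())
    # A mean effect near zero is consistent with "no benefit from lithium."
    return sum(effects) / len(effects)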

First, the study was conducted at incredible speed. Only 9 months elapsed between PatientsLikeMe deploying its tool to users and the release of topline results. In contrast, 2 more traditional, controlled clinical trials that were initiated to verify the first study’s results had not even managed to enroll their first patient during that time. In many cases like this – especially looking at new uses of established, generic drugs – private industry has little incentive to conduct an expensive trial. And academic researchers tend to move at a pace that, while not quite glacial, is not as rapid as acutely-suffering patients would like.

(The only concern I have about speed is the time it took to get this paper published. Why was there a 2+ year gap between results and publication?)

Second, this trial represents one of the best uses of “off-label” patient experience that I know of. Many of the physicians I talk to struggle with off-label, patient-initiated treatment: they cannot support it, but it is difficult to argue with a patient when there is so little hard evidence. This trial demonstrates an intelligent path towards tapping into and systematically organizing some of the thousands of individual off-label experiences and producing something clinically useful. As the authors state in the Nature paper:


Positive results from phase 1 and phase 2 trials can lead to changes in patient behavior, particularly when a drug is readily available. [...] The ongoing availability of a surveillance mechanism such as ours might help provide evidence to support or refute self-experimentation.

Ironically, the fact that the trial found no benefit for lithium may prove to be its most far-reaching result. A positive trial would have been open to criticism for its inability to compensate for the placebo effect. These results run counter to the expected placebo effect, lending strong support to the conclusion that the trial was thoughtfully designed and conducted. I hope this will provide immense encouragement to others looking to take this method forward.

A lot has been written over the past 3-4 years about the enormous power of social media to change healthcare as we know it. In general, I have been skeptical of most of these claims, as most of them fail to plausibly explain the connection between "Lots of people on Facebook" and "Improved clinical outcomes". I applaud the patients and staff at PatientsLikeMe for finding a way to work together to break new ground in this area.

Monday, April 11, 2011

Accelerated Approvals are Too Fast, Except When They're Too Slow

A great article in Medscape reports on two unrelated publications on the FDA’s process for granting (and following up on) Accelerated Approvals of oncology drugs.

First, a very solid review of all oncology drugs approved through the accelerated process since 1992 appears in the latest issue of the Journal of the National Cancer Institute. The review, written by FDA personnel, is chiefly concerned with the slow pace of confirmatory Phase 3 trials – over a third (18 of 47) have not yet been completed, and even those that have been completed took considerable time. The authors consider process changes and fines as viable means for the FDA to encourage timely completion.

Second, over at the New England Journal of Medicine, Dr Bruce Chabner has a perspective piece that looks at the flip side: he argues that some compounds should be considered even earlier for accelerated approval, using the example of Plexxikon’s much-heralded PLX4032, which showed an amazing 80% response rate in metastatic melanoma (albeit in a very small sample of 38 patients).

I would argue that we are just now starting to get enough experience to have a very good conversation about accelerated approval and how to improve it -- still, fewer than 50 data points (47 approved indications) mean that we need to remind ourselves that we're still mostly in the land of anecdote. However, it may be time to ask: how much does delay truly cost us in terms of our overall health? What is the cost of delayed approval (how many patients may potentially suffer from lack of access), and correspondingly what is the cost of premature approval and/or delayed confirmation (how many patients are exposed to ineffective and toxic treatments)?
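To make that trade-off concrete, here is a deliberately crude back-of-envelope sketch. Every number in it is an invented placeholder, not an estimate from either publication:

# Toy model of the two costs described above; all inputs are
# hypothetical placeholders, not figures from the JNCI review.
PATIENTS_PER_YEAR = 10_000   # patients who would take the drug each year
P_EFFECTIVE = 0.6            # chance the drug truly works
BENEFIT_PER_PATIENT = 1.0    # health gain per treated patient (arbitrary units)
HARM_PER_PATIENT = 0.3       # toxicity burden per patient on an ineffective drug

def cost_of_delayed_approval(years):
    # Expected benefit forgone while an effective drug stays unavailable.
    return years * PATIENTS_PER_YEAR * P_EFFECTIVE * BENEFIT_PER_PATIENT

def cost_of_delayed_confirmation(years):
    # Expected harm while an ineffective drug stays on the market.
    return years * PATIENTS_PER_YEAR * (1 - P_EFFECTIVE) * HARM_PER_PATIENT

print(cost_of_delayed_approval(2))      # 12000.0
print(cost_of_delayed_confirmation(2))  # 2400.0

The point is not the outputs, which simply reflect my made-up inputs, but that once we agree to estimate a handful of quantities like these, the "too fast vs. too slow" debate becomes a tractable calculation rather than a shouting match.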

The good news, to me, is that we're finally starting to collect enough information to make a rational estimate of these questions.

Monday, April 4, 2011

Nice WSJ article on p-values

The Wall Street Journal has a brief but useful lay overview of the concept of statistical significance. Without mentioning them by name, it provides accurate synopses of some of the least understood aspects of clinical trial data (the related-but-quite-different concept of clinical significance and the problem of multiplicity). Although ostensibly about the US Supreme Court's refusal to accept statistical significance as a standard for public disclosure of adverse event reports in its recent Matrixx ruling, the article has broad applicability, and I'm always happy to see these concepts clearly articulated.
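For readers curious about the multiplicity problem specifically, a quick simulation makes it vivid: test enough endpoints at p < 0.05 and "significant" findings appear even when the drug does nothing. This sketch is my own illustration, not anything from the WSJ article:

import random

def chance_of_false_positive(n_endpoints, alpha=0.05, n_sims=100_000):
    """Probability that a trial of a completely ineffective treatment
    still yields at least one p < alpha result across n_endpoints
    independent endpoints (simulated under the null hypothesis)."""
    hits = sum(
        any(random.random() < alpha for _ in range(n_endpoints))
        for _ in range(n_sims)
    )
    return hits / n_sims

print(chance_of_false_positive(1))   # ~0.05
print(chance_of_false_positive(20))  # ~0.64, i.e. 1 - 0.95**20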

Friday, March 25, 2011

Mind the Gap

Modern clinical trials in the pharmaceutical industry are monuments of rigorous analysis. Trial designs are critically examined and debated extensively during the planning phase – we strain to locate possible sources of bias in advance, and adjust to minimize or compensate for them. We collect enormous quantities of efficacy and safety data using standardized, pre-validated techniques. Finally, a team of statisticians parses the data (adhering, of course, to an already-set-in-stone Statistical Analysis Plan to avoid the perils of post-hoc analysis) … then we turn both the data and the analysis over in their entirety to regulatory authorities, who in turn do all they can to verify that the results are accurate, correctly interpreted, and clinically relevant.

It is ironic, then, that our management of these trials is so casual and driven by personal opinions. We all like to talk a good game about metrics, but after the conversation we lapse back into our old, distinctly un-rigorous habits. Examples of this are everywhere once you start to look for them; one that just caught my eye is from a recent CenterWatch article:

Survey: Large sites winning more trials than small

Are large sites—hospitals, academic medical centers—getting all the trials, while smaller sites continue to fight for the work that’s left over?

That’s what results of a recent survey by Clinical Research Site Training (CRST) seem to indicate. The nearly 20-year-old site-training firm surveyed 500 U.S. sites in December 2010, finding that 66% of large sites say they have won more trials in the last three years. Smaller sites weren’t asked specifically, but anecdotally many small and medium-sized sites reported fewer trials in recent years.

Let me repeat that last part, with emphasis: “Smaller sites weren’t asked specifically, but anecdotally…”

At this point, the conversation should stop. Nothing to see here, folks -- we don’t actually have evidence of anything, only a survey data point juxtaposed with someone’s personal impression -- move along.

So what are we to do then? I think there are two clear areas where we collectively need to improve:

1. Know what we don’t know. The sad and simple fact is that there are a lot of things we just don’t have good data on. We need to resist the urge to grasp at straws to fill those knowledge gaps – it leaves the false impression that we’ve learned something.

2. Learn from our own backyard. As I mentioned earlier, good analytic practices are already pervasive on the scientific side of our trials. We need to bring that same rigor to trial operations by thinking about our data needs earlier in the process.

The good news is that we have everything we need to make this a reality – we just need to have a bit of courage to admit the gap (or, occasionally, chasm) of our ignorance on a number of critical issues and develop a thoughtful plan forward.

Thursday, March 24, 2011

People Who Disagree with Me Tend to End Up Being Investigated by the Federal Government

I don’t think this qualifies yet as a trend, but two disturbing announcements came back to back last week:

First: As you’ve probably heard, KV Pharmaceutical caused quite a stir when they announced the pricing for their old-yet-new drug Makena. In response, Senators Sherrod Brown (D-OH) and Amy Klobuchar (D-MN) sent a letter to the FTC demanding that it “initiate a formal investigation into any potential anticompetitive conduct” by KV. In explaining his call for the investigation, Brown notes:

Since KV Pharmaceuticals announced the intended price hike, I called on KV Pharmaceuticals to immediately reconsider their decision, but to this date the company continues to defend this astronomical price increase.

Second: One week after an FDA Advisory Committee voted 13 to 4 to recommend approving Novartis’s COPD drug indacaterol, Public Citizen wrote a letter to the US Office of Human Research Protections requesting that Novartis be investigated for conducting the very trials that supplied the evidence for that vote. The reason? Despite the fact that the FDA requested the trials be placebo-controlled, Public Citizen feels that Novartis should not have allowed patients to be on placebo. The letter shows no apparent consideration for the idea that a large number of thoughtful, well-informed people considered the design of these trials and came to the conclusion that they were ethical (not only the FDA, but also the independent Institutional Review Boards and Ethics Committees that oversaw each trial). Instead, Public Citizen blithely “look[s] forward to OHRP’s thorough and careful investigation of our allegations.”

The upshot of these two announcements seems to be: “we don’t like what you’re doing, and since we can’t get you to stop, we’ll try to initiate a federal investigation.” Even if neither of these efforts succeeds, they will still cause the companies involved to spend a significant amount of time and money defending themselves. In fact, maybe that’s the point: neither effort seems like a serious claim that actual laws were broken, but rather an attempt at intimidation.

Tuesday, March 22, 2011

Go Green, Recycle your Patients

Euthymics, a small Massachusetts-based biotech, recently announced the start of the TRIADE trial, which they describe as “Phase 2b/3a”. I am guessing that this somewhat rare designation means they are hoping the trial will count as pivotal but have not yet received formal agreement from the FDA on that point. Part of this may be due to the trial’s design – per the press release, they’re using a Sequential Parallel Comparison Design (SPCD).

This is an intriguing trial design because it captures one of the benefits of traditional crossover designs – increasing statistical power by “reusing” patients in multiple treatments – while avoiding many of the problems, most notably concerns about the persistence of treatment effect. Only a select but key subset of patients – those who were in the control arm but showed no response – are re-randomized to both arms. This group clearly has no treatment effect to persist, so it makes an excellent population for further testing. (It’s important to note that all patients are continued on treatment in order to preserve blinding.)
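Here is a minimal sketch of the SPCD patient flow as I understand it from public descriptions; the response rates and sample size are invented, and the real design specifies a weighted analysis that this toy version omits:

import random

def simulate_spcd(n=200, p_drug=0.50, p_placebo=0.35):
    """Toy simulation of SPCD patient flow. Stage 1 randomizes
    everyone; stage 2 re-randomizes only the placebo non-responders,
    who by definition carry no treatment effect forward."""
    stage1 = {'drug': [], 'placebo': []}
    stage2 = {'drug': [], 'placebo': []}
    for _ in range(n):
        arm = random.choice(['drug', 'placebo'])
        responded = random.random() < (p_drug if arm == 'drug' else p_placebo)
        stage1[arm].append(responded)
        if arm == 'placebo' and not responded:
            # Placebo non-responders are re-randomized; all other
            # patients simply continue, preserving the blind. (Reusing
            # the same response rates in stage 2 is a simplification,
            # since non-responders are probably a tougher population.)
            arm2 = random.choice(['drug', 'placebo'])
            stage2[arm2].append(
                random.random() < (p_drug if arm2 == 'drug' else p_placebo))
    return stage1, stage2

The power gain comes from the second drug-vs-placebo contrast among the re-randomized patients, which the design combines with the stage 1 contrast using pre-specified weights.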

In essence, we have a placebo run-in phase embedded within a traditional trial. It seems worth asking how this trial design compares against a simpler trial that includes such a run-in – I do not see any information on the website to help answer that.

And that points to the major drawback of the SPCD: it’s patented, and therefore not freely available to study and use. As far as I can tell, the design has not been through an FDA Special Protocol Assessment yet, which would certainly be a critical rite of passage towards greater acceptance. While I can appreciate the inventors’ desire to be rewarded for their creative breakthrough in devising the SPCD (and wish them nothing but good fortune for it), it appears that keeping the design proprietary may slow down efforts to validate and promote its use.

Monday, March 21, 2011

From Russia with (3 to 20 times more) Love

“Russia’s Clinical Trials are a Thriving Business”, trumpeted the news release that came to my inbox the other day. Inside was a rather startling – and ever-so-slightly odd – claim:

NPR Marketplace Health Desk Reporter Gregory Warner uncovers the truths about clinical trials in Russia; namely, the ability for biopharmaceutical companies to enroll patients 3 to 20 times faster than in the more established regions of North America and Western Europe.

Of course, as you might expect, the NPR reporter does not “uncover” that – rather, the 3 to 20 times faster “truth” is simply a verbatim statement from the CEO of ClinStar, a CRO specializing in running trials in Russia and Eastern Europe. There is no explanation of the 3-to-20 number, or why there is such a wide confidence interval (if that’s what that is).

The full NPR story goes on to hint that the business of Russian clinical trials may be a bit on the ethically cloudy side by associating it with past practices of lavishing gifts and attention on leading physicians (no direct tie is made – the reporter, however, not-so-subtly notes that one person who used to work in Russia as a drug rep now works in clinical trials). I think the implication here is that Russia gets results by any means necessary, and the pharma industry is excitedly queuing up to get its trials done faster.

However, this speed factor is coupled with the extremely modest claim that clinical trial business in Russia is “growing at 15% a year.” While this is certainly not a bad rate of growth, it’s hardly explosive. It is in fact comparable to the revenue growth of the overall CRO market for the few years preceding the current downturn, estimated at 12.2%, and dwarfed by the estimated 34% annual growth of the industry in India.

From my perspective, the industry seems very hesitant to put too many eggs in Eastern Europe’s basket just yet. We need faster trials, certainly, but we need reliable and clean data even more. Recent troubling research experience with Russia – most notably the Dimebon fiasco, where overwhelmingly positive data from Russian phase 2 trials turned out to be completely irreproducible in larger Western trials – has left the industry wary about the region. And wink-and-nod publicity about incredible speed gains will probably ultimately hurt wider acceptance of Eastern European trials more than it will help.

Sunday, March 20, 2011

1st-Person Accounts of Trial Participation

Two intriguing articles on participation in clinical trials were published this week. Both happen to be about breast cancer, but both touch squarely on some universal points:

ABC News features patient Haralee Weintraub, who has enrolled in 5 trials in the past 10 years. While she is unusual for having participated in so many studies, Weintraub offers great insights into the barriers and benefits of being in a trial, including the fact that many benefits – such as close follow-up and attention from the treatment team – are not obvious at first.

Meanwhile, the New York Times’ recurring column from Dr Peter Bach on his wife’s breast cancer offers a moving description of her consenting to enroll in a trial. His essay focuses mainly on the incremental, slow pace of cancer research (“this arduous slog”) and how it is both incredibly frustrating and absolutely necessary for long-term improvements in treatment.

Wednesday, March 16, 2011

Realistic Optimism in Clinical Trials

The concept of “unrealistic optimism” among clinical trial participants has gotten a fair bit of press lately, mostly due to a small study published in IRB: Ethics and Human Research. (I should stress the smallness of the study: it was a survey given to 72 blood cancer patients. This is worth noting in light of the slightly bizarre Medscape headline that optimism “plagues” clinical trials.)

I was therefore happy to see this article reporting out of the Society for Surgical Oncology. Comparing breast cancer outcomes between patients treated by surgical oncologists and those treated by general surgeons, the authors appear to have found that most of the benefit associated with treatment by a surgical oncologist can be ascribed to clinical trial participation. Some major findings:
  • 56% of patients treated by a surgical oncologist participated in a trial, versus only 7% of those treated by a general surgeon
  • Clinical trial patients had significantly longer median follow-up than non-participants (44.6 months vs. 38.5 months)
  • Most importantly, clinical trial patients had significantly better overall survival at 5 years than non-participants (31% vs. 26% – a rough significance check is sketched below)
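For anyone wanting to sanity-check a difference like 31% vs. 26%, the standard tool is a two-proportion z-test. The article does not report group sizes, so the ns below are purely hypothetical:

from math import sqrt

def two_proportion_z(x1, n1, x2, n2):
    """z statistic for the null hypothesis that two underlying
    proportions are equal (x successes out of n in each group)."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    return (p1 - p2) / se

# Invented group sizes of 500 per arm; |z| > 1.96 corresponds to p < 0.05.
print(two_proportion_z(155, 500, 130, 500))  # ~1.75

Note that with 500 patients per group, 31% vs. 26% would not quite reach conventional significance, so the study's actual cohorts were presumably larger than my invented ns.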

Of course, the study reported on in the IRB article did not compare non-trial participants’ attitudes, so these aren’t necessarily contradictory results. However, I suspect that the message that “clinical trial participation” entails “better follow-up,” which in turn entails “improved outcomes,” will not get the same eye-catching headline in Medscape. Which is a shame, since we already have enough negative press about clinical trials out there.

Tuesday, March 1, 2011

What is the Optimal Rate of Clinical Trial Participation?

The authors of EDICT's white paper, in their executive summary, take a bleak view of the current state of clinical trial accrual:

Of critical concern is the fact that despite numerous years of discussion and the implementation of new federal and state policies, very few Americans actually take part in clinical trials, especially those at greatest risk for disease. Of the estimated 80,000 clinical trials that are conducted every year in the U.S., only 2.3 million Americans take part in these research studies -- or less than one percent of the entire U.S. population.

The paper goes on to discuss the underrepresentation of minority populations in clinical trials, and does not return to this point. And while it's certainly not central to the paper's thesis (in fact, in some ways it works against it), it is a perception that certainly appears to be a common one among those involved in clinical research.

When we say that "only" 2.3 million Americans take part in clinical research, we rely directly on an assumption that more than 2.3 million Americans should take part.

This leads immediately to the question: how many more?

If we are trying to increase participation rates, the magnitude of the desired improvement is one of the first and most central facts we need. Do we want a 10% increase, or a 10-fold increase? The steps required to achieve these will be radically different, so it would seem important to know.

It should also be pointed out: in some very real sense, the ideal rate of clinical trial participation, at least for pre-marketing trials, is 0%. Participating in these trials by definition means being potentially exposed to a treatment that the FDA believes has insufficient evidence of safety and/or efficacy. In an ideal world, we would not expose any patient to that risk. Even in today's non-ideal world, we have already decided not to expose any patients to medications that have not produced some preliminary evidence of safety and efficacy in animals. That is, we have already established one threshold below which we believe human involvement is unacceptably risky -- in a better world, with more information, we would raise that threshold much higher than the current criteria for IND approval.

This is not just a hypothetical concern. Where we set our threshold for acceptable risk should drive much of our thinking about how much we want to encourage (or discourage) people from shouldering that risk. Landmine detection, for example, is a noble but risky profession: we may agree that it is acceptable for rational adults to choose to enter into that field, and we may certainly applaud their heroism. However, that does not mean that we will unanimously agree on how many adults should be urged to join their ranks, nor does it mean that we will not strive and hope for the day that no human is exposed to that risk.

So we're not talking about the ideal rate of participation; we're talking about the optimal rate. How many people should get involved, given a) the risks involved in being exposed to investigational treatment, against b) the potential benefit to the participant and/or mankind? For how many will the expected potential benefit outweigh the expected total cost? I have not seen any systematic attempt to answer this question.
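In my own purely illustrative notation, the question could be framed as follows: for each candidate participant $i$, let $p^{B}_{i}$ and $B_{i}$ be the probability and magnitude of benefit (to the participant and/or mankind), and $p^{H}_{i}$ and $H_{i}$ the probability and magnitude of harm from investigational exposure. Then

\[
\text{enroll } i \iff p^{B}_{i} B_{i} > p^{H}_{i} H_{i},
\qquad
N^{*} = \#\bigl\{\, i : p^{B}_{i} B_{i} > p^{H}_{i} H_{i} \bigr\}
\]

and the optimal participation rate is simply $N^{*}$ divided by the eligible population. None of these quantities is easy to estimate, but writing them down makes clear why the answer should differ from disease to disease.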

The first thing that should be obvious here is that the optimal rate of participation should vary based upon the severity of the disease and the available, approved medications to treat it. In nonserious conditions (eg, keratosis pilaris), and/or conditions with a very good recovery rate (eg, veisalgia), we should expect participation rates to be low, and in some cases close to zero in the absence of major potential benefit. Conversely, we should desire higher participation rates in fatal conditions with few if any legitimate treatment alternatives (eg, late-stage metastatic cancers). In fact, if we surveyed actual participation rates by disease severity and prognosis, I think we would find that this relationship generally holds true already.

I should qualify the above by noting that it really doesn't apply to a number of clinical trial designs, most notably observational trials and phase 1 studies in healthy volunteers. Of course, most of the discussion around clinical trial participation does not apply to these types of trials, either, as they are mostly focused on access to novel treatments.