Wednesday, February 22, 2017

Establishing efficacy - without humans?

The decade following passage of FDAAA has been one of easing standards for drug approvals in the US, most notably with the advent of “breakthrough” designation created by FDASIA in 2012 and the 21st Century Cures Act in 2016.

Although, as of this writing, there is no nominee for FDA Commissioner, it appears safe to say that the current administration intends to accelerate the pace of deregulation, mostly through further lowering of approval requirements. In fact, some of the leading contenders for the position are on record as supporting a return to pre-Kefauver-Harris days, when drug efficacy was not even a requirement for approval.
Build a better mouse model, and pharma will beat a path to your door - no laws needed.

In this context, it is at least refreshing to read a proposal to increase efficacy standards. This comes from two bioethicists at McGill University, who make the somewhat-startling case for a higher degree of efficacy evaluation before a drug begins any testing in humans.
We contend that a lack of emphasis on evidence for the efficacy of drug candidates is all too common in decisions about whether an experimental medicine can be tested in humans. We call for infrastructure, resources and better methods to rigorously evaluate the clinical promise of new interventions before testing them on humans for the first time.
The authors propose some sort of centralized clearinghouse to evaluate efficacy more rigorously. It is unclear what standards they envision this new multispecialty review body applying when deciding whether to green-light a drug to enter human testing. Instead, they propose three questions:
  • What is the likelihood that the drug will prove clinically useful?
  • Assume the drug works in humans. What is the likelihood of observing the preclinical results?
  • Assume the drug does not work in humans. What is the likelihood of observing the preclinical results?
These seem like reasonable questions, I suppose – and are likely questions that are already being asked of preclinical data. They certainly do not rise to the level of providing a clear standard for regulatory approval, though perhaps it’s a reasonable place to start.
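To make those three questions concrete, here is a minimal sketch of how the answers could combine under Bayes' rule: the second and third questions are the two likelihoods in a likelihood ratio, and the first serves as a prior. The numbers below are entirely hypothetical, chosen for illustration, and are not drawn from the comment itself.

```python
# Hypothetical sketch of how the three review questions combine via Bayes' rule.
# All numbers are invented for illustration; nothing here comes from the Nature comment.

def posterior_clinical_promise(prior_useful, p_data_if_works, p_data_if_fails):
    """Return P(drug is clinically useful | observed preclinical results).

    prior_useful     -- Q1: prior probability the drug proves clinically useful
    p_data_if_works  -- Q2: P(observing these preclinical results | drug works in humans)
    p_data_if_fails  -- Q3: P(observing these preclinical results | drug does not work)
    """
    numerator = prior_useful * p_data_if_works
    denominator = numerator + (1 - prior_useful) * p_data_if_fails
    return numerator / denominator

# Example: a 10% prior, results fairly likely if the drug works (0.8),
# but still quite plausible if it doesn't (0.4), since animal models are noisy.
print(round(posterior_clinical_promise(0.10, 0.80, 0.40), 2))  # 0.18
```

The point of the exercise is that a "positive" preclinical package can only move the needle as far as the gap between the second and third answers allows – which is exactly where the reproducibility problems discussed below come in.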

The most obvious counterargument here is one that the authors curiously don’t pick up on at all: if we had the ability to accurately (or even semiaccurately) predict efficacy preclinically, pharma sponsors would already be doing it. The comment notes: “More-thorough assessments of clinical potential before trials begin could lower failure rates and drug-development costs.” And it’s hard not to agree: every pharmaceutical company would love to have even an incrementally better sense of whether their early pipeline drugs will be shown to work as hoped.

The authors note:
Commercial interests cannot be trusted to ensure that human trials are launched only when the case for clinical potential is robust. We believe that many FIH studies are launched on the basis of flimsy, underscrutinized evidence.
However, they do not produce any evidence that industry is in any way deliberately underperforming on its preclinical work, merely that preclinical efficacy is often difficult to reproduce and is poorly correlated with drug performance in humans.

Pharmaceutical companies have many times more candidate compounds than they can possibly afford to put into clinical trials. Figuring out how to lower failure rates – or at least the total cost of failure – is a prominent industry obsession, and efficacy remains the largest source of late-stage trial failure. This quest to “fail faster” has resulted in larger and more expensive phase 2 trials, and even in increased efficacy testing in some phase 1 trials. And we do this not because of regulatory pressure, but because of hopes that these efforts will save overall costs. So it seems beyond probable that companies would immediately invest more in preclinical efficacy testing, if such testing could be shown to have any real predictive power. But generally speaking, it does not.

As a general rule, we don’t need regulations where market incentives are already firmly aligned with the general good; we need them if and when we think those incentives might run counter to it. In this case, there are already incredibly strong market incentives to improve preclinical assessments. Where companies have already tried to do exactly that, with limited success, it would seem quixotic to think that regulatory fiat will accomplish more.

(One further point. The authors try to link the need for preclinical efficacy testing to the 2016 Bial tragedy. This seems incredibly tenuous: the authors speculate that perhaps trial participants would not have been harmed and killed if Bial had been required to produce more evidence of BIA 10-2474’s clinical efficacy before embarking on its phase 1 trials. But that would have been entirely coincidental in this case: even if the drug had in fact had more evidence of therapeutic promise, the tragedy still would have happened, because it had nothing at all to do with the drug’s efficacy.

This is to some extent a minor nitpick, since the argument in favor of earlier efficacy testing does not depend on a link to Bial. However, I bring it up because a) the authors dedicate the first four paragraphs of their comment to the link, and b) there appears to be a minor trend of using the death and injuries of that trial to justify an array of otherwise-unrelated initiatives. This seems like a trend we should discourage.)

[Update 2/23: I posted this last night, not realizing that only a few hours earlier, John LaMattina had published on this same article. His take is similar to mine, in that he is suspicious of the idea that pharmaceutical companies would knowingly push ineffective drugs up their pipeline.]

Kimmelman, J., & Federico, C. (2017). Consider drug efficacy before first-in-human trials. Nature, 542(7639), 25-27. DOI: 10.1038/542025a

Tuesday, February 7, 2017

Jerry Matczak

Jerry Matczak passed away suddenly last Thursday at the much-too-young age of 54.

I can say, without exaggeration, that Jerry embodied pretty much everything I aspire to be in my professional life. The MedCityNews headline called him a “social media guru”, but in reality he was temperamentally the exact opposite of a "guru":

He was constantly curious; it seemed that every conversation I had with him was composed mainly of questions. Many of us try to be “listen first, talk second” types, but Jerry was a “listen first, ask questions, listen some more, then talk” type.

He also never stopped trying to figure out how to improve whatever he was working on. He participated in a lot of pilot projects, which meant he was part of a lot of projects that didn’t meet their objectives – but I never witnessed Jerry being the least bit negative or frustrated. Every project was just another opportunity to learn more.

Mostly, though, Jerry was remarkable in his ability to connect with patients, even patients who were deeply distrustful of his employer and industry. If nothing else, I hope you read the words of two such patients, coming from very different places, with remarkably similar reactions to Jerry:


Jerry, thank you for your service and your example. I carry it with me.


Monday, November 21, 2016

The first paid research subject in written history?

On this date 349 years ago, Samuel Pepys relates in his famous diary a remarkable story about an upcoming medical experiment. As far as I can tell, this is the first written description of a paid research subject.

According to his account, the man (whom he describes as “a little frantic”) was to be paid to undergo a blood transfusion from a sheep. It was hypothesized that the blood of this calm and docile animal would help to calm the man.

Some interesting things to note about this experiment:
  • Equipoise. There is explicit disagreement about what effect the experimental treatment will have: according to Pepys, "some think it may have a good effect upon him as a frantic man by cooling his blood, others that it will not have any effect at all".
  • Results published. An account of the experiment was published just two weeks later in the journal Philosophical Transactions.
  • Medical Privacy. In this subsequent write-up, the research subject is identified as Arthur Coga, a former Cambridge divinity student. According to at least one account, being publicly identified had a bad effect on Coga, as people who had heard of him allegedly succeeded in getting him to spend his stipend on drink (though no sources are provided to confirm this story).
  • Patient Reported Outcome. Coga was apparently chosen because, although mentally ill, he was still considered educated enough to give an accurate description of the treatment effect. 
Depending on your perspective, this may also be a very early account of the placebo effect, or a classic case of ignoring the patient’s experience: even though his report was positive, the clinicians remained skeptical. From the journal article:
The Man after this operation, as well as in it, found himself very well, and hath given in his own Narrative under his own hand, enlarging more upon the benefit, he thinks, he hath received by it, than we think fit to own as yet.
…and in fact, a subsequent diary entry from Pepys mentions meeting Coga, with similarly mixed impressions: “he finds himself much better since, and as a new man, but he is cracked a little in his head”.

The amount Coga was paid for his participation? Twenty shillings – at the time, that was exactly one Guinea.

[Image credit: Wellcome Images]