Tuesday, May 23, 2017

REMOTE Redux: DTP trials are still hard

Maybe those pesky sites are good for something after all. 

It's been six years since Pfizer boldly announced the launch of its "clinical trial in a box". The REMOTE trial was designed to be run entirely online, with no research sites: study information and consent were delivered via the web, and medications and diaries were shipped directly to patients' homes.

Despite the initial fanfare, within a month REMOTE's enrollment target on ClinicalTrials.gov was quietly reduced from 600 patients to 283. The smaller trial ended not with a bang but a whimper, having randomized only 18 patients in over a year of recruiting.

Still, the allure of direct-to-patient clinical trials remains strong, due to a confluence of two factors. The first is a frenzy of interest in running "patient centric clinical trials": sponsors are scrambling to do something – anything – to show they have shifted to a patient-centered mindset. We cannot seem to agree on what this means (as a great illustration, a recent article in Forbes on "How Patients Are Changing Clinical Trials" contained no specific examples of actual trials that had been changed by patients), but running a trial that directly engages patients wherever they are seems like it could work.

The less-openly-discussed other factor leading to interest in these DIY trials is sponsors' continuing willingness to heap almost all of the blame for slow-moving studies onto their research sites. If it’s all the sites’ fault – the reasoning goes – then cutting them out of the process should result in trials that are both faster and cheaper. (There are reasons to be skeptical about this, as I have discussed in the past, but the desire to drop all those pesky sites is palpable.)

However, while a few proof-of-concept studies have been run, there really doesn't seem to have been another attempt at a full-blown direct-to-patient clinical trial. Those other pilots have been more successful, but they had fairly lightweight protocols. For all its problems, REMOTE was a seriously ambitious project: it attempted to package a full-blown interventional clinical trial, not an observational study.

In this context, it's great to see published results of the TAPIR Trial in vasculitis, which as far as I can tell is the first real attempt to run a DIY trial of a similar magnitude to REMOTE.

TAPIR was actually two parallel trials, identical in every respect except for their sites: one trial used a traditional group of 8 sites, while the other was virtual and recruited patients from anywhere in the country. So this was a real-time, head-to-head assessment of site performance.

And the results after a full two years of active enrollment?
  • Traditional sites: 49 enrolled
  • Patient centric: 10 enrolled
Even though we are six years on, and online/mobile communications are even more ubiquitous, we still see the exact same struggle to enroll patients.

Maybe it’s time to stop blaming the sites? To be fair, they didn’t exactly set the world on fire – and I’m guessing the total cost of activating the 8 sites significantly exceeded the costs of setting up the virtual recruitment and patient logistics. But still, the site-less, “patient centric” approach once again came up astonishingly short.


Krischer J, Cronholm PF, Burroughs C, McAlear CA, Borchin R, Easley E, Davis T, Kullman J, Carette S, Khalidi N, Koening C, Langford CA, Monach P, Moreland L, Pagnoux C, Specks U, Sreih AG, Ytterberg S, Merkel PA, & Vasculitis Clinical Research Consortium. (2017). Experience With Direct-to-Patient Recruitment for Enrollment Into a Clinical Trial in a Rare Disease: A Web-Based Study. Journal of Medical Internet Research, 19(2). PMID: 28246067

Thursday, March 30, 2017

Retention metrics, simplified

[Originally posted on First Patient In]

In my experience, most clinical trials do not suffer from significant retention issues. This is a testament to the collaborative good will of most patients who consent to participate, and to the patient-first attitude of most research coordinators.

However, in many trials – especially those that last more than a year – the question of whether there is a retention issue will come up at some point while the trial’s still going. This is often associated with a jump in early terminations, which can occur as the first cohort of enrollees has been in the trial for a while.

It’s a good question to ask midstream: are we on course to have as many patients fully complete the trial as we’d originally anticipated?

However, the way we go about answering the question is often flawed and confusing. Here’s an example: a sponsor came to us with what they thought was a higher rate of early terminations than expected. The main problem? They weren't actually sure.

Here’s their data. Can you tell?

Original retention graph.
If you can, please let me know how! While this chart is remarkably ... full of numbers, it provides no actual insight into when patients are dropping out, and no way that I can tell to project eventual total retention.

In addition, measuring the “retention rate” as a simple ratio of active to terminated patients will not provide an accurate benchmark until the trial is almost over. Here's why: patients tend to drop out later in a trial, so as long as you’re enrolling new patients, your retention rate will be artificially high. When enrollment ends, your retention rate will appear to drop rapidly – but this is only because of the artificial lift you had earlier.

In fact, that was exactly the problem the sponsor had: when enrollment ended, the retention rate started dropping. It’s good to be concerned, but it’s also important to know how to answer the question.
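To make the artifact concrete, here is a quick simulation sketch. All of the numbers (enrollment window, follow-up length, drop-out pattern) are invented for illustration; this is not the sponsor's data.

```python
# Toy simulation (invented numbers, not the sponsor's data) of why the
# active-vs-terminated ratio looks reassuring while enrollment is open and
# then appears to collapse once enrollment ends.
import random

random.seed(1)

N_PATIENTS = 300
ENROLL_MONTHS = 18      # patients enroll uniformly over 18 months
FOLLOW_UP = 24          # each patient is followed for up to 24 months
TRUE_COMPLETION = 0.70  # 70% of patients will ultimately complete

patients = []
for _ in range(N_PATIENTS):
    enroll = random.uniform(0, ENROLL_MONTHS)
    # drop-outs tend to happen late in a patient's participation
    drop = None if random.random() < TRUE_COMPLETION else random.uniform(10, FOLLOW_UP)
    patients.append((enroll, drop))

def naive_retention(month):
    """Share of patients enrolled by `month` who have not yet terminated."""
    enrolled = [p for p in patients if p[0] <= month]
    dropped = [p for p in enrolled if p[1] is not None and p[0] + p[1] <= month]
    return 1 - len(dropped) / len(enrolled) if enrolled else 1.0

for month in (6, 12, 18, 24, 30, 36):
    print(f"calendar month {month:2d}: naive retention = {naive_retention(month):.0%}")

# The printed rate stays in the high 90s while enrollment is open, then slides
# toward the true 70% completion rate after enrollment closes. Nothing changed
# in patient behavior; the early denominator was simply full of recent enrollees.
```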

Fortunately, there is a very simple way to get a clear answer in most cases – one that’s probably already in use by your biostats team around the corner: the Kaplan-Meier “survival” curve.

Here is the same study data, but patient retention is simply depicted as a K-M graph. The key difference is that instead of calendar dates, we used the relative measure of time in the trial for each patient. That way we can easily spot where the trends are.


In this case, we were able to establish quickly that patient drop-outs were increasing at a relatively small constant rate, with a higher percentage of drops coinciding with the one-year study visit. Most importantly, we were able to very accurately predict the eventual number of patients who would complete the trial. And it only took one graph!
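For anyone who wants to build the same kind of graph, here is a minimal sketch of the underlying calculation: a hand-rolled Kaplan-Meier estimate over time-in-trial, with invented observations standing in for the real data. The same estimate, plus confidence intervals and plotting, is available from the lifelines package if a dependency is acceptable.

```python
# A minimal, hand-rolled Kaplan-Meier retention estimate. Time is measured from
# each patient's own enrollment ("months in trial"), and patients who completed
# or are still active are censored (dropped=False). The observations below are
# invented for illustration.
from collections import Counter

# (months_in_trial, dropped_early?)
observations = [
    (2.0, False), (3.5, True), (6.0, False), (7.0, False), (9.0, True),
    (12.0, True), (12.0, False), (14.0, False), (15.5, True), (18.0, False),
]

def km_retention(obs):
    """Return [(time, estimated fraction of patients retained past that time)]."""
    drops = Counter(t for t, dropped in obs if dropped)   # drop-outs at each time point
    surviving, curve = 1.0, []
    for t in sorted(drops):
        at_risk = sum(1 for time, _ in obs if time >= t)  # still in trial just before t
        surviving *= 1 - drops[t] / at_risk
        curve.append((t, surviving))
    return curve

for t, retained in km_retention(observations):
    print(f"{t:5.1f} months in trial: {retained:.1%} still retained")
```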




Saturday, March 18, 2017

The Streetlight Effect and 505(b)(2) approvals

It is a surprisingly common peril among analysts: we don’t have the data to answer the question we’re interested in, so we answer a related question where we do have data. Unfortunately, the new answer turns out to shed no light on the original interesting question.

This is sometimes referred to as the Streetlight Effect – a phenomenon aptly illustrated by Mutt and Jeff over half a century ago:


This is the situation that the Tufts Center for the Study of Drug Development seems to have gotten itself into in its latest "Impact Report". It's worth walking through how an interesting question ends up as an uninteresting answer.

So, here’s an interesting question:
My company owns a drug that may be approvable through FDA’s 505(b)(2) pathway. What is the estimated time and cost difference between pursuing 505(b)(2) approval and conventional approval?
That’s "interesting", I suppose I should add, for a certain subset of folks working in drug development and commercialization. It’s only interesting to that peculiar niche, but for those people I suspect it’s extremely interesting - because it is a real situation that a drug company may find itself in, and there are concrete consequences to the decision.

Unfortunately, this is also a really difficult question to answer. As phrased, you'd almost need a randomized trial to answer it. Let’s create a version which is less interesting but easier to answer:
What are the overall development time and cost differences between drugs seeking approval via 505(b)(2) and conventional pathways?
This is much easier to answer, as pharmaceutical companies could look back on the development times and costs of all their compounds, and directly compare the different types. It is, however, a much less useful question. Many new drugs are simply not eligible for 505(b)(2) approval. If those drugs are substantially different in any way (riskier, more novel, etc.), then they will change the comparison in highly non-useful ways. In fact, in 2014, only 1 drug classified as a New Molecular Entity (NME) went through 505(b)(2) approval, versus 32 that went through conventional approval. And there are many other qualities that set 505(b)(2) drugs apart.

Extreme qualitative differences of 505(b)(2) drugs.
Source: Thomson Reuters analysis via RAPS

So we’re likely to get a lot of confounding factors in our comparison, and it’s unclear how the answer would (or should) guide us if we were truly trying to decide which route to take for a particular new drug. It might help us if we were trying to evaluate a large-scale shift to prioritizing 505(b)(2) eligible drugs, however.

Unfortunately, even this question is apparently too difficult to answer. Instead, the Tufts CSDD chose to ask and answer yet another variant:
What is the difference in time that it takes the FDA for its internal review process between 505(b)(2) and conventionally-approved drugs?
This question has the supreme virtue of being answerable. In fact, I believe that all of the data you’d need is contained within the approval letter that FDA publishes for each new approved drug.

But at the same time, it isn’t a particularly interesting question anymore. The promise of the 505(b)(2) pathway is that it should reduce total development time and cost, but on both those dimensions, the report appears to fall flat.
  • Cost: This analysis says nothing about reduced costs – those savings would mostly come in the form of fewer clinical trials, and this focuses entirely on the FDA review process.
  • Time: FDA review and approval is only a fraction of a drug’s journey from patent to market. In fact, it often takes up less than 10% of the time from initial IND to approval. So any differences in approval times will easily be overshadowed by differences in time spent in development.
But even more fundamentally, the problem here is that this study gives the appearance of providing an answer to our original question, but in fact is entirely uninformative in this regard. The accompanying press release states:
The 505(b)(2) approval pathway for new drug applications in the United States, aimed at avoiding unnecessary duplication of studies performed on a previously approved drug, has not led to shorter approval times.
This is more than a bit misleading. The 505(b)(2) statute does not in any way address approval timelines – that’s not its intent. So showing that it hasn’t led to shorter approval times is less of an insight than it is a natural consequence of the law as written.

Most importantly, showing that 505(b)(2) drugs had a longer average approval time than conventionally-approved drugs in no way should be interpreted as adding any evidence to the idea that those drugs were slowed down by the 505(b)(2) process itself. Because 505(b)(2) drugs are qualitatively different from other new molecules, this study can’t claim that they would have been developed faster had their owners initially chosen to go the route of conventional approval. In fact, such a decision might have resulted in both increased time in trials and increased approval time.

This study simply is not designed to provide an answer to the truly interesting underlying question.

[Disclosure: the above review is based entirely on a CSDD press release and summary page. The actual report costs $125, which is well in excess of this blog’s expense limit. It is entirely possible that the report itself contains more-informative insights, and I’ll happily update this post if that should come to my attention.]

Wednesday, February 22, 2017

Establishing efficacy - without humans?

The decade following passage of FDAAA has been one of easing standards for drug approvals in the US, most notably with the advent of “breakthrough” designation created by FDASIA in 2012 and the 21st Century Cures Act in 2016.

Although, as of this writing, there is no nominee for FDA Commissioner, it appears to be safe to say that the current administration intends to accelerate the pace of deregulation, mostly through further lowering of approval requirements. In fact, some of the leading contenders for the position are on record as supporting a return to pre-Kefauver-Harris days, when drug efficacy was not even considered for approval.
Build a better mouse model, and pharma will beat a path to your door - no laws needed.

In this context, it is at least refreshing to read a proposal to increase efficacy standards. This comes from two bioethicists at McGill University, who make the somewhat-startling case for a higher degree of efficacy evaluation before a drug begins any testing in humans.
We contend that a lack of emphasis on evidence for the efficacy of drug candidates is all too common in decisions about whether an experimental medicine can be tested in humans. We call for infrastructure, resources and better methods to rigorously evaluate the clinical promise of new interventions before testing them on humans for the first time.
The authors propose some sort of centralized clearinghouse to evaluate efficacy more rigorously. It is unclear what standards they envision this new multispecialty review body applying when deciding whether to green-light a drug for human testing. Instead they propose three questions:
  • What is the likelihood that the drug will prove clinically useful?
  • Assume the drug works in humans. What is the likelihood of observing the preclinical results?
  • Assume the drug does not work in humans. What is the likelihood of observing the preclinical results?
These seem like reasonable questions, I suppose – and are likely questions that are already being asked of preclinical data. They certainly do not rise to the level of providing a clear standard for regulatory approval, though perhaps it’s a reasonable place to start.
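For what it's worth, the three questions fit together as a simple Bayesian screen: start with a prior probability that the drug will prove clinically useful, then update it by how much more likely the observed preclinical results are if the drug works than if it doesn't. A toy calculation, with numbers invented purely for illustration, shows how a plausible-looking preclinical package can still leave the odds against the drug:

```python
# The three questions line up with a simple Bayes calculation: a prior probability
# that the drug is clinically useful, updated by how likely the observed preclinical
# package is under "works" versus "doesn't work". All numbers are invented purely
# for illustration; the authors propose no specific thresholds.
def posterior_useful(prior, p_data_if_works, p_data_if_not):
    """P(clinically useful | observed preclinical results)."""
    numerator = prior * p_data_if_works
    return numerator / (numerator + (1 - prior) * p_data_if_not)

# A 10% prior, with preclinical results fairly likely if the drug works (60%) but
# still quite plausible if it doesn't (30%), leaves only ~18% posterior probability.
print(f"{posterior_useful(prior=0.10, p_data_if_works=0.60, p_data_if_not=0.30):.0%}")
```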

The most obvious counterargument here is one that the authors curiously don’t pick up on at all: if we had the ability to accurately (or even semiaccurately) predict efficacy preclinically, pharma sponsors would already be doing it. The comment notes: “More-thorough assessments of clinical potential before trials begin could lower failure rates and drug-development costs.” And it’s hard not to agree: every pharmaceutical company would love to have even an incrementally-better sense of whether their early pipeline drugs will be shown to work as hoped.

The authors note
Commercial interests cannot be trusted to ensure that human trials are launched only when the case for clinical potential is robust. We believe that many FIH studies are launched on the basis of flimsy, underscrutinized evidence.
However, they do not produce any evidence that industry is in any way deliberately underperforming its preclinical work, merely that preclinical efficacy is often difficult to reproduce and is poorly correlated with drug performance in humans.

Pharmaceutical companies have many times more candidate compounds than they can possibly afford to put into clinical trials. Figuring out how to lower failure rates – or at least the total cost of failure – is a prominent industry obsession, and efficacy remains the largest source of late-stage trial failure. This quest to “fail faster” has resulted in larger and more expensive phase 2 trials, and even in increased efficacy testing in some phase 1 trials. And we do this not because of regulatory pressure, but because of hopes that these efforts will save overall costs. So it seems beyond probable that companies would immediately invest more in preclinical efficacy testing, if such testing could be shown to have any real predictive power. But generally speaking, it does not.

As a general rule, we don’t need regulations that are firmly aligned with market incentives; we need regulations if and when we think those incentives might run counter to the general good. In this case, there are already incredibly strong market incentives to improve preclinical assessments. Where companies have already attempted this with limited success, it would seem quixotic to think that regulatory fiat will accomplish more.

(One further point. The authors try to link the need for preclinical efficacy testing to the 2016 Bial tragedy. This seems incredibly tenuous: the authors speculate that perhaps trial participants would not have been harmed and killed if Bial had been required to produce more evidence of BIA102474’s clinical efficacy before embarking on its phase 1 trials. But that would have been entirely coincidental in this case: even if the drug had in fact shown more evidence of therapeutic promise, the tragedy still would have happened, because it had nothing at all to do with the drug’s efficacy.

This is to some extent a minor nitpick, since the argument in favor of earlier efficacy testing does not depend on a link to Bial. However, I bring it up because a) the authors dedicate the first four paragraphs of their comment to the link, and b) there appears to be a minor trend of using the death and injuries of that trial to justify an array of otherwise-unrelated initiatives. This seems like a trend we should discourage.)

[Update 2/23: I posted this last night, not realizing that only a few hours earlier, John LaMattina had published on this same article. His take is similar to mine, in that he is suspicious of the idea that pharmaceutical companies would knowingly push ineffective drugs up their pipeline.]

Kimmelman, J., & Federico, C. (2017). Consider drug efficacy before first-in-human trials. Nature, 542(7639), 25-27. DOI: 10.1038/542025a

Tuesday, February 7, 2017

Jerry Matczak

Jerry Matczak passed away suddenly last Thursday at the much-too-young age of 54.

I can say, without exaggeration, that Jerry embodied pretty much everything I aspire to be in my professional life. The MedCityNews headline called him a “social media guru”, but in reality he was temperamentally the exact opposite of a "guru":

He was constantly curious; it seemed that every conversation I had with him was composed mainly of questions. Many of us try to be “listen first, talk second” types, but Jerry was a “listen first, ask questions, listen some more, then talk” type.

He also never stopped trying to figure out how to improve whatever he was working on. He participated in a lot of pilot projects, which means he was a part of a lot of projects that didn’t meet their objectives – but I never witnessed Jerry being the least bit negative or frustrated. Every project was just another opportunity to learn more.

Mostly, though, Jerry was remarkable in his ability to connect with patients, even patients who were deeply distrustful of his employer and industry. If nothing else, I hope you read the words of two such patients, coming from very different places, with remarkably similar reactions to Jerry:


Jerry, thank you for your service and your example. I carry it with me.


Monday, November 21, 2016

The first paid research subject in written history?

On this date 349 years ago, Samuel Pepys relates in his famous diary a remarkable story about an upcoming medical experiment. As far as I can tell, this is the first written description of a paid research subject.

According to his account, the man (who he describes as “a little frantic”) was to be paid to undergo a blood transfusion from a sheep. It was hypothesized that the blood of this calm and docile animal would help to calm the man.

Some interesting things to note about this experiment:
  • Equipoise. There is explicit disagreement about what effect the experimental treatment will have: according to Pepys, "some think it may have a good effect upon him as a frantic man by cooling his blood, others that it will not have any effect at all".
  • Results published. An account of the experiment was published just two weeks later in the journal Philosophical Transactions.
  • Medical Privacy. In this subsequent write-up, the research subject is identified as Arthur Coga, a former Cambridge divinity student. According to at least one account, being publicly identified had a bad effect on Coga, as people who had heard of him allegedly succeeded in getting him to spend his stipend on drink (though no sources are provided to confirm this story).
  • Patient Reported Outcome. Coga was apparently chosen because, although mentally ill, he was still considered educated enough to give an accurate description of the treatment effect. 
Depending on your perspective, this may also be a very early account of the placebo effect, or a classic case of ignoring the patient’s experience. Because even though his report was positive, the clinicians remained skeptical. From the journal article:
The Man after this operation, as well as in it, found himself very well, and hath given in his own Narrative under his own hand, enlarging more upon the benefit, he thinks, he hath received by it, than we think fit to own as yet.
…and in fact, a subsequent diary entry from Pepys mentions meeting Coga, with similarly mixed impressions: “he finds himself much better since, and as a new man, but he is cracked a little in his head”.

The amount Coga was paid for his participation? Twenty shillings – at the time, that was exactly one Guinea.

[Image credit: Wellcome Images]




Monday, July 25, 2016

Will Your Family Make You a Better Trial Participant?

It is becoming increasingly accepted within the research community that patient engagement leads to a host of positive outcomes – most importantly (at least practically speaking) improved clinical trial recruitment and retention.

But while we can all agree that "patient engagement is good" in a highly general sense, we don't have much consensus on what the implications of that idea might be. There is precious little hard evidence about how to either attract engaged patients, or how we might effectively turn "regular patients" into "engaged patients".

That latter point - that we could improve trial enrollment and completion rates by converting the (very large) pool of less-engaged patients - is a central tenet of the mHealth movement in clinical trials. Since technology can now accompany us almost anywhere, it would seem that we have an unprecedented opportunity to reach out and connect with current and potential trial participants.

However, there are signs that this promised revolution in patient engagement hasn't come about. From the decline of new apps being downloaded to the startlingly high rate of people abandoning their wearable health devices, there's a growing body of evidence suggesting that we aren't in fact making very good progress towards increasing engagement. We appear to have underestimated the inertia of the disengaged patient.

So what can we do? We know people like their technology, but if they're not using it to engage with their healthcare decisions, we're no better off as a result.

Daniel Calvert, in a recent blog post at Parallel 6, offers an intriguing solution: he suggests we go beyond the patient and engage their wider group of loved ones. By engaging what Calvert calls the Support Circle - those people most likely to "encourage the health and well being of that patient as they undergo a difficult period of their life" - trial teams will find themselves with a more supported, and therefore more engaged, participant, with corresponding benefits to enrollment and retention.

Calvert outlines a number of potential mechanisms to get spouses, children, and other loved ones involved in the trial process:
During the consent process the patient can invite their support team in with them. A mobile application can be put on their phones enabling encouraging messages, emails, and texts to be sent. Loved ones can see if their companion or family member did indeed take today’s medication or make last Monday’s appointment. Gamification offers badges or pop-ups: “Two months of consecutive appointments attended” or “perfect eDiary log!” Loved ones can see those notifications, like/comment, and constantly encourage the patients. 
Supporting materials can also be included in the Support Circle application. There are a host of unknown terms to patients and their team. Glossaries, videos, FAQs, contact now, and so much more can be made available at their fingertips.
I have to admit I'm fascinated by Calvert's idea. I want him to be right: the picture of supportive, encouraging, loving spouses and children standing by to help a patient get through a clinical trial is an attractive one. So is the idea that they're just waiting for us to include them - all we need to do is a bit of digital communication with them to get them fully on board as members of the study team.

The problem, however, remains: we have absolutely no evidence that this approach will work. There is no data showing that it is superior to other approaches to engage trial patients.

(In fact, we may even have some indirect evidence that it may hinder enrollment: in trials that require active caregiver participation, such as those in Alzheimer's Disease, caregivers are believed to often contribute to the barriers to patient enrollment).

Calvert's idea is a good one, and it's worthy of consideration. More importantly, it's worthy of being rigorously tested against other recruitment and retention approaches. We have a lot of cool new technologies, and even more great ideas - we're not lacking for those. What we're lacking is hard data showing us how these things perform. What we especially need is comparative data showing how new tactics work relative to other approaches.

Over 5 years ago, I wrote a blog post bemoaning the sloppy approaches we take in trial recruitment - a fact made all the more painfully ironic by the massive intellectual rigor of the trials themselves. I'm not at all sure that we've made any real progress in those 5 years.

In my next post, I'll outline what I believe are some of the critical steps we need to take to improve the current situation, and start bringing some solid evidence to the table along with our ideas.

[Photo credit: Flickr user Matthew G, "Love (of technology)"]




Tuesday, July 14, 2015

Waiver of Informed Consent - proposed changes in the 21st Century Cures Act

Adam Feuerstein points out - and expresses considerable alarm over - an overlooked clause in the 21st Century Cures Act:


In another tweet, he suggests that the act will "decimate" informed consent in drug trials. Subsequent responses and retweets did nothing to clarify the situation, and if anything tended to spread, rather than address, Feuerstein's confusion.

Below is a quick recap of the current regulatory context and a real-life example of where the new wording may be helpful. In short, though, I think it's safe to say:


  1. Waiving informed consent is not new; it's already permitted under current regs
  2. The standards for obtaining a waiver of consent are stringent
  3. They may, in fact, be too stringent in a small number of situations
  4. The act may, in fact, be helpful in those situations
  5. Feuerstein may, in fact, need to chill out a little bit


(For the purposes of this discussion, I’m talking about drug trials, but I believe the device trial situation is parallel.)

Section 505(i) - the section this act proposes to amend - instructs the Secretary of Health and Human Services to promulgate rules regarding clinical research. Subsection 4 addresses informed consent:

…the manufacturer, or the sponsor of the investigation, requir[ing] that experts using such drugs for investigational purposes certify to such manufacturer or sponsor that they will inform any human beings to whom such drugs, or any controls used in connection therewith, are being administered, or their representatives, that such drugs are being used for investigational purposes and will obtain the consent of such human beings or their representatives, except where it is not feasible or it is contrary to the best interests of such human beings.

[emphasis  mine]

Note that this section already recognizes situations where informed consent may be waived for practical or ethical reasons.

These rules were in fact promulgated under 45 CFR part 46, section 116. The relevant bit – as far as this conversation goes – regards circumstances under which informed consent might be fully or partially waived. Specifically, there are 4 criteria, all of which need to be met:

 (1) The research involves no more than minimal risk to the subjects;
 (2) The waiver or alteration will not adversely affect the rights and welfare of the subjects;
 (3) The research could not practicably be carried out without the waiver or alteration; and
 (4) Whenever appropriate, the subjects will be provided with additional pertinent information after participation.

In practice, this is an especially difficult set of criteria to meet for most studies. Criterion (1) rules out most “conventional” clinical trials, because the hallmarks of those trials (use of an investigational medicine, randomization of treatment, blinding of treatment allocation) are all deemed to be more than “minimal risk”. That leaves observational studies – but even many of these cannot clear the bar of criterion (3).

That word “practicably” is a doozy.

Here’s an all-too-real example from recent personal experience. A drug manufacturer wants to understand physicians’ rationales for performing a certain procedure. It seems – but there is little hard data – that a lot of physicians do not strictly follow guidelines on when to perform the procedure. So we devise a study: whenever the procedure is performed, we ask the physician to complete a quick form categorizing why they made their decision. We also ask him or her to transcribe a few pieces of data from the patient chart.

Even though the patients aren’t personally identifiable, the collection of medical data qualifies this as a clinical trial.

It’s a minimal risk trial, definitely: the trial doesn’t dictate at all what the doctor should do, it just asks him or her to record what they did and why, and supply a bit of medical context for the decision. All told, we estimated 15 minutes of physician time to complete the form.

The IRB monitoring the trial, however, denied our request for a waiver of informed consent, since it was “practicable” (not easy, but possible) to obtain informed consent from the patient.  Informed consent – even with a slimmed-down form – was going to take a minimum of 30 minutes, so the length of the physician’s involvement tripled. In addition, many physicians opted out of the trial because they felt that the informed consent process added unnecessary anxiety and alarm for their patients, and provided no corresponding benefit.

The end result was not surprising: the budget for the trial more than doubled, and enrollment was far below expectations.

Which leads to two questions:

  1. Did the informed consent appreciably help a single patient in the trial? Very arguably, no. Consenting to being “in” the trial made zero difference in the patients’ care, added time to their stay in the clinic, and possibly added to their anxiety.
  2. Was less knowledge collected as a result? Absolutely, yes. The sponsor could have run two studies for the same cost. Instead, they ultimately reduced the power of the trial in order to cut losses.


Bottom line, it appears that the modifications proposed in the 21st Century Cures Act really only target trials like the one in the example. The language clearly retains criteria 1 and 2 of the current HHS regs, which are the most important from a patient safety perspective, but cuts down the “practicability” requirement, potentially permitting high quality studies to be run with less time and cost.

Ultimately, it looks like a very small, but positive, change to the current rules.

The rest of the act appears to be a mash-up of some very good and some very bad (or at least not fully thought out) ideas. However, this clause should not be cause for alarm.

Thursday, July 31, 2014

Patient Centered Trials - Your Thoughts Needed

The good folks down at eyeforpharma have asked me to write a few blog posts in the run-up to their Patient Centered Clinical Trials conference in Boston this September. In my second article - Buzzword Innovation: The Patient Centricity “Fad” and the Token Patient - I went over some concerns I have regarding the sudden burst of enthusiasm for patient centricity in the clinical trial world.

Apparently, that hit a nerve – in an email, Ulrich Neumann tells me that “your last post elicited quite a few responses in my inbox (varied, some denouncing it as a fad, others strongly protesting the notion, hailing it as the future).”

In preparing my follow up post, I’ve spoken to a couple people on the leading edge of patient engagement:


In addition to their thoughts, eyeforpharma is keenly interested in hearing from more people. They've even posted a survey – from Ulrich:
To get a better idea of what other folks think of the idea, I am sending out a little ad hoc survey. Only 4 questions (so people hopefully do it). Added benefit: There is a massive 50% one-time discount for completed surveys until Friday connected to it as an incentive).
So, here are two things for you to do:

  1. Complete the survey and share your thoughts
  2. Come to the conference and tell us all exactly what you think

Look forward to seeing you there.

[Conflict of Interest Disclosure: I am attending the Patient Centered Clinical Trials conference. Having everyone saying the same thing at such conferences conflicts with my ability to find them interesting.]


Tuesday, March 18, 2014

These Words Have (Temporarily) Relocated

Near the end of last year, I had the bright idea of starting a second blog, Placebo Lead-In, to capture a lot of smaller items that I found interesting but wasn't going to work up into a full-blown, 1000 word post.

According to Murphy’s Law, or the Law of Unintended Consequences, or the Law of Biting Off More Than You Can Chew, or some such similar iron rule of the universe, what happened next should have been predictable.

First, my team at CAHG Trials launched a new blog, First Patient In. FPI is dedicated to an open discussion of patient recruitment ideas, and I’m extremely proud of what we've published so far.

Next, I was invited to be a guest blogger for the upcoming Partnerships in Clinical Trials Conference.

Suddenly, I've gone from 1 blog to 4. And while my writing output appears to have increased, it definitely hasn't quadrupled. So this blog has been quiet for a bit too long as a result.

The good news is that the situation is temporary - Partnerships will actually happen at the end of this month. (If you’re going: drop me a line and let’s meet. If you’re not: you really should come and join us!) My contributions to FPI will settle into a monthly post, as I have a fascinating and clever team to handle most of the content.

In case you've missed it, then, here is a brief summary of my posts elsewhere over the past 2 months.

First Patient In


Partnerships in Clinical Trials



Please take a look, and I will see you back here soon.

[Photo credit: detour sign via Flickr user crossley]

Sunday, January 12, 2014

Megafund versus Megalosaurus: Funding Drug Development


This new 10-minute TEDMED talk is getting quite a bit of attention:


 (if embedded video does not work, try the TED site itself.)

In it, Roger Stein claims to have created an approach to advancing drugs through clinical trials that will "fundamentally change the way research for cancer and lots of other things gets done".

Because the costs of bringing a drug to market are so high, time from discovery to marketing is so long, and the chances of success of any individual drug are so grim, betting on any individual drug is foolish, according to Stein. Instead, risks for a large number of potential assets should be pooled, with the eventual winners paying for the losers.

To do this, Stein proposes what he calls a "megafund" - a large collection of assets (candidate therapies). Through some modeling and simulations, Stein suggests some of the qualities of an ideal megafund: it would need in the neighborhood of $3-15 billion to acquire and manage 80-150 drugs. A fund of this size and with these assets would be able to provide an equity yield of about 12%, which would be "right in the investment sweet spot of pension funds and 401(k) plans".
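The engine under the hood is plain diversification: pool enough independent long shots and the portfolio's outcome becomes far more predictable than any single bet. A toy Monte Carlo sketch illustrates the idea; the success probability, per-drug cost, and payoff below are invented for illustration and are not Stein's actual model parameters.

```python
# A tiny Monte Carlo sketch of the pooling argument. The success probability, cost,
# and payoff are invented for illustration; Stein's actual model is far richer
# (correlated assets, debt tranches, staged investment, and so on).
import random

random.seed(0)

P_SUCCESS = 0.08       # assumed chance any one compound reaches market
COST_PER_DRUG = 100    # assumed $M to carry one compound through development
PAYOFF = 2_000         # assumed $M value of one marketed success

def portfolio_return(n_drugs):
    """Simulated net return on a portfolio of n_drugs independent compounds."""
    successes = sum(random.random() < P_SUCCESS for _ in range(n_drugs))
    invested = n_drugs * COST_PER_DRUG
    return (successes * PAYOFF - invested) / invested

for n in (1, 10, 100):
    sims = sorted(portfolio_return(n) for _ in range(10_000))
    p_loss = sum(r < 0 for r in sims) / len(sims)
    print(f"{n:3d} drugs: median return {sims[5_000]:+.0%}, "
          f"5th-95th percentile {sims[500]:+.0%} to {sims[9_500]:+.0%}, "
          f"P(lose money) {p_loss:.0%}")

# As the portfolio grows, the chance of losing money falls and the spread of
# outcomes narrows, which is what makes the pooled fund look investable even
# though each individual compound remains a long shot.
```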

Here's what I find striking about those numbers: let's compare Stein's Megafund to everyone's favorite Megalosaurus, the old-fashioned Big Pharma dinosaur sometimes known as Pfizer:


                    Megafund (Stein)     Megalosaurus (Pfizer)
Funding             $3-15 billion        $9 billion estimated 2013 R&D spend
Assets              80-150               81 (in pipeline, plus many more in preclinical)
Return on Equity    12% (estimated)      9.2% (last 10 years) to 13.2% (last 5)
Since Pfizer's a dinosaur, it can't possibly compete with the sleek, modern Megafund, right? Right?

These numbers look remarkably similar. Pfizer - and a number of its peers - are spending a Megafund-sized budget each year to shepherd a Megafund-sized number of compounds through development. (Note that many of Pfizer's peers have substantially fewer drugs in their published pipelines, but they own many times more compounds - the pipeline is just the drugs they've elected to file an IND on.)

What am I missing here? I understand that a fund is not a company, and there may be some benefits to decoupling asset management decisions from actual operations, but this won't be a tremendous gain, and would presumably be at least partially offset by increased transaction costs (Megafund has to source, contract, manage, and audit vendors to design and run all its trials, after all, and I don't know why I'd think it could do that any more cheaply than Big Pharma can). And having a giant drug pipeline's go/no go decisions made by "financial engineers" rather than pharma industry folks would seem like a scenario that's only really seen as an upgrade by the financial engineers themselves.

A tweet from V.S. Schulz pointed me to a post on Derek Lowe's In the Pipeline blog, which led to a link to this paper by Stein and 2 others in Nature Biotechnology from a year and a half ago. The authors spend most of their time differentiating themselves from other structures in the technical, financial details rather than explaining why a megafund would work better at finding new drugs. However, they definitely think this is qualitatively different from existing pharma companies, and offer a couple of reasons. First,
[D]ebt financing can be structured to be more “patient” than private or public equity by specifying longer maturities; 10- to 20-year maturities are not atypical for corporate bonds. ... Such long horizons contrast sharply with the considerably shorter horizons of venture capitalists, and the even shorter quarterly earnings cycle and intra-daily price fluctuations faced by public companies.
I'm not sure where this line of thought is coming from. Certainly all big pharma companies' plans extend decades into the future - there may be quarterly earnings reports to file, but that's a force exerted far more on sales and marketing teams than on drug development. The financing of pharmaceutical development is already extremely long term.

Even in the venture-backed world, Stein and team are wrong if they believe there is pervasive pressure to magically deliver drugs in record time. Investors and biotech management are both keenly aware of the tradeoffs between speed and regulatory success. Even this week's came-from-nowhere Cinderella story, Intercept Pharmaceuticals, was founded with venture money over a decade ago - these "longer maturities" are standard issue in biotech. We aren't making iPhone apps here, guys.

Second,
Although big pharma companies are central to the later stages of drug development and the marketing and distributing of approved drugs, they do not currently play as active a role at the riskier preclinical and early stages of development
Again, I'm unsure why this is supposed to be so. Of Pfizer's 81 pipeline compounds, 55 are in Phase 1 or 2 - a ratio that's pretty heavy on early, risky projects, and not too different from industry as a whole. Pfizer does not publish data on the number of compounds it currently has undergoing preclinical testing, but there's no clear reason I can think of to assume it's a small number.

So, is Megafund truly a revolutionary idea, or is it basically a mathematical deck-chair-rearrangement for the "efficiencies of scale" behemoths we've already got?

[Image: the world's first known dino, Megalosaurus, via Wikipedia.]

Monday, January 6, 2014

Can a Form Letter from FDA "Blow Your Mind"?

Adam Feuerstein appears to be a generally astute observer of the biotech scene. As a finance writer, he's accosted daily with egregiously hyped claims from small drug companies and their investors, and I think he tends to do an excellent job of spotting cases where breathless excitement is unaccompanied by substantive information.


However, Feuerstein's healthy skepticism seems to have abandoned him last year in the case of a biotech called Sarepta Therapeutics, who released some highly promising - but also incredibly limited - data on their treatment for Duchenne muscular dystrophy. After a disappointing interaction with the FDA, Sarepta's stock dropped, and Feuerstein appeared to realize that he'd lost some objectivity on the topic.


However, with the new year comes new optimism, and Feuerstein seems to be back to squinting hard at tea leaves - this time in the case of a form letter from the FDA.


He claims that the contents of the letter will "blow your mind". To him, the key passage is:


We understand that you feel that eteplirsen is highly effective, and may be confused by what you have read or heard about FDA's actions on eteplirsen. Unfortunately, the information reported in the press or discussed in blogs does not necessarily reflect FDA's position. FDA has reached no conclusions about the possibility of using accelerated approval for any new drug for the treatment of Duchenne muscular dystrophy, and for eteplirsen in particular.


Feuerstein appears to think that the fact that FDA "has reached no conclusions" means that it may be "changing its mind". To which he adds: "Wow!"
Adam Feuerstein: This time, too much froth, not enough coffee?


I'm not sure why he thinks that. As far as I can tell, the FDA will never reach a conclusion like this before it has gone through the actual review process. After all, if FDA already knows the answer before the full review, what would the point of the review even be? It would seem a tremendous waste of agency resources. Not to mention how non-level the playing field would be if some companies were given early yes/no decisions while others had to go through a full review.


It seems fair to ask: is this a substantive change by FDA review teams, or would it be their standard response to any speculation about whether and how they would approve or reject a new drug submission? Can Feuerstein point to other cases where FDA has given a definitive yes or no on an application before the application was ever filed? I suspect not, but am open to seeing examples.


A more plausible theory for this letter is that the FDA is attempting a bit of damage control. It is not permitted to share anything specific it said or wrote to Sarepta about the drug, and it has come under some serious criticism for “rejecting” Sarepta’s Accelerated Approval submission. The agency has been sensitive to the DMD community, even going so far as to have Janet Woodcock and Bob Temple meet with DMD parents and advocates last February. Sarepta has effectively positioned FDA as the reason for its delay in approval, but no letters have actually been published, so the conversation has been a bit one-sided. This letter appears to be an attempt at balancing perspectives a bit, although the FDA is still hamstrung by its restriction on relating any specific communications.

Ultimately, this is a form letter that contains no new information: FDA has reached no conclusions because FDA is not permitted to reach conclusions until it has completed a fair and thorough review, which won't happen until the drug is actually submitted for approval.

We talk about "transparency" in terms of releasing clinical trials data, but to me there is a great case to be made for increase regulatory transparency. The benefits to routine publication of most FDA correspondence and meeting results (including such things as Complete Response letters, explaining FDA's thinking when it rejects new applications) would actually go a long way towards improving public understanding of the drug review and approval process.