Monday, January 6, 2014

Can a Form Letter from FDA "Blow Your Mind"?

Adam Feuerstein appears to be a generally astute observer of the biotech scene. As a finance writer, he's accosted daily with egregiously hyped claims from small drug companies and their investors, and I think he tends to do an excellent job of spotting cases where breathless excitement is unaccompanied by substantive information.


However, Feuerstein's healthy skepticism seems to have abandoned him last year in the case of a biotech called Sarepta Therapeutics, which released some highly promising - but also incredibly limited - data on its treatment for Duchenne muscular dystrophy. After a disappointing interaction with the FDA, Sarepta's stock dropped, and Feuerstein appeared to realize that he'd lost some objectivity on the topic.


However, with the new year comes new optimism, and Feuerstein seems to be back to squinting hard at tea leaves - this time in the case of a form letter from the FDA.


He claims that the contents of the letter will "blow your mind". To him, the key passage is:


We understand that you feel that eteplirsen is highly effective, and may be confused by what you have read or heard about FDA's actions on eteplirsen. Unfortunately, the information reported in the press or discussed in blogs does not necessarily reflect FDA's position. FDA has reached no conclusions about the possibility of using accelerated approval for any new drug for the treatment of Duchenne muscular dystrophy, and for eteplirsen in particular.


Feuerstein appears to think that FDA's statement that it "has reached no conclusions" may mean it is "changing its mind". To which he adds: "Wow!"
[Photo caption: Adam Feuerstein: This time, too much froth, not enough coffee?]


I'm not sure why he thinks that. As far as I can tell, the FDA will never reach a conclusion like this before it has gone through the actual review process. After all, if FDA already knew the answer before the full review, what would the point of the review even be? It would seem a tremendous waste of agency resources. Not to mention how uneven the playing field would be if some companies were given early yes/no decisions while others had to go through a full review.


It seems fair to ask: is this a substantive change by FDA review teams, or would it be their standard response to any speculation about whether and how they would approve or reject a new drug submission? Can Feuerstein point to other cases where FDA has given a definitive yes or no on an application before the application was ever filed? I suspect not, but am open to seeing examples.


A more plausible theory for this letter is that the FDA is attempting a bit of damage control. It is not permitted to share anything specific it said or wrote to Sarepta about the drug, and has come under some serious criticism for “rejecting” Sarepta’s Accelerated Approval submission. The agency has been sensitive to the DMD community, even going so far as to have Janet Woodcock and Bob Temple meet with DMD parents and advocates last February. Sarepta has effectively positioned FDA as the reason for its delay in approval, but no letters have actually been published, so the conversation has been a bit one-sided. This letter appears to be an attempt at balancing perspectives a bit, although the FDA is still hamstrung by its restriction on relating any specific communications.

Ultimately, this is a form letter that contains no new information: FDA has reached no conclusions because FDA is not permitted to reach conclusions until it has completed a fair and thorough review, which won't happen until the drug is actually submitted for approval.

We talk about "transparency" in terms of releasing clinical trials data, but to me there is a great case to be made for increased regulatory transparency as well. Routine publication of most FDA correspondence and meeting results (including such things as Complete Response letters, which explain FDA's thinking when it rejects new applications) would go a long way towards improving public understanding of the drug review and approval process.

Thursday, January 2, 2014

The Coming of the MOOCT?

Big online studies, in search of millions of participants.

Back in September, I enrolled in the Health eHeart Study - an entirely online research study tracking cardiac health. (Think Framingham Heart, cast wider and shallower - less intensive follow-up, but spread out to the entire country.)


[In the spirit of full disclosure, I should note that I haven’t completed any follow-up activities on the Health eHeart website yet. Yes, I am officially part of the research adherence problem…]


Yesterday, I learned of the Quantified Diet Project, an entirely online/mobile app-supported randomized trial of 10 different weight loss regimens. The intervention is short - only 4 weeks - but that’s probably substantially longer than most New Year diets manage to last, and should be just long enough to detect some early differences among the approaches.


I have been excited about the potential for online medical research for quite some time. For me, the real beginning was when PatientsLikeMe published the results of their online lithium for ALS research study - as I wrote at the time, I have never been so enthused about a negative trial before or since.



That was two and a half years ago, and there hasn't been a ton of activity since then outside of PatientsLikeMe (who have expanded and formalized their activities in the Open Research Exchange). So I’m eager to hear how these two new studies go. There are some interesting similarities and differences:


  • Both are university/private collaborations, and both (perhaps unsurprisingly) are rooted in California: Health eHeart is jointly run by UCSF and the American Heart Association, while Quantified Diet is run by app developer Lift with scientific support from an (unidentified?) team at Berkeley.
  • Both are pushing for a million or more participants, dwarfing even very large traditional studies by orders of magnitude.
  • Health eHeart is entirely observational, and researchers will have the ability to request its data to test their own hypotheses, whereas Quantified Diet is a controlled, randomized trial.


[Image: Data entry screen on Health eHeart]
I really like the user interface for Health eHeart - it’s extremely simple, with a logical flow to the sections. It appears to be designed with older participants in mind, and the extensive data intake is subdivided into a large number of subsections, each of which can typically be completed in 2-4 minutes.



I have not enrolled in the Quantified Diet, but it appears to have a strong social media presence. You can follow the Twitter conversation through the #quantdiet hashtag. The semantic web and linked data guru Kerstin Forsberg has already posted about joining, and I hope to hear more from her and from clinical trial social media expert Rahlyn Gossen, who has also joined.


To me, probably the most intriguing technical feature of the QuantDiet study is its “voluntary randomization” design. Participants can self-select into the diet of their choice, or can choose to be randomly assigned by the application. It will be interesting to see whether any differences emerge between the participants who chose a particular arm and those who were randomized into that arm - how much does a person’s preference matter?
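
To make the self-selection-or-randomization mechanic concrete, here is a minimal sketch in Python of how such an assignment step might work. This is purely illustrative - the arm names, function, and flags are my own assumptions, not anything taken from the actual QuantDiet app.

```python
import random

# Hypothetical study arms - the real QuantDiet arms may differ.
DIET_ARMS = ["paleo", "vegetarian", "slow-carb", "calorie-counting", "whole-foods"]

def assign_arm(participant_id, preferred_arm=None, rng=random):
    """Assign a participant to a diet arm.

    If the participant expresses a preference, honor it and flag the record
    as self-selected; otherwise assign an arm at random. The flag is what
    makes the later preference-vs-randomization comparison possible.
    """
    if preferred_arm is not None:
        if preferred_arm not in DIET_ARMS:
            raise ValueError(f"Unknown arm: {preferred_arm}")
        return {"id": participant_id, "arm": preferred_arm, "randomized": False}
    return {"id": participant_id, "arm": rng.choice(DIET_ARMS), "randomized": True}

print(assign_arm("p001", preferred_arm="paleo"))  # self-selected
print(assign_arm("p002"))                         # randomized
```

At analysis time, the stored "randomized" flag would let researchers compare outcomes for participants who chose an arm against those who were assigned to that same arm at random.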


In an earlier tweet I asked, “is this a MOOCT?” - short for Massive Open Online Clinical Trial. I don’t know if that’s the best name for it, and I’d love to hear other suggestions. By any other name, however, these are still great initiatives, and I look forward to seeing them thrive in the coming years.

The implications for pharmaceutical and medical device companies are still unclear. Pfizer's jump into the world of "virtual trials" was a major bust, and widely second-guessed. But I believe there is definitely a role and a path forward here, and these big efforts may teach us a lot about how patients want to be engaged online.

Thursday, December 19, 2013

Patient Recruitment: Taking the Low Road

The Wall Street Journal has an interesting article on the use of “Big Data” to identify and solicit potential clinical trial participants. The premise is that large consumer data aggregators like Experian can target patients with certain diseases through correlations with non-health behavior. Examples given include “a preference for jazz” being associated with arthritis and “shopping online for clothes” being an indicator of obesity.
[Image caption: We've seen this story before.]

In this way, allegedly, clinical trial patient recruitment companies can more narrowly target their solicitations* for patients to enroll in clinical trials.

In the spirit of full disclosure, I should mention that I was interviewed by the reporter of this article, although I am not quoted. My comments generally ran along three lines, none of which really fit in with the main storyline of the article:

  1. I am highly skeptical that these analyses are actually effective at locating patients
  2. These methods aren't really new – they’re the same tactics that direct marketers have been using for years
  3. Most importantly, the clinical trials community can – and should – be moving towards open and collaborative patient engagement. Relying on tactics like consumer data snooping and telemarketing is an enormous step backwards.

The first point is this: certainly some diseases have correlates in the real world, but these correlates tend to be pretty weak, and are therefore unreliable predictors of disease. Maybe it’s true that those struggling with obesity tend to buy more clothes online (I don’t know if it’s true or not – honestly it sounds a bit more like an association built on easy stereotypes than on hard data). But many obese people will not shop online (they will want to be sure the clothes actually fit), and vast numbers of people with low or average BMIs will shop for clothes online.  So the consumer data will tend to have very low predictive value. The claims that liking jazz and owning cats are predictive of having arthritis are even more tenuous. These correlates are going to be several times weaker than basic demographic information like age and gender. And for more complex conditions, these associations fall apart.
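
To put some (entirely made-up) numbers on that intuition, here is a quick Bayes-rule calculation in Python. The prevalence and shopping rates below are hypothetical, chosen only to show how little a weak behavioral correlate moves the needle:

```python
# Hypothetical illustration: how much does a weak correlate improve targeting?
base_rate = 0.25            # assumed prevalence of the condition on the mailing list
p_shop_given_cond = 0.55    # assumed share of affected consumers who shop for clothes online
p_shop_given_not = 0.45     # assumed share of everyone else who does the same

# Bayes' rule: P(condition | shops online)
p_shop = p_shop_given_cond * base_rate + p_shop_given_not * (1 - base_rate)
ppv = (p_shop_given_cond * base_rate) / p_shop

print(f"P(condition | behavior) = {ppv:.1%}")  # ~28.9%, versus a 25% base rate
```

Even with a fairly generous assumed gap in behavior, flagging online clothes shoppers raises the hit rate only a few points above the base rate - which is exactly the "very low predictive value" problem described above.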

Marketers claim to solve this by factoring a complex web of associations through a magical black box – the WSJ article mentions that they “applied a computed algorithm” to flag patients. Having seen behind the curtain on a few of these magic algorithms, I can confidently say that they are underwhelming in their sophistication. Hand-wavy references to Big Data and Algorithms are just the tools used to impress pharma clients. (The downside, of course, is that you can’t help but come across as big brotherish – see this coverage from Forbes for a taste of what happens when people accept these claims uncritically.)

But the effectiveness of these data slice-n-dicing activities is perhaps beside the point. They are really just a thin cover for old-fashioned boiler room tactics: direct mail and telemarketing. When I got my first introduction to direct marketing in the 90’s, it was the exact same program – get lead lists from big companies like Experian, then aggressively mail and call until you get a response.

The limited effectiveness and old-school aggressiveness of these programs are nicely illustrated in the article by one person’s experience:
Larna Godsey, of Wichita, Kan., says she received a dozen phone calls about a diabetes drug study over the past year from a company that didn't identify itself. Ms. Godsey, 63, doesn't suffer from the disease, but she has researched it on the Internet and donated to diabetes-related causes. "I don't know if it's just a coincidence or if they're somehow getting my information," says Ms. Godsey, who filed a complaint with the FTC this year.
The article notes that one recruitment company, Acurian, has been the subject of over 500 FTC complaints regarding its tactics. It’s clear that Big Data is just the latest buzzword lipstick on the telemarketing pig. And that’s the real shame of it.

We have arrived at an unprecedented opportunity for patients, researchers, and private industry to come together and discuss, as equals, research priorities and goals. Online patient communities like Inspire and PatientsLikeMe have created new mechanisms to share clinical trial opportunities and even create new studies. Dedicated disease advocates have jumped right into the world of clinical research, with groups like the Cystic Fibrosis Foundation and Michael J. Fox Foundation no longer content with raising research funds, but actively leading the design and operations of new studies.

Some – not yet enough – pharmaceutical companies have embraced the opportunity to work more openly and honestly with patient groups. The scandal of stories like this is not the Wizard of Oz histrionics of secret computer algorithms, but that we as an industry continue to take the low road and resort to questionable boiler room tactics.

It’s past time for the entire patient recruitment industry to drop the sleaze and move into the 21st century. I would hope that patient groups and researchers will come together as well to vigorously oppose these kinds of tactics when they encounter them.

(*According to the article, Acurian "has said that calls related to medical studies aren't advertisements as defined by law," so we can agree to call them "solicitations".)