Wednesday, April 17, 2013

But WHY is There an App for That?


FDA should get out of the data entry business.

There’s an app for that!

We've all heard that more than enough times. It started as a line in an ad and has exploded into one of the top meme-mantras of our time: if your organization doesn't have an app, it would seem, you'd better get busy developing one.

Submitting your coffee shop review? Yes!
Submitting a serious med device problem? Less so!
So the fact that the FDA is promising to release a mobile app for physicians to report adverse events with devices is hardly shocking. But it is disappointing.

The current process for physicians and consumers to voluntarily submit adverse event information about drugs or medical devices is a bit cumbersome. The FDA's form 3500 requests quite a lot of contextual data: patient demographics, specifics of the problem, any lab tests or diagnostics that were run, and the eventual outcome. That makes sense, because it helps them to better understand the nature of the issue, and more data should provide a better ability to spot trends over time.

The drawback, of course, is that this makes data entry slower and more involved, which probably reduces the total number of adverse events reported – and, by most estimates, the number of reports is already far lower than the total number of actual events.

And that’s the problem: converting a data-entry-intensive paper or online activity into a data-entry-intensive mobile app activity just modernizes the hassle. In fact, it probably makes it worse, as entering large amounts of free-form text is not, shall we say, a strong point of mobile apps.

The solution here is for FDA to get itself out of the data entry business. Adverse event information – and the critical contextual data to go with it – already exist in a variety of data streams. Rather than asking physicians and patients to re-enter this data, FDA should be working on interfaces for them to transfer the data that’s already there. That means developing a robust set of Application Programming Interfaces (APIs) that can be used by the teams who are developing medical data apps – everything from hospital EMR systems, to physician reference apps, to patient medication and symptom tracking apps. Those applications are likely to have far more data inside them than FDA currently receives, so enabling more seamless transmission of that data should be a top priority.

(A simple analogy might be helpful here: when an application on your computer or phone crashes, the operating system generally bundles any diagnostic information together, then asks if you want to submit the error data to the manufacturer. FDA should be working with external developers on this type of “1-click” system rather than providing user-unfriendly forms to fill out.)
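To make the idea concrete, here's a rough sketch of what a one-click submission could look like from an app developer's side. Everything in it is hypothetical – the endpoint URL, the field names, and the payload structure are placeholders for whatever FDA would actually specify – but it shows the shape of the idea: the app bundles the contextual data it already holds and sends it in a single call, instead of asking a physician to retype it into form 3500.

```python
# Hypothetical sketch only: FDA does not publish a submission API like this.
# The endpoint, field names, and token handling are illustrative stand-ins.
import requests

def submit_adverse_event(event, api_token):
    """Bundle data the app already holds and send it to FDA in one call."""
    payload = {
        "report_type": "device",                    # roughly mirrors form 3500 sections
        "patient": event["patient_demographics"],   # age, sex, weight, etc.
        "problem": event["problem_description"],    # narrative plus device identifiers
        "diagnostics": event["lab_results"],        # tests already sitting in the EMR
        "outcome": event["outcome"],
        "reporter": {"role": "physician", "contact": event["reporter_contact"]},
    }
    response = requests.post(
        "https://api.fda.example/v1/adverse-events",   # placeholder URL
        json=payload,
        headers={"Authorization": f"Bearer {api_token}"},
        timeout=30,
    )
    response.raise_for_status()
    return response.json()["report_id"]   # confirmation ID for the submitter's records
```

The specific fields don't matter; what matters is that the data is entered once, in the system where it already lives, and transmitted to FDA without anyone filling out a separate form.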

A couple of other programs would seem to support this approach:

  • The congressionally-mandated Sentinel Initiative, which requires FDA to set up programs to tap into active data streams, such as insurance claims databases, to detect potential safety signals
  • A 2012 White House directive for all Federal agencies to pursue the development of APIs as part of a broader "digital government" program

(Thanks to RF's Alec Gaffney for pointing out the White House directive.)

Perhaps FDA is already working on APIs for seamless adverse event reporting, but I could not find any evidence of their plans in this area. And even if they are, building a mobile app is still a waste of time and resources.

Sometimes being tech savvy means not jumping on the current tech trend: this is clearly one of those times. Let’s not have an app for that.

(Smartphone image via Flickr user DigiEnable.)

Wednesday, February 27, 2013

It's Not Them, It's You

Are competing trials slowing yours down? Probably not.

If they don't like your trial, EVERYTHING ELSE IN THE WORLD is competition for their attention.
Rahlyn Gossen has a provocative new blog post up on her website entitled "The Patient Recruitment Secret". In it, she makes a strong case for considering site commitment to a trial – in the form of their investment of time, effort, and interest – to be the single largest driver of patient enrollment.

The reasoning behind this idea is clear and quite persuasive:
Every clinical trial that is not yours is a competing clinical trial. 
Clinical research sites have finite resources. And with research sites being asked to take on more and more duties, those resources are only getting more strained. Here’s what this reality means for patient enrollment. 
If research site staff are working on other clinical trials, they are not working on your clinical trial. Nor are they working on patient recruitment for your clinical trial. To excel at patient enrollment, you need to maximize the time and energy that sites spend recruiting patients for your clinical trial.
Much of this fits together very nicely with a point I raised in a post a few months ago, showing that improvements in site enrollment performance may often be made at the expense of other trials.

However, I would add a qualifier to these discussions: the number of active "competing" trials at a site is not a reliable predictor of enrollment performance. In other words, selecting sites that are not working on a lot of other trials will in no way improve enrollment in your trial.

This is an important point because, as Gossen points out, asking for the number of other studies is a standard habit of sponsors and CROs on site feasibility questionnaires. In fact, many sponsors can get very hung up on competing trials – to the point of excluding potentially good sites that they feel are working on too many other things.

This came to a head recently when we were brought in to consult on a study experiencing significant enrollment difficulty. The sponsor was very concerned about competing trials at the sites – there was a belief that such competition was a big contributor to sluggish enrollment.

As part of our analysis, we collected updated information on competitive trials. Given the staggered nature of the trial's startup, we then calculated time-adjusted Net Patient Contributions for each site (for more information on that, see my write-up here).

We then cross-referenced competing trials against enrollment performance. The results were very surprising: the number of other trials showed no relationship to how the sites were doing. Here's the data:

Each site's enrollment performance plotted against the number of other trials it's running. Each site is a point: good enrollers (higher up) and poor enrollers (lower down) are virtually identical in terms of how many concurrent trials they were running. Competitive trials do not appear to substantially impact rates of enrollment.
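If you want to run the same check on your own portfolio, the analysis itself is straightforward. Here's a minimal sketch with invented numbers (not the actual study data): pair each site's count of concurrent trials from the feasibility questionnaire with its time-adjusted enrollment figure, and look at the rank correlation.

```python
# Illustrative sketch with invented numbers, not the actual study data.
from scipy.stats import spearmanr

# One entry per site: (number of concurrent "competing" trials,
#                      time-adjusted net patient contribution)
sites = [
    (2, 14.0), (9, 13.5), (1, 3.2), (12, 11.8), (5, 0.9),
    (7, 12.1), (3, 2.5), (10, 4.4), (4, 13.0), (8, 1.7),
]

competing = [n for n, _ in sites]
enrollment = [e for _, e in sites]

rho, p_value = spearmanr(competing, enrollment)
print(f"Spearman rho = {rho:.2f}, p = {p_value:.2f}")
# In our data the relationship was effectively nil: busy sites and quiet
# sites were spread evenly across the good and poor enrollers.
```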

Since running into this result, I've looked at the relationship between the number of competing trials in CRO feasibility questionnaires and final site enrollment for many of the trials we've worked on. In each case, the "competing" trials did not serve as even a weak predictor of eventual site performance.

I agree with Gossen's fundamental point that a site's interest and enthusiasm for your trial will help increase enrollment at that site. However, we need to do a better job of thinking about the best ways of measuring that interest to understand the magnitude of the effect that it truly has. And, even more importantly, we have to avoid reliance on substandard proxy measurements such as "number of competing trials", because those will steer us wrong in site selection. In fact, almost everything we tend to collect on feasibility questionnaires appears to be non-predictive and potentially misleading; but that's a post for another day.

[Image credit: research distractions courtesy of Flickr user ronocdh.]

Friday, February 8, 2013

The FDA’s Magic Meeting


Can you shed three years of pipeline flab with this one simple trick?

"There’s no trick to it ... it’s just a simple trick!" -Brad Goodman

Getting a drug to market is hard. It is hard in every way a thing can be hard: it takes a long time, it's expensive, it involves a process that is opaque and frustrating, and failure is a much more likely outcome than success. Boston pioneers pointing their wagons west in 1820 had far better prospects for seeing the Pacific Ocean than a new drug, freshly launched into human trials, will ever have for earning a single dollar in sales.

Exact numbers are hard to come by, but the semi-official industry estimates are: about 6-8 years, a couple billion dollars, and more than 80% chance of ultimate failure.

Is there a secret handshake? Should we bring doughnuts?
(We should probably bring doughnuts.)
Finding ways to reduce any of those numbers is one of the premier obsessions of the pharma R&D world. We explore new technologies and standards, consider moving our trials to sites in other countries, consider skipping the sites altogether and going straight to the patient, and hire patient recruitment firms* to speed up trial enrollment. We even invent words to describe our latest and awesomest attempts at making development faster, better, and cheaper.

But perhaps all we needed was another meeting.

A recent blog post from Anne Pariser, an Associate Director at FDA's Center for Drug Evaluation and Research, suggests that attending a pre-IND meeting can shave a whopping 3 years off your clinical development timeline:
For instance, for all new drugs approved between 2010 and 2012, the average clinical development time was more than 3 years faster when a pre-IND meeting was held than it was for drugs approved without a pre-IND meeting. 
For orphan drugs used to treat rare diseases, the development time for products with a pre-IND meeting was 6 years shorter on average or about half of what it was for those orphan drugs that did not have such a meeting.
That's it? A meeting? Cancel the massive CTMS integration – all we need are a couple tickets to DC?

Pariser's post appears to be an extension of an FDA presentation made at a joint NORD/DIA meeting last October. As far as I can tell, that presentation's not public, but it was covered by the Pink Sheet's Derrick Gingery on November 1. That presentation covered just 2010 and 2011, and actually showed a 5-year benefit for drugs with pre-IND meetings (Pariser references 2010-2012).

Consider that one VC-funded vendor** was recently spotted aggressively hyping the fact that its software reduced one trial’s timeline by 6 weeks. And here the FDA is telling us that a single sit-down saves an additional 150 weeks.

In addition, a second meeting – the End of Phase II meeting – saves another year, according to the NORD presentation.  Pariser does not include EOP2 data in her blog post.

So, time to charter a bus, load up the clinical and regulatory teams, and hit the road to Silver Spring?

Well, maybe. It probably couldn't hurt, and I'm sure it would be a great bonding experience, but there are some reasons to not take the numbers at face value.
  • We’re dealing with really small numbers here. The NORD presentation covers 54 drugs, and Pariser's appears to add 39 to that total. The fact that the time-savings data shifted so dramatically – from 5 years to 3 – tips us off to the fact that we probably have a lot of variance in the data. We also have no idea how many pre-IND meetings there were, so we don't know the relative sizes of the comparison groups.
  • It's a survivor-only data set. It doesn't include drugs that were terminated or rejected. FDA itself would never approve a clinical trial that only looked at patients who responded and then retroactively determined differences between them – that approach is clearly susceptible to survivorship bias.
  • It reports means. This is especially a problem given the small numbers being studied. It's entirely plausible that just one or two drugs that took a really long time are badly skewing the results. Medians with quartile ranges would have been a lot more enlightening here (a quick numerical sketch of the problem follows this list).
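To see why means are risky with numbers this small, here's a toy illustration using invented development times, not FDA's data: a single program that dragged on can move one group's average by nearly two years while barely touching its median.

```python
# Toy example with invented development times (in years), not FDA data.
from statistics import mean, median

# Hypothetical development times for drugs approved with a pre-IND meeting
with_meeting = [5.5, 6.0, 6.5, 7.0, 7.5]

# The same distribution, plus one outlier program that dragged on
without_meeting = [5.5, 6.0, 6.5, 7.0, 7.5, 18.0]

print(mean(with_meeting), median(with_meeting))        # 6.5, 6.5
print(mean(without_meeting), median(without_meeting))  # ~8.42, 6.75

# One slow program makes the "without meeting" group look almost two years
# worse on average, even though the typical drug in each group is nearly
# the same.
```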
All of the above make me question how big an impact this one meeting can really have. I'm sure it's a good thing, but it can't be quite this amazing, can it?

However, it would be great to see more of these metrics, produced in more detail, by the FDA. The agency does a pretty good job of reporting on its own performance – the PDUFA performance reports are a worthwhile read – but it doesn't publish much in the way of sponsor metrics. Given the constant clamor for new pathways and concessions from the FDA, it would be truly enlightening to see how well the industry is actually taking advantage of the tools it currently has.

As Gingery wrote in his article, "Data showing that the existing FDA processes, if used, can reduce development time is interesting given the strong effort by industry to create new methods to streamline the approval process." Gingery also notes that two new official sponsor-FDA meeting points have been added in the recently-passed FDASIA, so it would seem extremely worthwhile to have some ongoing, rigorous measurement of the usage of, and benefit from, these meetings.

Of course, even if these meetings are strongly associated with faster pipeline times, don’t be so sure that simply adding the meeting will cut your development so dramatically. Goodhart's Law tells us that performance metrics, when turned into targets, have a tendency to fail: in this case, whatever it was about the drug, or the drug company leadership, that prevented the meeting from happening in the first place may still prove to be the real factor in the delay.

I suppose the ultimate lesson here might be: If your drug doesn't have a pre-IND meeting because your executive management has the hubris to believe it doesn't need FDA input, then you probably need new executives more than you need a meeting.

[Image: Meeting pictured may not contain actual magic. Photo from FDA's Flickr stream.]

*  Disclosure: the author works for one of those.
** Under the theory that there is no such thing as bad publicity, no link will be provided.