Showing posts with label EMR.

Wednesday, April 17, 2013

But WHY is There an App for That?


FDA should get out of the data entry business.

There’s an app for that!

We've all heard that more than enough times. It started as a line in an ad and has exploded into one of the top meme-mantras of our time: if your organization doesn't have an app, it would seem, you'd better get busy developing one.

Submitting your coffee shop review? Yes!
Submitting a serious med device problem? Less so!
So the fact that the FDA is promising to release a mobile app for physicians to report adverse events with devices is hardly shocking. But it is disappointing.

The current process for physicians and consumers to voluntarily submit adverse event information about drugs or medical devices is a bit cumbersome. FDA's Form 3500 requests quite a lot of contextual data: patient demographics, specifics of the problem, any lab tests or diagnostics that were run, and the eventual outcome. That makes sense, because it helps the agency better understand the nature of the issue, and more data should provide a better ability to spot trends over time.

The drawback, of course, is that this makes data entry slower and more involved, which probably reduces the total number of adverse events reported – and, by most estimates, the number of reports is already far lower than the number of actual events.

And that’s the problem: converting a data-entry-intensive paper or online activity into a data-entry-intensive mobile app activity just modernizes the hassle. In fact, it probably makes it worse, as entering large amounts of free-form text is not, shall we say, a strong point of mobile apps.

The solution here is for FDA to get itself out of the data entry business. Adverse event information – and the critical contextual data to go with it – already exists in a variety of data streams. Rather than asking physicians and patients to re-enter this data, FDA should be working on interfaces for them to transfer the data that's already there. That means developing a robust set of Application Programming Interfaces (APIs) that can be used by the teams who are developing medical data apps – everything from hospital EMR systems, to physician reference apps, to patient medication and symptom tracking apps. Those applications are likely to have far more data inside them than FDA currently receives, so enabling more seamless transmission of that data should be a top priority.

(A simple analogy might be helpful here: when an application on your computer or phone crashes, the operating system generally bundles any diagnostic information together, then asks if you want to submit the error data to the manufacturer. FDA should be working with external developers on this type of "1-click" system rather than providing user-unfriendly forms to fill out.)
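Concretely, such an exchange might look something like the minimal sketch below. Everything in it is an assumption for illustration – FDA publishes no such endpoint today, and the URL, field names, and report structure are my own invention – but it shows how an app could bundle the Form 3500-style context it already holds and submit it in one step:

    import json
    import urllib.request

    def build_report(emr_record, device_id, problem_description):
        """Bundle Form 3500-style context the EMR already holds,
        rather than asking the physician to re-type it."""
        return {
            "device_id": device_id,
            "problem_description": problem_description,
            # Pulled automatically from the record, not re-entered by hand:
            "patient_demographics": emr_record.get("demographics"),
            "lab_results": emr_record.get("recent_labs"),
            "outcome": emr_record.get("outcome"),
        }

    def submit_report(report):
        """POST the bundled report to a hypothetical FDA endpoint."""
        req = urllib.request.Request(
            "https://api.fda.example/adverse-events",  # placeholder URL
            data=json.dumps(report).encode("utf-8"),
            headers={"Content-Type": "application/json"},
            method="POST",
        )
        with urllib.request.urlopen(req) as resp:
            return resp.status

Under a model like this, the physician's entire data-entry burden shrinks to a single confirmation click; the contextual data rides along automatically.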

A couple of other programs would seem to support this approach:

  • The congressionally-mandated Sentinel Initiative, which requires FDA to set up programs to tap into active data streams, such as insurance claims databases, to detect potential safety signals
  • A 2012 White House directive for all Federal agencies to pursue the development of APIs as part of a broader "digital government" program

(Thanks to RF's Alec Gaffney for pointing out the White House directive.)

Perhaps FDA is already working on APIs for seamless adverse event reporting, but I could not find any evidence of their plans in this area. And even if they are, building a mobile app is still a waste of time and resources.

Sometimes being tech savvy means not jumping on the current tech trend: this is clearly one of those times. Let’s not have an app for that.

(Smartphone image via Flickr user DigiEnable.)

Monday, January 14, 2013

Magical Thinking in Clinical Trial Enrollment


The many flavors of wish-based patient recruitment.

[Hopefully-obvious disclosure: I work in the field of clinical trial enrollment.]

When I'm discussing and recommending patient recruitment strategies with prospective clients, there is only one serious competitor I'm working against. I do not tailor my presentations in reaction to what other Patient Recruitment Organizations are saying, because they're not usually the thing that causes me the most problems. In almost all cases, when we lose out on a new study opportunity, we have lost to one opponent:

Need patients? Just add water!
Magical thinking.

Magical thinking comes in many forms, but in clinical trial enrollment it traditionally has two dominant flavors:

  • We won’t have any problems with enrollment because we have made it a priority within our organization.
    (This translates to: "we want it to happen, therefore it has to happen, therefore it will happen", but it doesn't sound quite as convincing that way, does it?)
  • We have selected sites that already have access to a large number of the patients we need.
    (I hear this pretty much 100% of the time. Even from people who understand that every trial is different and that past site performance is simply not a great predictor of future performance.)

A new form of magical thinking burst onto the scene a few years ago: the belief that the Internet will enable us to target and engage exactly the right patients. Specifically, some teams (aided by the, shall we say, less-than-completely-totally-true claims of "expert" vendors) began to believe that the web's great capacity to narrowly target specific people – through Google search advertising, online patient communities, and general social media activities – would prove more than enough to deliver large numbers of trial participants. And deliver them fast and cheap to boot. Sadly, evidence has already started to emerge about the Internet's failure to be a panacea for slow enrollment. As I and others have pointed out, online recruitment can certainly be cost effective, but it cannot be relied on to generate a sizable response. As a sole source, it tends to underdeliver even for small trials.

I think we are now seeing the emergence of the newest flavor of magical thinking: Big Data. Take this quote from recent coverage of the JP Morgan Healthcare Conference:
For instance, Phase II, that ever-vexing rubber-road matchmaker for promising compounds that just might be worthless. Identifying the right patients for the right drug can make or break a Phase II trial, [John] Reynders said, and Big Data can come in handy as investigators distill mountains of imaging results, disease progression readings and genotypic traits to find their target participants. 
The prospect of widespread genetic mapping coupled with the power of Big Data could fundamentally change how biotech does R&D, [Alexis] Borisy said. "Imagine having 1 million cancer patients profiled with data sets available and accessible," he said. "Think how that very large data set might work--imagine its impact on what development looks like. You just look at the database and immediately enroll a trial of ideal patients."
Did you follow the logic of that last sentence? You immediately enroll ideal patients ... and all you had to do was look at a database! Problem solved!

Before you go rushing off to get your company some Big Data, please consider the fact that the overwhelming majority of Phase 2 trials do not have a neat, predefined set of genotypic traits they're looking to enroll. In fact, narrowly-tailored Phase 2 trials (such as the recent registration trials of Xalkori and Zelboraf) already enroll very quickly, without the need for big databases. The reality for most drugs is exactly the opposite: they enter Phase 2 actively looking for signals that will help identify subgroups that benefit from the treatment.

Also, it’s worth pointing out that having a million data points in a database does not mean that you have a million qualified, interested, and nearby patients just waiting to be enrolled in your trial. As recent work in medical record queries bears out, the yield from these databases promises to be low, and there are enormous logistic, regulatory, and personal challenges in identifying, engaging, and consenting the actual human beings represented by the data.
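To see why, try a back-of-the-envelope funnel. Every conversion rate below is an illustrative assumption rather than a measured value, but the shape of the attrition is the point:

    def funnel(records, stages):
        """Walk a candidate pool through successive attrition stages."""
        remaining = records
        for stage, rate in stages:
            remaining = int(remaining * rate)
            print(f"{stage:<30}{remaining:>10,}")
        return remaining

    funnel(1_000_000, [
        ("matches full entry criteria", 0.05),
        ("lives near an active site", 0.20),
        ("reachable and interested", 0.30),
        ("passes on-site screening", 0.50),
        ("signs informed consent", 0.80),
    ])
    # roughly 1,200 enrollable patients from 1,000,000 records

Even with generous assumptions, a million profiles melts into roughly a thousand enrollable patients.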

More, even fresher flavors of magical thinking are sure to emerge over time. Our urge to hope that our problems will just be washed away in a wave of cool new technology is just too powerful to resist.

However, when the trial is important, and the costs of delay are high, clinical teams need to set the wishful thinking aside and ask for a thoughtful plan based on hard evidence. Fortunately, that requires no magic bean purchase.

Magic Beans picture courtesy of Flickr user sleepyneko.

Thursday, August 16, 2012

Clinical Trial Alerts: Nuisance or Annoyance?


Will physicians change their answers when tired of alerts?

I am an enormous fan of electronic medical records (EMRs). Or rather, more precisely, I am an enormous fan of what EMRs will someday become – current versions tend to leave a lot to be desired. Reaction to these systems among physicians I've spoken with has generally ranged from "annoying" to "*$%#^ annoying", and my experience does not seem to be at all unique.

The (eventual) promise of EMRs in identifying eligible clinical trial participants is twofold:

First, we should be able to query existing patient data to identify a set of patients who closely match the inclusion and exclusion criteria for a given clinical trial. In reality, however, many EMRs are not easy to query, and the data inside them isn’t as well-structured as you might think. (The phenomenon of "shovelware" – masses of paper records scanned and dumped into the system as quickly and cheaply as possible – has been greatly exacerbated by governments providing financial incentives for the immediate adoption of EMRs.)
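For what it's worth, here is the kind of eligibility query a well-structured EMR could support. The table and column names are invented for illustration; the point is that a query like this only works when the underlying data is coded, not shovelware:

    import sqlite3

    ELIGIBILITY_QUERY = """
        SELECT patient_id
        FROM patients
        WHERE age BETWEEN 18 AND 75           -- inclusion: adult
          AND diagnosis_code LIKE 'I63%'      -- inclusion: ischemic stroke
          AND months_since_diagnosis <= 6     -- inclusion: recent event
          AND on_anticoagulant = 0            -- exclusion: anticoagulated
    """

    def find_candidates(db_path):
        """Return patient IDs matching the structured trial criteria."""
        with sqlite3.connect(db_path) as conn:
            return [row[0] for row in conn.execute(ELIGIBILITY_QUERY)]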

Second, we should be able to identify potential patients when they’re physically at the clinic for a visit, which is really the best possible moment. Hence the Clinical Trial Alert (CTA): a pop-up or other notification within the EMR that the patient may be eligible for a trial. The major issue with CTAs is the annoyance factor – physicians tend to feel that they disrupt their natural clinical routine, making each patient visit less efficient. Multiple alerts per patient can be especially frustrating, resulting in "alert overload".
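In sketch form, a CTA is just a cheap prescreening rule evaluated when a chart is opened. The trial registry and patient fields below are hypothetical – real EMRs differ widely – but the mechanics look roughly like this:

    from dataclasses import dataclass, field

    @dataclass
    class Trial:
        id: str
        prescreen: callable               # cheap rule over structured fields
        questions: list = field(default_factory=list)

    # Hypothetical registry; a real EMR would load this from configuration.
    ACTIVE_TRIALS = [
        Trial(
            id="stroke-trial-001",
            prescreen=lambda p: p.get("diagnosis") == "stroke/TIA",
            questions=[
                "Did the patient have a stroke/TIA in the last 6 months?",
                "Is the patient willing to undergo further screening?",
            ],
        ),
    ]

    def alerts_for_visit(patient):
        """Return the CTA pop-ups to show when this chart is opened."""
        return [
            {"trial_id": t.id, "questions": t.questions}
            for t in ACTIVE_TRIALS
            if t.prescreen(patient)
            and t.id not in patient.get("dismissed_ctas", [])
        ]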

A very intriguing study recently published in the Journal of the American Medical Informatics Association sought to measure a related issue: alert fatigue, or the tendency for CTAs to lose their effectiveness over time. The response rate to the alerts did decrease steadily over time, but the authors were mildly optimistic in their assessment, noting that the response rate was still respectable after 36 weeks – somewhere around 30%:

[Figure: CTA response rate over 36 weeks]
However, what really struck me here is that the referral rate – the rate at which the alert actually resulted in a research coordinator being brought in – dropped much more precipitously than the response rate:

[Figure: CTA referral rate over 36 weeks]
This is remarkable considering that the alert consisted of only two yes/no questions. Answering either question was considered a "response", and answering "yes" to both was considered a "referral" (this bookkeeping is sketched in code after the list):

  • Did the patient have a stroke/TIA in the last 6 months?
  • Is the patient willing to undergo further screening with the research coordinator?
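Here is that bookkeeping in sketch form, run over a made-up log of alert outcomes (None meaning the physician ignored the question entirely):

    def summarize(alert_log):
        """Compute response and referral rates from CTA outcomes."""
        responses = [a for a in alert_log
                     if a["recent_stroke"] is not None
                     or a["willing"] is not None]
        referrals = [a for a in alert_log
                     if a["recent_stroke"] and a["willing"]]
        return {
            "response_rate": len(responses) / len(alert_log),
            "referral_rate": len(referrals) / len(alert_log),
        }

    example_log = [
        {"recent_stroke": True, "willing": True},   # response and referral
        {"recent_stroke": True, "willing": False},  # response only
        {"recent_stroke": None, "willing": None},   # ignored alert
    ]
    print(summarize(example_log))  # response 2/3, referral 1/3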

The only plausible explanation for referrals dropping faster than responses is that repeated exposure to the CTA led the physicians to more frequently mark patients as unwilling to participate. (This was not actual patient fatigue: the few patients who were the subject of multiple CTAs had their second alert removed from the analysis.)

So, it appears that some physicians remained nominally compliant with the system, but avoided the extra work involved in discussing a clinical trial option by simply marking the patient as uninterested. This has some interesting implications for how we track physician interaction with EMRs and CTAs, as basic compliance metrics may be undermined by users tending towards a path of least resistance.

Embi PJ, & Leonard AC (2012). Evaluating alert fatigue over time to EHR-based clinical trial alerts: findings from a randomized controlled study. Journal of the American Medical Informatics Association, 19(e1). PMID: 22534081