
Thursday, August 16, 2012

Clinical Trial Alerts: Nuisance or Annoyance?


Will physicians change their answers when tired of alerts?

I am an enormous fan of electronic medical records (EMRs). Or, more precisely, I am an enormous fan of what EMRs will someday become – current versions tend to leave a lot to be desired. Reaction to these systems among the physicians I’ve spoken with has generally ranged from "annoying" to "*$%#^ annoying", and my experience does not seem to be unique.

The (eventual) promise of EMRs in identifying eligible clinical trial participants is twofold:

First, we should be able to query existing patient data to identify a set of patients who closely match the inclusion and exclusion criteria for a given clinical trial. In reality, however, many EMRs are not easy to query, and the data inside them isn’t as well-structured as you might think. (The phenomenon of "shovelware" – masses of paper records scanned and dumped into the system as quickly and cheaply as possible – has been greatly exacerbated by governments providing financial incentives for the immediate adoption of EMRs.)
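To make that first promise concrete, here is a minimal sketch of what such a query might look like against a cleanly structured record store. Everything here is hypothetical – the table and column names (patients, diagnoses, trial_enrollments, icd10_code) are invented for illustration, and real EMR schemas are rarely this tidy:

```python
import sqlite3

def find_candidates(conn: sqlite3.Connection) -> list[tuple]:
    """Return patients matching a stroke-trial-style screen.

    Hypothetical schema: this is what querying an EMR *should* feel
    like, not what it usually does.
    """
    query = """
        SELECT p.id, p.name
        FROM patients p
        JOIN diagnoses d ON d.patient_id = p.id
        WHERE d.icd10_code LIKE 'I63%'                  -- inclusion: ischemic stroke
          AND d.onset_date >= date('now', '-6 months')  -- inclusion: recent onset
          AND p.id NOT IN (SELECT patient_id
                           FROM trial_enrollments)      -- exclusion: already enrolled
    """
    return conn.execute(query).fetchall()
```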

Second, we should be able to identify potential patients when they’re physically at the clinic for a visit, which is really the best possible moment. Hence the Clinical Trial Alert (CTA): a pop-up or other notification within the EMR that the patient may be eligible for a trial. The major issue with CTAs is the annoyance factor – physicians tend to feel that they disrupt their natural clinical routine, making each patient visit less efficient. Multiple alerts per patient can be especially frustrating, resulting in "alert overload".
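As a rough illustration of the mechanism (not how any particular EMR implements it), the core of a CTA is an eligibility check that fires when the chart is opened; suppressing repeat alerts for the same patient is one simple guard against alert overload. All names here are hypothetical:

```python
# Patients who have already been alerted for this trial.
_alerted_patients: set[str] = set()

def show_alert(patient_id: str) -> None:
    """Hypothetical stand-in for the EMR's pop-up mechanism."""
    print(f"CTA: patient {patient_id} may be eligible for a clinical trial")

def maybe_fire_cta(patient_id: str, is_eligible: bool) -> bool:
    """Fire at most one clinical trial alert per patient for a given trial."""
    if is_eligible and patient_id not in _alerted_patients:
        _alerted_patients.add(patient_id)
        show_alert(patient_id)
        return True
    return False
```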

A very intriguing study recently published in the Journal of the American Medical Informatics Association set out to measure a related issue: alert fatigue, or the tendency for CTAs to lose their effectiveness over time. The response rate to the alerts did decrease steadily over the study period, but the authors were mildly optimistic in their assessment, noting that the response rate was still respectable – somewhere around 30% – after 36 weeks:

[Figure: response rate to CTAs, declining gradually over 36 weeks]

However, what really struck me here is that the referral rate – the rate at which the alert resulted in a referral to the research coordinator – dropped much more precipitously than the response rate:

[Figure: referral rate to CTAs, falling much more steeply over the same period]

This is remarkable considering that the alert consisted of only two yes/no questions. Answering either question was considered a "response", and answering "yes" to both questions was considered a "referral" (the classification is restated as code after the list):

  • Did the patient have a stroke/TIA in the last 6 months?
  • Is the patient willing to undergo further screening with the research coordinator?
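Restated in code (the field names are mine, not the paper's), the classification is simply:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AlertAnswers:
    recent_stroke: Optional[bool]      # Q1: stroke/TIA in the last 6 months?
    willing_to_screen: Optional[bool]  # Q2: willing to undergo further screening?

def is_response(answers: AlertAnswers) -> bool:
    """Answering either question counts as a response."""
    return answers.recent_stroke is not None or answers.willing_to_screen is not None

def is_referral(answers: AlertAnswers) -> bool:
    """Answering "yes" to both questions counts as a referral."""
    return answers.recent_stroke is True and answers.willing_to_screen is True
```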

The most plausible explanation for referrals dropping faster than responses is that repeated exposure to the CTA led physicians to mark patients as unwilling to participate more and more often. (This was not actual patient fatigue: the few patients who were the subjects of multiple CTAs had their second alert removed from the analysis.)

So, it appears that some physicians remained nominally compliant with the system, but avoided the extra work involved in discussing a clinical trial option by simply marking the patient as uninterested. This has some interesting implications for how we track physician interaction with EMRs and CTAs, as basic compliance metrics may be undermined by users tending towards a path of least resistance.
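One practical upshot: if you instrument CTAs, it is worth tracking the referral rate alongside the raw response rate, since divergence between the two is exactly the signature of this path-of-least-resistance behavior. A hypothetical sketch of that bookkeeping (the record format is invented for illustration):

```python
from collections import defaultdict

def weekly_rates(alerts: list[dict]) -> dict[int, tuple[float, float]]:
    """Map week number -> (response rate, referral rate) among fired alerts.

    Each alert record is assumed to look like:
        {"week": int, "responded": bool, "referred": bool}
    """
    fired: dict[int, int] = defaultdict(int)
    responded: dict[int, int] = defaultdict(int)
    referred: dict[int, int] = defaultdict(int)
    for alert in alerts:
        week = alert["week"]
        fired[week] += 1
        responded[week] += alert["responded"]   # bools count as 0/1
        referred[week] += alert["referred"]
    return {week: (responded[week] / fired[week], referred[week] / fired[week])
            for week in sorted(fired)}
```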

ResearchBlogging.org Embi PJ, & Leonard AC (2012). Evaluating alert fatigue over time to EHR-based clinical trial alerts: findings from a randomized controlled study. Journal of the American Medical Informatics Association, 19(e1). PMID: 22534081