Thursday, August 16, 2012

Clinical Trial Alerts: Nuisance or Annoyance?


Will physicians change their answers when tired of alerts?

I am an enormous fan of electronic medical records (EMRs). Or rather, more precisely, I am an enormous fan of what EMRs will someday become – current versions tend to leave a lot to be desired. Reaction to these systems among physicians I’ve spoken with has generally ranged from "annoying" to "*$%#^ annoying", and my experience does not seem to be at all unique.

The (eventual) promise of EMRs in identifying eligible clinical trial participants is twofold:

First, we should be able to query existing patient data to identify a set of patients who closely match the inclusion and exclusion criteria for a given clinical trial. In reality, however, many EMRs are not easy to query, and the data inside them isn’t as well-structured as you might think. (The phenomenon of "shovelware" – masses of paper records scanned and dumped into the system as quickly and cheaply as possible – has been greatly exacerbated by governments providing financial incentives for the immediate adoption of EMRs.)
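
To make that first point a bit more concrete, here is a minimal pre-screening sketch in Python – purely illustrative, and assuming the EMR exposes clean structured fields. Every field name and criterion below (age, icd10_codes, egfr, on_anticoagulant) is hypothetical; real EMR data is rarely this tidy, which is exactly the problem.

    # Illustrative pre-screen of structured EMR data against trial criteria.
    # All field names and thresholds are hypothetical examples.
    def meets_criteria(patient):
        """Return True if the patient matches the example inclusion/exclusion rules."""
        inclusion = (
            18 <= patient["age"] <= 75
            and "I63" in patient["icd10_codes"]   # e.g., ischemic stroke diagnosis
            and patient["egfr"] >= 30             # e.g., adequate renal function
        )
        exclusion = patient["on_anticoagulant"]   # e.g., a prohibited concomitant medication
        return inclusion and not exclusion

    patients = [
        {"age": 64, "icd10_codes": {"I63", "E11"}, "egfr": 55, "on_anticoagulant": False},
        {"age": 82, "icd10_codes": {"I63"}, "egfr": 40, "on_anticoagulant": False},
    ]
    eligible = [p for p in patients if meets_criteria(p)]
    print(f"{len(eligible)} of {len(patients)} patients pre-screen as eligible")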

Second, we should be able to identify potential patients when they’re physically at the clinic for a visit, which is really the best possible moment. Hence the Clinical Trial Alert (CTA): a pop-up or other notification within the EMR that the patient may be eligible for a trial. The major issue with CTAs is the annoyance factor – physicians tend to feel that they disrupt their natural clinical routine, making each patient visit less efficient. Multiple alerts per patient can be especially frustrating, resulting in "alert overload".

A very intriguing study recently published in the Journal of the American Medical Informatics Association set out to measure a related issue: alert fatigue, or the tendency for CTAs to lose their effectiveness over time. The response rate to the alerts did indeed decrease steadily over time, but the authors were mildly optimistic in their assessment, noting that the response rate was still respectable after 36 weeks – somewhere around 30%.


However, what really struck me here is that the referral rate – the rate at which the alert actually resulted in a referral to the research coordinator – dropped much more precipitously than the response rate.


This is remarkable considering that the alert consisted of only two yes/no questions. Answering either question was considered a "response", and answering "yes" to both questions was considered a "referral".

  • Did the patient have a stroke/TIA in the last 6 months?
  • Is the patient willing to undergo further screening with the research coordinator?
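
For what it’s worth, here is how I read the mapping from those two answers to the study’s two outcome measures, sketched in Python (the function and variable names are mine, not the authors’):

    # A "response" means the physician answered the alert at all;
    # a "referral" requires "yes" to both questions. None = left unanswered.
    def classify_alert(recent_stroke, willing_to_screen):
        response = recent_stroke is not None or willing_to_screen is not None
        referral = recent_stroke is True and willing_to_screen is True
        return response, referral

    print(classify_alert(True, True))    # (True, True):   response and referral
    print(classify_alert(True, False))   # (True, False):  response, no referral
    print(classify_alert(None, None))    # (False, False): alert ignored entirely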

The only plausible explanation for referrals dropping faster than responses is that repeated exposure to the CTA led physicians to mark patients as unwilling to participate more and more often. (This was not actual patient fatigue: the few patients who were the subject of multiple CTAs had their second alert removed from the analysis.)

So, it appears that some physicians remained nominally compliant with the system, but avoided the extra work involved in discussing a clinical trial option by simply marking the patient as uninterested. This has some interesting implications for how we track physician interaction with EMRs and CTAs: basic compliance metrics can be undermined by users taking the path of least resistance.

ResearchBlogging.org Embi PJ, & Leonard AC (2012). Evaluating alert fatigue over time to EHR-based clinical trial alerts: findings from a randomized controlled study. Journal of the American Medical Informatics Association: JAMIA, 19(e1). PMID: 22534081

Monday, August 13, 2012

Most* Clinical Trials Are Too** Small

* for some value of "most"
** for some value of "too"


[Note: this is a companion to a previous post, Clouding the Debate on Clinical Trials: Pediatric Edition.]

Are many current clinical trials underpowered? That is, will they enroll too few patients to adequately answer the research question they were designed to address? Are we wasting time and money – and even worse, the time and effort of researchers and patient-volunteers – by conducting research that is essentially doomed to produce clinically useless results?

That is the alarming upshot of the coverage on a recent study published in the Journal of the American Medical Association. This Duke Medicine News article was the most damning in its denunciation of the current state of clinical research:
Duke: Mega-Trial experts concerned that not enough trials are mega-trials
Large-Scale Analysis Finds Majority of Clinical Trials Don't Provide Meaningful Evidence

The largest comprehensive analysis of ClinicalTrials.gov finds that clinical trials are falling short of producing high-quality evidence needed to guide medical decision-making.

The study was also covered in many industry publications, as well as in the mainstream news. Those stories were less sweeping in their indictment of the "clinical trial enterprise", but carried the same main theme: that an "analysis" had determined that most current clinical trials were "too small".

I have only one quibble with this coverage: the study in question didn’t demonstrate any of these points. At all.

The study is a simple listing of gross characteristics of interventional trials registered over a 6-year period. It is entirely descriptive, limiting itself to data entered by the trial sponsor as part of registration on ClinicalTrials.gov. It contains no information on the quality of the trials themselves.

That last part can’t be emphasized enough: the study contains no quality benchmarks. No analysis of trial design. No benchmarking of the completeness or accuracy of the data collected. No assessment of the clinical utility of the evidence produced. Nothing like that at all.

So, the question that nags at me is: how did we get from A to B? How did this mildly-interesting-and-entirely-descriptive data listing transform into a wholesale (and entirely inaccurate) denunciation of clinical research?

For starters, the JAMA authors divide registered trials into 3 enrollment groups: 1-100, 101-1000, and >1000. I suppose this is fine, although it should be noted that it is entirely arbitrary – there is no particular reason to divide things up this way, except perhaps a fondness for neat round numbers.

Trials within the first group are then labeled "small". No effort is made to explain why 100 patients represents a clinically important break point, but the authors feel confident in concluding that clinical research is "dominated by small clinical trials", because 62% of registered trials fit into this newly-invented category. From there, all you need is a completely vague yet ominous quote from the lead author. As US News put it:
The new report says 62 percent of the trials from 2007-2010 were small, with 100 or fewer participants. Only 4 percent had more than 1,000 participants.

"There are 330 new clinical trials being registered every week, and a number of them are very small and probably not as high quality as they could be," [lead author Dr Robert] Califf said.
"Probably not as high quality as they could be", while just vague enough to be unfalsifiable, is also not at all a consequence of the data as reported. So, through a chain of arbitrary decisions and innuendo, "less than 100" becomes "small" becomes "too small" becomes "of low quality".

Califf’s institution, Duke, appears to be particularly guilty of driving this evidence-free overinterpretation of the data, as seen in the sensationalistic headline and lede quoted above. However, it’s clear that Califf himself is blurring the distinction between what his study showed and what it didn’t:
"Analysis of the entire portfolio will enable the many entities in the clinical trials enterprise to examine their practices in comparison with others," says Califf. "For example, 96 percent of clinical trials have ≤1000 participants, and 62 percent have ≤ 100. While there are many excellent small clinical trials, these studies will not be able to inform patients, doctors, and consumers about the choices they must make to prevent and treat disease."
Maybe he’s right that these small studies will not be able to inform patients and doctors, but his study has provided absolutely no support for that statement.

When we build a protocol, there are actually only 3 major factors that go into determining how many patients we want to enroll:
  1. How big a difference we estimate the intervention will have compared to a control (the effect size)
  2. How much risk we’ll accept that we’ll get a false-positive (alpha) or false-negative (beta) result
  3. Occasionally, whether we need to add participants to better characterize safety and tolerability (as is frequently, and quite reasonably, requested by FDA and other regulators)
Quantity is not quality: enrolling too many participants in an investigational trial is unethical and a waste of resources. If the numbers determine that we should randomize 80 patients, it would make absolutely no sense to randomize 21 more so that the trial is no longer "too small". Those 21 participants could be enrolled in another trial, to answer another worthwhile question.
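
To make that arithmetic concrete, here is a rough sketch of how the first two factors translate into an enrollment number, using the standard normal-approximation formula for a two-arm comparison. The effect size, alpha, and power values are illustrative only, not taken from any real protocol.

    # Illustrative two-arm sample size calculation: patients per arm needed to
    # detect a standardized effect size at a chosen alpha and power.
    from scipy.stats import norm

    def n_per_arm(effect_size, alpha=0.05, power=0.80):
        z_alpha = norm.ppf(1 - alpha / 2)   # two-sided false-positive threshold
        z_beta = norm.ppf(power)            # protection against a false negative
        return 2 * (z_alpha + z_beta) ** 2 / effect_size ** 2

    print(round(n_per_arm(0.63)))    # ~40 per arm, i.e. about 80 patients total
    print(round(n_per_arm(0.315)))   # halve the assumed effect: ~158 per arm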

So the answer to "how big should a trial be?" is "exactly as big as it needs to be." Taking descriptive statistics and applying normative categories to them is unhelpful, and does not make for better research policy.


ResearchBlogging.org Califf RM, Zarin DA, Kramer JM, Sherman RE, Aberle LH, & Tasneem A (2012). Characteristics of clinical trials registered in ClinicalTrials.gov, 2007-2010. JAMA: The Journal of the American Medical Association, 307(17), 1838-47. PMID: 22550198

Wednesday, August 8, 2012

Testing Transparency with the TEST Act

A quick update on my last post regarding the enormously controversial – but completely unmentioned – requirement to publicly report all versions of clinical trial protocols on ClinicalTrials.gov: The New England Journal of Medicine has weighed in with an editorial strongly in support of the TEST Act.

NEJM Editor-in-Chief Jeffrey Drazen at least mentions the supporting documents requirement, but only in part of one sentence, where he confusingly refers to the act "extending results reporting to include the deposition of consent and protocol documents approved by institutional review boards." The word "deposition" does not suggest actual publication, which the act clearly requires. 

I don't think this does much to improve transparency about the impact the TEST Act, as written, would have. I'm not surprised when a trade publication like Center Watch recycles a press release into a news item. However, it doesn't seem like too much to ask that NEJM editorials aspire to a moderately higher standard of critical inquiry.