Wednesday, May 15, 2013

Placebos: Banned in Helsinki?


One of the unintended consequences of my (admittedly, somewhat impulsive) decision to name this blog as I did is that I get a fair bit of traffic from Google: people searching for placebo-related information.

Some recent searches have been about the proposed new revisions to the Declaration of Helsinki, and how the new draft version will prohibit or restrict the use of placebo controls in clinical trials. This was a bit puzzling, given that the publicly-released draft revisions [PDF] didn't appear to substantially change the DoH's placebo section.

Much of the confusion appears to be caused by a couple of sources. First, the popular Pharmalot blog (whose approach to critical analysis I've noted before as being ... well ... occasionally unenthusiastic) covered it thus:
The draft, which was released earlier this week, is designed to update a version that was adopted in 2008 and many of the changes focus on the use of placebos. For instance, placebos are only permitted when no proven intervention exists; patients will not be subject to any risk or there must be ‘compelling and sound methodological reasons’ for using a placebo or less effective treatment.
This isn't a good summary of the changes, since the “for instance” items are for the most part slight re-wordings from the 2008 version, which itself didn't change much from the version adopted in 2000.

To see what I mean, take a look at the change-tracked version of the placebo section:
The benefits, risks, burdens and effectiveness of a new intervention must be tested against those of the best current proven intervention(s), except in the following circumstances: 
  • The use of placebo, or no treatment intervention is acceptable in studies where no current proven intervention exists; or 
  • Where for compelling and scientifically sound methodological reasons the use of any intervention less effective than the best proven one, placebo or no treatment is necessary to determine the efficacy or safety of an intervention and the patients who receive any intervention less effective than the best proven one, placebo or no treatment will not be subject to any additional risks of serious or irreversible harm as a result of not receiving the best proven intervention 
Extreme care must be taken to avoid abuse of this option.
Really, there is only one significant change to this section: the strengthening of the reference to “best proven intervention”. The phrase was already in the first sentence; it has now been added to the later conditions as well. This is a reference to the use of active (non-placebo) comparators that are not the “best proven” intervention.

So, ironically, the biggest change to the placebo section is not about placebos at all.

This is a bit unfortunate, because to me it detracts from the overall clarity of the section: it's no longer exclusively about placebo, despite still being titled “Use of Placebo”. The DoH has been consistently criticized during previous rounds of revision for becoming progressively less organized and coherent, and this section certainly reads like a rambling list of semi-related thoughts – a classic “document by committee”. This lack of structure and clarity hurts the DoH's effectiveness in shaping the world's approach to ethical clinical research.

Even worse, the revisions continue to leave unresolved the very real divisions that exist in ethical beliefs about placebo use in trials. The really dramatic revision to the placebo section happened over a decade ago, with the 2000 revision. Those changes, which introduced much of the strict wording in the current version, were extremely controversial, and resulted in the issuance of an extraordinary “Note of Clarification” that effectively softened the new and inflexible language. The 2008 version absorbed the wording from the Note of Clarification, and the resulting document is now vague enough that it is interpreted quite differently in different countries. (For more on the revision history and controversy, see this comprehensive review.)

The 2013 revision could have been an opportunity to try again to build a consensus around placebo use. At the very least, it could have acknowledged and clarified the division of beliefs on the topic. Instead, it sticks to ambiguous phrasing that will continue to support multiple conflicting interpretations. That does not serve the goal of ensuring the ethical conduct of clinical trials.

Ezekiel Emanuel has been a long-time critic of the DoH's lack of clarity and structure. Earlier this month, he published a compact but forceful review of the ways in which the Declaration has been weakened by its long series of revisions:
Over the years problems with, and objections to, the document have accumulated. I propose that there are nine distinct problems with the current version of the Declaration of Helsinki: it has an incoherent structure; it confuses medical care and research; it addresses the wrong audience; it makes extraneous ethical provisions; it includes contradictions; it contains unnecessary repetitions; it uses multiple and poor phrasings; it includes excessive details; and it makes unjustified, unethical recommendations.
Importantly, Emanuel also includes a proposed revision and restructuring of the DoH. In his version, much of the current wording around placebo use is retained, but it is absorbed into the larger concept of “Scientific Validity”, which adds important context to the decision of how to select a comparator arm in general.

Here is Emanuel’s suggested revision:
Scientific Validity:  Research in biomedical and other sciences involving human participants must conform to generally accepted scientific principles, be based on a thorough knowledge of the scientific literature, other relevant sources of information, and suitable laboratory, and as necessary, animal experimentation.  Research must be conducted in a manner that will produce reliable and valid data.  To produce meaningful and valid data new interventions should be tested against the best current proven intervention. Sometimes it will be appropriate to test new interventions against placebo, or no treatment, when there is no current proven intervention or, where for compelling and scientifically sound methodological reasons the use of placebo is necessary to determine the efficacy and/or safety of an intervention and the patients who receive placebo, or no treatment, will not be subject to excessive risk or serious irreversible harm.  This option should not be abused.
Here, the scientific rationale for the use of placebo is placed in the greater context of selecting a control arm, which is itself subordinate to the ethical imperative to conduct only studies that are scientifically valid. One can quibble with the wording (I still have issues with the phrase “best proven” intervention, which I think is far too vague here, as it is in the DoH, and glosses over some significant problems), but structurally this is a lot stronger, and provides firmer grounding for ethical decision making.

Emanuel, E. (2013). Reconsidering the Declaration of Helsinki. The Lancet, 381(9877), 1532–1533. DOI: 10.1016/S0140-6736(13)60970-8
[Image: Extra-strength chill pill, modified by the author, based on an original image by Flickr user mirjoran.]

Wednesday, April 17, 2013

But WHY is There an App for That?


FDA should get out of the data entry business.

There’s an app for that!

We've all heard that more than enough times. It started as a line in an ad and has exploded into one of the top meme-mantras of our time: if your organization doesn't have an app, it would seem, you'd better get busy developing one.

Submitting your coffee shop review? Yes!
Submitting a serious med device problem? Less so!
So the fact that the FDA is promising to release a mobile app for physicians to report adverse events with devices is hardly shocking. But it is disappointing.

The current process for physicians and consumers to voluntarily submit adverse event information about drugs or medical devices is a bit cumbersome. The FDA's Form 3500 requests quite a lot of contextual data: patient demographics, specifics of the problem, any lab tests or diagnostics that were run, and the eventual outcome. That makes sense, because it helps the agency better understand the nature of the issue, and more data should make it easier to spot trends over time.

The drawback, of course, is that this makes data entry slower and more involved, which probably reduces the total number of adverse events reported – and, by most estimates, the number of reports is already far lower than the number of actual events.

And that’s the problem: converting a data-entry-intensive paper or online activity into a data-entry-intensive mobile app activity just modernizes the hassle. In fact, it probably makes it worse, as entering large amounts of free-form text is not, shall we say, a strong point of mobile apps.

The solution here is for FDA to get itself out of the data entry business. Adverse event information – and the critical contextual data to go with it – already exist in a variety of data streams. Rather than asking physicians and patients to re-enter this data, FDA should be working on interfaces for them to transfer the data that’s already there. That means developing a robust set of Application Programming Interfaces (APIs) that can be used by the teams who are developing medical data apps – everything from hospital EMR systems, to physician reference apps, to patient medication and symptom tracking apps. Those applications are likely to have far more data inside them than FDA currently receives, so enabling more seamless transmission of that data should be a top priority.
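To make this concrete, here's a minimal sketch of what a one-click submission through such an API might look like from an app developer's side. Everything here is hypothetical – the endpoint URL, field names, and authentication scheme are stand-ins, since FDA has published no such API:

```python
import requests

# Hypothetical endpoint - a stand-in for illustration, not a real FDA service.
FDA_AE_ENDPOINT = "https://api.fda.example/device/adverse-event"

def submit_adverse_event(emr_record, api_token):
    """Bundle context the EMR already holds and submit it in one call."""
    payload = {
        "report_type": "device",
        "device_name": emr_record["device_name"],
        "event_description": emr_record["note_text"],
        # The same contextual data Form 3500 asks reporters to re-type,
        # pulled straight from the record instead:
        "patient_demographics": emr_record["demographics"],
        "lab_results": emr_record["recent_labs"],
        "outcome": emr_record["outcome"],
    }
    response = requests.post(
        FDA_AE_ENDPOINT,
        json=payload,
        headers={"Authorization": "Bearer " + api_token},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()
```

The physician's role shrinks to confirming the report and clicking once; the contextual data rides along automatically.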

(A simple analogy might be helpful here: when an application on your computer or phone crashes, the operating system generally bundles the diagnostic information together, then asks if you want to submit the error data to the manufacturer. FDA should be working with external developers on this type of “1-click” system rather than providing user-unfriendly forms to fill out.)

A couple of other programs would seem to support this approach:

  • The congressionally-mandated Sentinel Initiative, which requires FDA to set up programs to tap into active data streams, such as insurance claims databases, to detect potential safety signals
  • A 2012 White House directive for all Federal agencies to pursue the development of APIs as part of a broader "digital government" program

(Thanks to RF's Alec Gaffney for pointing out the White House directive.)

Perhaps FDA is already working on APIs for seamless adverse event reporting, but I could not find any evidence of their plans in this area. And even if they are, building a mobile app is still a waste of time and resources.

Sometimes being tech savvy means not jumping on the current tech trend: this is clearly one of those times. Let’s not have an app for that.

(Smartphone image via Flickr user DigiEnable.)

Wednesday, February 27, 2013

It's Not Them, It's You

Are competing trials slowing yours down? Probably not.

If they don't like your trial, EVERYTHING ELSE IN THE WORLD is competition for their attention.
Rahlyn Gossen has a provocative new blog post up on her website entitled "The Patient Recruitment Secret". In it, she makes a strong case that site commitment to a trial – in the form of the site's investment of time, effort, and interest – is the single largest driver of patient enrollment.

The reasoning behind this idea is clear and quite persuasive:
Every clinical trial that is not yours is a competing clinical trial. 
Clinical research sites have finite resources. And with research sites being asked to take on more and more duties, those resources are only getting more strained. Here’s what this reality means for patient enrollment. 
If research site staff are working on other clinical trials, they are not working on your clinical trial. Nor are they working on patient recruitment for your clinical trial. To excel at patient enrollment, you need to maximize the time and energy that sites spend recruiting patients for your clinical trial.
Much of this fits together very nicely with a point I raised in a post a few months ago: improvements in site enrollment performance may often come at the expense of other trials.

However, I would add a qualifier to these discussions: the number of active "competing" trials at a site is not a reliable predictor of enrollment performance. In other words, selecting sites that are not working on many other trials will in no way improve enrollment in your trial.

This is an important point because, as Gossen points out, asking about the number of other studies is a standard item on sponsor and CRO site feasibility questionnaires. In fact, many sponsors get very hung up on competing trials – to the point of excluding potentially good sites that they feel are working on too many other things.

This came to a head recently when we were brought in to consult on a study experiencing significant enrollment difficulty. The sponsor was very concerned about competing trials at the sites – there was a belief that such competition was a big contributor to sluggish enrollment.

As part of our analysis, we collected updated information on competing trials. Given the staggered nature of the trial's startup, we then calculated a time-adjusted Net Patient Contribution for each site (for more information on that metric, see my write-up here).
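For the curious, here is a rough sketch of the kind of calculation involved. The precise definition of Net Patient Contribution is in the write-up linked above; the version below is a simplified stand-in that assumes the metric compares each site's enrollment to what an average site would have enrolled over the same active period:

```python
import pandas as pd

# Toy data - four sites that opened at different times (staggered startup).
sites = pd.DataFrame({
    "site": ["A", "B", "C", "D"],
    "patients_enrolled": [14, 6, 9, 2],
    "months_active": [10, 4, 9, 6],
})

# Study-wide average enrollment rate (patients per site-month)
mean_rate = sites["patients_enrolled"].sum() / sites["months_active"].sum()

# Net contribution: patients enrolled above (or below) what an average
# site would have delivered in the same number of active months
sites["net_patient_contribution"] = (
    sites["patients_enrolled"] - mean_rate * sites["months_active"]
)
print(sites)
```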

We then cross-referenced competing trials against enrollment performance. The results were very surprising: the number of other trials had no effect on how the sites were doing. Here's the data:

[Figure: each site's enrollment performance plotted against the number of other trials it is running. Each site is a point; good enrollers (higher up) and poor enrollers (lower down) are virtually identical in how many concurrent trials they were running. Competing trials do not appear to substantially impact rates of enrollment.]

Since running into this result, I've looked at the relationship between the number of competing trials reported in CRO feasibility questionnaires and final site enrollment for many of the trials we've worked on. In every case, the number of "competing" trials did not serve as even a weak predictor of eventual site performance.
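If you want to run the same check on your own data, a simple rank correlation will show whether the questionnaire answer carries any signal at all. A minimal sketch with made-up numbers, assuming you have each site's reported count of competing trials and its final enrollment:

```python
from scipy.stats import spearmanr

# Per-site data (illustrative values, not from a real study):
competing_trials = [0, 2, 5, 1, 8, 3, 4, 0, 6, 2]   # from feasibility questionnaire
final_enrollment = [3, 11, 7, 2, 9, 14, 4, 6, 8, 5]  # patients actually enrolled

# Spearman's rank correlation suits skewed count data and only
# assumes a monotonic relationship, not a linear one.
rho, p_value = spearmanr(competing_trials, final_enrollment)
print("rho = %.2f, p = %.3f" % (rho, p_value))
# A rho near zero means the questionnaire answer tells you essentially
# nothing about how the site will eventually enroll.
```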

I agree with Gossen's fundamental point that a site's interest in, and enthusiasm for, your trial will help increase enrollment at that site. However, we need to do a better job of thinking about how best to measure that interest, so we can understand the magnitude of the effect it truly has. And, even more importantly, we have to avoid relying on substandard proxy measurements such as "number of competing trials", because those will steer us wrong in site selection. In fact, almost everything we tend to collect on feasibility questionnaires appears to be non-predictive and potentially misleading; but that's a post for another day.

[Image credit: research distractions courtesy of Flickr user ronocdh.]