Showing posts with label PLoS. Show all posts

Wednesday, December 4, 2013

Half of All Trials Unpublished*

(*For certain possibly nonstandard uses of the word "unpublished")

This is an odd little study. Instead of looking at registered trials and following them through to publication, this study starts with a random sample of phase 3 and 4 drug trials that already had results posted on ClinicalTrials.gov - so in one, very obvious sense, none of the trials in this study went unpublished.

Timing and Completeness of Trial Results Posted at ClinicalTrials.gov and Published in Journals
Carolina Riveros, Agnes Dechartres, Elodie Perrodeau, Romana Haneef, Isabelle Boutron, Philippe Ravaud



But here the authors are concerned with publication in medical journals, and they were only able to locate journal articles covering about half (297/594) of trials with registered results. 

It's hard to know what to make of these results, exactly. Some of the "missing" trials may be published in the future (a possibility the authors acknowledge), some may have been rejected by one or more journals (FDAAA requires posting the results to ClinicalTrials.gov, but it certainly doesn't require journals to accept trial reports), and some may be pre-FDAAA trials that sponsors have retroactively added to ClinicalTrials.gov even though development on the drug has ceased.

It would have been helpful had the authors reported journal publication rates stratified by the year the trials completed - this would have at least given us some hints regarding the above. More than anything I still find it absolutely bizarre that in a study this small, the entire dataset is not published for review.

One potential concern is the search methodology used by the authors to match posted and published trials. If the easy routes (link to article already provided in ClinicalTrials.gov, or NCT number found in a PubMed search) failed, a manual search was performed:
The articles identified through the search had to match the corresponding trial in terms of the information registered at ClinicalTrials.gov (i.e., same objective, same sample size, same primary outcome, same location, same responsible party, same trial phase, and same sponsor) and had to present results for the primary outcome. 
So it appears that a reviewer had to score the journal article as an exact match on all 8 criteria for the trial to be considered the same. That could easily lead to the exclusion of journal articles on the basis of very insubstantial differences. The authors provide no detail on this; and again, it would be easy to verify if the study dataset were published.
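For illustration, here is a sketch (in Python, with invented field names and an invented tolerance rule, since the authors published neither their code nor their data) of why an all-or-nothing matching rule is fragile: a single trivial discrepancy, such as a slightly different reported sample size, is enough to reject a genuine match.

```python
# Illustrative sketch only: the paper does not publish its matching code,
# so the field names and the tolerance rule below are my own assumptions.

REGISTRY_FIELDS = [
    "objective", "sample_size", "primary_outcome", "location",
    "responsible_party", "phase", "sponsor",
]

def exact_match(registry: dict, article: dict) -> bool:
    """The strict rule implied by the paper: every field identical."""
    return all(registry[f] == article[f] for f in REGISTRY_FIELDS)

def tolerant_match(registry: dict, article: dict) -> bool:
    """A looser rule: allow a small discrepancy in sample size."""
    for f in REGISTRY_FIELDS:
        if f == "sample_size":
            # e.g., tolerate a 5% difference from post-registration attrition
            if abs(registry[f] - article[f]) > 0.05 * registry[f]:
                return False
        elif registry[f] != article[f]:
            return False
    return True

registry = {"objective": "pain reduction", "sample_size": 120,
            "primary_outcome": "VAS at 12 weeks", "location": "US",
            "responsible_party": "sponsor", "phase": 3, "sponsor": "Acme"}
article = dict(registry, sample_size=118)  # two dropouts reported differently

exact_match(registry, article)     # False -> trial counted as "unpublished"
tolerant_match(registry, article)  # True
```

Under the strict rule, a trial whose journal article reports 118 completers against a registered enrollment of 120 would be scored as unmatched, and hence "unpublished". Whether the reviewers actually applied the rule this rigidly is exactly the kind of thing the unreleased dataset would settle.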

The reason I harp on this, and worry about the matching methodology, is that two of the authors of this study were also involved in a methodologically opaque and flawed JCO study about posted clinical trial results. In that study, as well, the authors appeared to use an incorrect methodology to identify published clinical trials. When I pointed the issues out, the corresponding author merely reiterated what was already (insufficiently) stated in the paper's methods section.

I find it strange beyond belief, and more than a little hypocritical, that researchers would use a public, taxpayer-funded database as the basis of their studies, and yet refuse to provide their data for public review. There are no technological or logistical issues preventing this kind of sharing, and there is an obvious ethical point in favor of transparency.

But suppose the authors are reasonably close to correct in their results. What, then, should we make of this study?

The Nature article covering this study contends that
[T]he [ClinicalTrials.gov] database was never meant to replace journal publications, which often contain longer descriptions of methods and results and are the basis for big reviews of research on a given drug.
I suppose that some journal articles have better methods sections, although this is far from universally true (and, like this study here, those methods are often quite opaquely described and don't support replication). As for results, I don't believe that's the case: in this study, the opposite was true, with ClinicalTrials.gov results generally more complete than journal results. And I have no idea why the registry shouldn't surpass journals as a more reliable and complete source of information for "big reviews".

Perhaps it is a function of my love of getting my hands dirty digging into the data, but if we are witnessing a turning point where journal articles take a distant back seat to the ClinicalTrials.gov registry, I'm enthused. ClinicalTrials.gov is public, free, and contains structured data; journal articles are expensive, unparsable, and generally written in painfully unclear language. To me, there's really no contest. 

Carolina Riveros, Agnes Dechartres, Elodie Perrodeau, Romana Haneef, Isabelle Boutron, & Philippe Ravaud (2013). Timing and Completeness of Trial Results Posted at ClinicalTrials.gov and Published in Journals. PLoS Medicine. DOI: 10.1371/journal.pmed.1001566

Tuesday, September 18, 2012

Delivering the Placebic Payload


Two recent articles on placebo effects caught my attention. Although they come to the topic from very different angles, they both bear on the psychological mechanisms by which the placebo effect delivers its therapeutic payload, so it seems worthwhile to look at them together.
Placebo delivery: there's got to be a better way!

The first item is a write-up of two small studies, Nonconscious activation of placebo and nocebo pain responses. (The article is behind a paywall at PNAS; if you can't access it, you can read this nice synopsis on Inkfish, or the press release issued by Beth Israel Deaconess (which includes bonus overhyping of the study's impact by the authors).)

The studies’ premises were pretty straightforward: placebo effects are (at least in part) caused by conditioned responses. In addition, psychologists have demonstrated in a number of studies that many types of conditioned responses can be triggered subliminally.  Therefore, it might be possible, under certain circumstances, to elicit placebo/nocebo responses with nothing but subliminal stimuli.

And that, in effect, is what the studies demonstrate.  The first showed a placebo effect in patients who had been trained to associate various pain levels with pictures of specific faces. The second study elicited a (somewhat attenuated) placebo response even when those pictures were shown for a mere 12 milliseconds – below the threshold of conscious recognition. This gives us some preliminary evidence that placebo effects can be triggered through entirely subconscious mental processes.

Or does it? There seem to me to be some serious difficulties in making the leap from these highly controlled lab experiments to the actual workings of placebos in clinical practice. First and foremost: to elicit subconscious effects, these experiments first had to provide quite a significant "pretreatment" of conscious, unambiguous conditioning to associate certain pain levels with specific images: 50 pain jolts in about 15 minutes. Even then, the experimenters still felt the need to re-apply the explicit conditioning in 10% of the test cases, "to prevent extinction". This raises the obvious question: if even an intensive, explicit conditioning sequence can wear off that quickly, how are we to believe that a similar mechanism is acting in everyday clinical encounters, which are not nearly so frequent or so explicit? The authors don't seem to see an issue here, as they write:
Our results thereby translate the investigation of nonconscious effects to the clinical realm, by suggesting that health-related responses can be triggered by cues that are not consciously perceived, not only for pain … but also for other medical problems with demonstrated placebo effects, e.g., asthma, depression, and irritable bowel syndrome. Understanding the role of nonconscious processes in placebo/nocebo opens unique possibilities of enhancing clinical care by attending to the impact of nonconscious cues conveyed during the therapeutic encounter and improving therapeutic decisions.
So, the clinical relevance of these findings depends on how much you believe that precisely repeated blasts of pain faithfully replicate the effects of physician/patient interactions. I do not think I am being terribly skeptical when I say that clinical interactions are usually shorter and involve a lot more ambiguity. I am not even sure that this is a good model for placebo analgesia, and it certainly can't be considered to have much explanatory power for placebo effects in, e.g., depression trials.

…Which brings me to the second article, a very different creature altogether.  It’s a blog post by Dike Drummond entitled Can digital medicine have a placebo effect? He actually comes very close to the study authors’ position in terms of ascribing placebo effects to subconscious processes:
The healing can occur without outside assistance — as the placebo effect in drug studies shows — or it can augment whatever medication or procedure you might also prescribe.  I believe it is the human qualities of attention and caring that trigger the placebo effect. These exist parallel to the provider’s ability to diagnose and select an appropriate medical treatment.
You can arrive at the correct diagnosis and treatment and not trigger a placebo effect. You can fail to make eye contact, write out a prescription, hand it to the patient and walk out the door.  Right answer — no placebo effect.  Your skills as a placebologist rely on the ability to create the expectation of healing in the patient. This is most definitely part of the art of medicine.
I will disagree a bit with Drummond on one point: if we could extinguish placebo effects merely by avoiding eye contact, or engaging in similar unsociable behavior, then we would see greatly reduced placebo effects in most clinical trials, since most sponsors do try to implement strategies to reduce those effects. In fact, there is some evidence that placebo effects are increasing in some trials. (Which, tangentially, makes me ask why pharmaceutical companies keep paying “expert consultants” to conduct training seminars on how to eliminate placebo effects … but that’s a rant for another day.)

Drummond ponders whether new technologies will be able to elicit placebo responses in patients, even in the complete absence of human-to-human interaction. I think the answer is "probably, somewhat". We certainly have some evidence that physicians can increase placebo effects through explicit priming; it would seem logical that some of that work could be done by an iPad. Also, the part of the placebo effect that is patient-driven, fed by the patient's preexisting hopes and expectations, would seem to be transferable to a non-personal interaction (after all, patients already derive placebic benefit from homeopathic and other ineffective over-the-counter cures with no physician, and minimal human, input).

The bottom line, I think, is this: we oversimplify the situation when we talk about "the" placebo effect. Placebo response in patients is a complex cluster of mechanisms, some or all of which are at play in each individual reaction. On the patient's side, subconscious hope, conscious expectations, and learned associations are all in play, and may work with or against each other. The physician's beliefs, transmitted through overt priming or subtle signals, can also work for or against the total placebo effect. There is even good evidence that placebo analgesia is produced through multiple distinct biochemical pathways, so any single, simple model proposed to cover all placebo responses is doomed to failure.

The consequence for clinical trialists? I do not think we need to start fretting over subliminal cues and secret subconscious signaling, but we do need to develop a more comprehensive method of measuring the impact of multiple environmental and patient factors in predicting response. The best way to accomplish this may be to implement prospective studies in parallel with existing treatment trials to get a clearer real-world picture of placebo response in action.

[Image: "Extraction of the Stone of Folly", Hieronymus Bosch, by way of Wikimedia Commons]

Karin B. Jensen, Ted J. Kaptchuk, Irving Kirsch, Jacqueline Raicek, Kara M. Lindstrom, Chantal Berna, Randy L. Gollub, Martin Ingvar, & Jian Kong (2012). Nonconscious activation of placebo and nocebo pain responses. PNAS. DOI: 10.1073/pnas.1202056109

Tuesday, July 10, 2012

Why Study Anything When You Already Know Everything?

If you’re a human being, in possession of one working, standard-issue human brain (and, for the remainder of this post, I’m going to assume you are), it is inevitable that you will fall victim to a wide variety of cognitive biases and mistakes.  Many of these biases result in our feeling much more certain about our knowledge of the world than we have any rational grounds for: from the Availability Heuristic, to the Dunning-Kruger Effect, to Confirmation Bias, there is an increasingly-well-documented system of ways in which we (and yes, that even includes you) become overconfident in our own judgment.

Over the years, scientists have developed a number of tools to help us overcome these biases in order to better understand the world.  In the biological sciences, one of our best tools is the randomized controlled trial (RCT).  In fact, randomization helps minimize biases so well that randomized trials have been suggested as a means of developing better governmental policy.

However, RCTs in general require an investment of time and money, and they need to be somewhat narrowly tailored.  As a result, they frequently become the target of people impatient with the process – especially those who perhaps feel themselves exempt from some of the above biases.

4 out of 5 Hammer Doctors agree: the world is 98% nail.

A shining example of this impatience-fortified-by-hubris can be found in a recent "Speaking of Medicine" blog post by Dr Trish Greenhalgh, with the mildly chilling title Less Research is Needed. In it, the author finds a long list of things she feels to be so obvious that additional studies into them would be frivolous. Among the things the author knows, beyond a doubt, is that patient education does not work, and electronic medical records are inefficient and unhelpful.

I admit to being slightly in awe of Dr Greenhalgh’s omniscience in these matters. 

In addition to her "we already know the answer to this" argument, she also mixes in a completely different argument, which is more along the lines of "we'll never know the answer to this". Of course, the upshot of that is identical: why bother conducting studies? For this argument, she cites the example of coronary artery disease: since a large genomic study found only a small association with CAD heritability, Dr Greenhalgh tells us that any studies of different predictive methods are bound to fail and thus not worth the effort (she specifically mentions "genetic, epigenetic, transcriptomic, proteomic, metabolic and intermediate outcome variables" as things she apparently already knows will not add anything to our understanding of CAD).

As studies grow more global, and as we adapt to massive increases in computer storage and processing ability, I believe we will see an increase in this type of backlash.  And while physicians can generally be relied on to be at the forefront of the demand for more, not less, evidence, it is quite possible that a vocal minority of physicians will adopt this kind of strongly anti-research stance.  Dr Greenhalgh suggests that she is on the side of “thinking” when she opposes studies, but it is difficult to see this as anything more than an attempt to shut down critical inquiry in favor of deference to experts who are presumed to be fully-informed and bias-free. 

It is worthwhile for those of us engaged in trying to understand the world to be aware of these kinds of threats, and to take them seriously.  Dr Greenhalgh writes glowingly of a 10-year moratorium on research – presumably, we will all simply rely on her expertise to answer our important clinical questions.

Wednesday, January 4, 2012

Public Reporting of Patient Recruitment?

A few years back, I was working with a small biotech company as it was ramping up to begin its first-ever pivotal trial. One of the team leads had just produced a timeline for enrollment in the trial, which was being circulated for feedback. Since they had never conducted a trial of this size before, I was curious how he had arrived at his estimate. My bigger clients had data from prior trials (both their own and their CROs') to use, but as far as I could tell, this client had absolutely nothing.

He proudly shared with me the secret of his methodology: he had looked up some comparable studies on ClinicalTrials.gov, counted the number of listed sites, and then compared that to the sample size and start/end dates to arrive at an enrollment rate for each study. He’d then used the average of all those rates to determine how long his study would take to complete.
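To make the method concrete, the back-of-envelope arithmetic he described might look something like this (all numbers are invented for illustration; note the hidden assumption, baked into the formula, that every listed site was actively recruiting for the entire study period):

```python
# A sketch of the naive estimation method described above (not endorsed!).
# ClinicalTrials.gov does not expose per-site activation dates, so this
# calculation silently assumes all sites recruited for the whole study.

comparable_studies = [
    # (sample_size, n_sites, months from start date to completion date)
    (300, 25, 18),
    (450, 40, 24),
    (200, 15, 12),
]

# Patients per site per month for each "comparable" study
rates = [n / (sites * months) for n, sites, months in comparable_studies]
avg_rate = sum(rates) / len(rates)

# Apply the averaged rate to the planned trial
my_sample_size = 350
my_sites = 30
estimated_months = my_sample_size / (my_sites * avg_rate)
print(f"{avg_rate:.2f} pts/site/month -> {estimated_months:.1f} months")
```

The arithmetic itself is trivial; the problem is every input. Listed sites may never have activated, start/end dates reflect administrative milestones rather than recruitment windows, and averaging rates across studies with different designs and populations compounds the error.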

If you’ve ever used ClinicalTrials.gov in your work, you can immediately determine the multiple, fatal flaws in that line of reasoning. The data simply doesn’t work like that. And to be fair, it wasn’t designed to work like that: the registry is intended to provide public access to what research is being done, not provide competitive intelligence on patient recruitment.

I’m therefore sympathetic, but skeptical, of a recent article in PLoS Medicine, Disclosure of Investigators' Recruitment Performance in Multicenter Clinical Trials: A Further Step for Research Transparency, that proposes to make reporting of enrollment a mandatory part of the trial registry. The authors would like to see not only actual randomized patients for each principal investigator, but also how that compares to their “recruitment target”.

The entire article is thought-provoking and worth a read. The authors’ main arguments in favor of mandatory recruitment reporting can be boiled down to:

  • Recruitment in many trials is poor, and public disclosure of recruitment performance will improve it
  • Sponsors, patient groups, and other stakeholders will be interested in the information
  • The data “could prompt queries” from other investigators

The first point is certainly the most compelling – improving enrollment in trials is at or near the top of everyone’s priority list – but the least supported by evidence. It is not clear to me that public scrutiny will lead to faster enrollment, and in fact in many cases it could quite conceivably lead to good investigators opting to not conduct a trial if they felt they risked being listed as “underperforming”. After all, there are many factors that will influence the total number of randomized patients at each site, and many of these are not under the PI’s control.

The other two points are true, in their way, but mandating that currently-proprietary information be given away to all competitors will certainly be resisted by industry. There are oceans of data that would be of interest to competitors, patient groups, and other investigators – that simply cannot be enough to justify mandating full public release.


Image: Philip Johnson's Glass House from Staib via Wikimedia Commons.