Showing posts with label placebo effect. Show all posts

Tuesday, September 18, 2012

Delivering the Placebic Payload

Two recent articles on placebo effects caught my attention. Although they come to the topic from very different angles, they both bear on the psychological mechanisms by which the placebo effect delivers its therapeutic payload, so it seems worthwhile to look at them together.
Placebo delivery: there's got to be a better way!

The first item is a write-up of two small studies, Nonconscious activation of placebo and nocebo pain responses. (The article is behind a paywall at PNAS; if you can’t access it, you can read this nice synopsis on Inkfish, or the press release issued by Beth Israel Deaconess, which includes bonus overhyping of the study’s impact by the authors.)

The studies’ premises were pretty straightforward: placebo effects are (at least in part) caused by conditioned responses. In addition, psychologists have demonstrated in a number of studies that many types of conditioned responses can be triggered subliminally.  Therefore, it might be possible, under certain circumstances, to elicit placebo/nocebo responses with nothing but subliminal stimuli.

And that, in effect, is what the studies demonstrate.  The first showed a placebo effect in patients who had been trained to associate various pain levels with pictures of specific faces. The second study elicited a (somewhat attenuated) placebo response even when those pictures were shown for a mere 12 milliseconds – below the threshold of conscious recognition. This gives us some preliminary evidence that placebo effects can be triggered through entirely subconscious mental processes.

Or does it? There seem to me to be some serious difficulties in making the leap from this highly controlled lab experiment to the actual workings of placebos in clinical practice. First and foremost: to elicit subconscious effects, these experiments first had to provide quite a significant “pretreatment” of conscious, unambiguous conditioning to associate certain pain levels with specific images: 50 pain jolts in about 15 minutes.  Even then, the experimenters still felt the need to re-apply the explicit conditioning in 10% of the test cases, “to prevent extinction”.  This raises the obvious question: if even an intensive, explicit conditioning sequence can wear off that quickly, how are we to believe that a similar mechanism is acting in everyday clinical encounters, which are neither so frequent nor so explicit? The authors don’t seem to see an issue here, as they write:
Our results thereby translate the investigation of nonconscious effects to the clinical realm, by suggesting that health-related responses can be triggered by cues that are not consciously perceived, not only for pain … but also for other medical problems with demonstrated placebo effects, e.g., asthma, depression, and irritable bowel syndrome. Understanding the role of nonconscious processes in placebo/nocebo opens unique possibilities of enhancing clinical care by attending to the impact of nonconscious cues conveyed during the therapeutic encounter and improving therapeutic decisions.
So, the clinical relevance of these findings depends on how much you believe that precisely repeated blasts of pain faithfully replicate the effects of physician/patient interactions. I do not think I am being terribly skeptical when I say that clinical interactions are usually shorter and involve a lot more ambiguity – I am not even sure that this is a good model for placebo analgesia, and it certainly can’t be considered to have a lot of explanatory power for placebo effects in, e.g., depression trials.
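To make the extinction worry concrete, here is a toy simulation of my own (not from the paper), using a simple Rescorla–Wagner-style learning rule with arbitrary parameters: an association built up over 50 reinforced trials decays rapidly once reinforcement stops.

```python
# Toy Rescorla–Wagner-style model of conditioning and extinction.
# Purely illustrative; the learning rate and trial counts are invented.

def update(strength, reinforced, rate=0.2):
    """Move association strength toward 1 if reinforced, toward 0 if not."""
    target = 1.0 if reinforced else 0.0
    return strength + rate * (target - strength)

strength = 0.0

# Acquisition: 50 explicit pairings (cf. the study's 50 pain jolts in ~15 minutes)
for _ in range(50):
    strength = update(strength, reinforced=True)
after_conditioning = strength
print(f"after conditioning: {after_conditioning:.2f}")   # close to 1.0

# Extinction: unreinforced presentations, as in the test phase
for _ in range(20):
    strength = update(strength, reinforced=False)
print(f"after 20 unreinforced trials: {strength:.2f}")   # decays sharply
```

Under these (made-up) parameters the association is nearly gone after a couple dozen unreinforced trials, which is presumably why the experimenters had to keep “topping up” the conditioning.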

…Which brings me to the second article, a very different creature altogether.  It’s a blog post by Dike Drummond entitled Can digital medicine have a placebo effect? He actually comes very close to the study authors’ position in terms of ascribing placebo effects to subconscious processes:
The healing can occur without outside assistance — as the placebo effect in drug studies shows — or it can augment whatever medication or procedure you might also prescribe.  I believe it is the human qualities of attention and caring that trigger the placebo effect. These exist parallel to the provider’s ability to diagnose and select an appropriate medical treatment.
You can arrive at the correct diagnosis and treatment and not trigger a placebo effect. You can fail to make eye contact, write out a prescription, hand it to the patient and walk out the door.  Right answer — no placebo effect.  Your skills as a placebologist rely on the ability to create the expectation of healing in the patient. This is most definitely part of the art of medicine.
I will disagree a bit with Drummond on one point: if we could extinguish placebo effects merely by avoiding eye contact, or engaging in similar unsociable behavior, then we would see greatly reduced placebo effects in most clinical trials, since most sponsors do try to implement strategies to reduce those effects. In fact, there is some evidence that placebo effects are increasing in some trials. (Which, tangentially, makes me ask why pharmaceutical companies keep paying “expert consultants” to conduct training seminars on how to eliminate placebo effects … but that’s a rant for another day.)

Drummond ponders whether new technologies will be able to elicit placebo responses in patients, even in the complete absence of human-to-human interaction. I think the answer is “probably, somewhat”. We certainly have some evidence that physicians can increase placebo effects through explicit priming; it would seem logical that some of that work could be done by an iPad. Also, the part of the placebo effect that is patient-driven – fed by preexisting hopes and expectations – would seem to be transferable to a non-personal interaction (after all, patients already derive placebic benefit from homeopathic and other ineffective over-the-counter cures with no physician, and minimal human, input).

The bottom line, I think, is this: we oversimplify the situation when we talk about “the” placebo effect. Placebo response in patients is a complex cluster of mechanisms, some or all of which are at play in each individual reaction. On the patient’s side, subconscious hope, conscious expectations, and learned associations are all in the mix, and may work with or against each other. The physician’s beliefs, transmitted through overt priming or subtle signals, can also work for or against the total placebo effect. There is even good evidence that placebo analgesia is produced through multiple distinct biochemical pathways, so any single, simple model purporting to cover all placebo responses is doomed to failure.

The consequence for clinical trialists? I do not think we need to start fretting over subliminal cues and secret subconscious signaling, but we do need to develop a more comprehensive method of measuring the impact of multiple environmental and patient factors in predicting response. The best way to accomplish this may be to implement prospective studies in parallel with existing treatment trials to get a clearer real-world picture of placebo response in action.

[Image: "Extraction of the Stone of Folly", Hieronymus Bosch, by way of Wikimedia Commons]

Karin B. Jensen, Ted J. Kaptchuk, Irving Kirsch, Jacqueline Raicek, Kara M. Lindstrom, Chantal Berna, Randy L. Gollub, Martin Ingvar, & Jian Kong (2012). Nonconscious activation of placebo and nocebo pain responses. PNAS. DOI: 10.1073/pnas.1202056109

Friday, July 6, 2012

A placebo control is not a placebo effect

Following up on yesterday's post regarding a study of placebo-related information, it seems worthwhile to pause and expand on the difference between placebo controls and placebo effects.

The very first sentence of the study paper reflects a common, and rather muddled, belief about placebo-controlled trials:
Placebo groups are used in trials to control for placebo effects, i.e. those changes in a person's health status that result from the meaning and hope the person attributes to a procedure or event in a health care setting.
The best I can say about the above sentence is that in some (not all) trials, this accounts for some (not all) of the rationale for including a placebo group in the study design. 

There is no evidence that “meaning and hope” have any impact on HbA1C levels in patients with diabetes. The placebo effect only goes so far, and certainly doesn’t have much sway over most lab tests.  And yet we still conduct placebo-controlled trials in diabetes, and rightly so. 

To clarify, it may be helpful to break this into two parts:
  1. Most trials need a “No Treatment” arm. 
  2. Most “No Treatment” arms should be double-blind, which requires use of a placebo.
Let’s take these in order.

We need a “No Treatment” arm:
  • Where the natural progression of the disease is variable (e.g., many psychological disorders, such as depression, have ups and downs that are unrelated to treatment).  This is important if we want to measure the proportion of responders – for example, what percentage of diabetes patients got their HbA1C levels below 6.5% on a particular regimen.  We know that some patients will hit that target even without additional intervention, but we won’t know how many unless we include a control group.
  • Where the disease is self-limiting.  Given time, many conditions – the flu, allergies, etc. – tend to go away on their own.  Therefore, even an ineffective medication will look like it’s doing something if we simply test it on its own.  We need a control group to measure whether the investigational medication is actually speeding up the time to cure.
  • When we are testing the combination of an investigational medication with one or more existing therapies. We have a general sense of how well metformin will work in T2D patients, but the effect will vary from trial to trial.  So if I want to see how well my experimental therapy works when added to metformin, I’ll need a metformin-plus-placebo control arm to be able to measure the additional benefit, if any.
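The arithmetic behind these bullets can be sketched with invented numbers (not drawn from any real trial): if some fraction of patients reach the target with no new intervention at all, a single-arm readout badly overstates the drug’s effect, and only a control arm lets us subtract the background.

```python
# Illustrative only: why a single-arm readout overstates effect size.
# Both response rates below are invented for the sketch.
background_response = 0.25   # fraction hitting the target with no new treatment
drug_added_response = 0.15   # true additional fraction responding to the drug

# Without a control arm, background and drug responses are indistinguishable:
single_arm_readout = background_response + drug_added_response
print(f"single-arm 'response rate': {single_arm_readout:.0%}")

# With a control arm, we can subtract the background response:
control_arm = background_response
treatment_arm = background_response + drug_added_response
estimated_effect = treatment_arm - control_arm
print(f"controlled estimate of drug effect: {estimated_effect:.0%}")
```

The single-arm number (40% here) says nothing about the drug by itself; the controlled comparison recovers the true 15% increment.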

All of the above are especially important when the trial selects a group of patients with greater disease severity than average.  “Enriching” a trial by excluding patients with mild disease has the benefit of requiring far fewer enrolled patients to demonstrate a clinical effect.  However, it also strengthens “regression to the mean”: a number of enrolled patients will show greater-than-average improvement during the trial simply because their baseline measurements were unusually severe.  A control group captures this regression and helps us estimate the true effect size.
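A quick simulation (my own sketch, with made-up distributions) shows how enrichment plus measurement noise manufactures apparent improvement in a completely untreated group:

```python
# Sketch: regression to the mean in an "enriched" trial (illustrative numbers).
# Each patient's true severity is stable; every measurement adds independent
# noise, so patients selected for a high baseline reading tend to score lower
# at follow-up even with no treatment at all.
import random

random.seed(0)
patients = []
for _ in range(10_000):
    true_severity = random.gauss(50, 10)           # stable underlying disease score
    baseline = true_severity + random.gauss(0, 8)  # noisy enrollment measurement
    followup = true_severity + random.gauss(0, 8)  # noisy follow-up measurement
    patients.append((baseline, followup))

# "Enrich" the trial: enroll only patients with a severe baseline score (>= 60)
enrolled = [(b, f) for b, f in patients if b >= 60]
mean_baseline = sum(b for b, _ in enrolled) / len(enrolled)
mean_followup = sum(f for _, f in enrolled) / len(enrolled)
print(f"enrolled baseline mean:  {mean_baseline:.1f}")
print(f"enrolled follow-up mean: {mean_followup:.1f}")  # lower, untreated
```

The follow-up mean drops by several points with zero intervention; a single-arm trial would book that drop as a treatment effect, while a control group measures it directly.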

So, why include a placebo?  Why not just have a control group of patients receiving no additional treatment?  There are compelling reasons:
  • To minimize bias in investigator assessments.  We most often think about placebo arms in relation to patient expectations, but often they are even more valuable in improving the accuracy of physician assessments.  Like all humans, physician investigators interpret evidence in light of their beliefs, and there is substantial evidence that unblinded assessments exaggerate treatment effects – we need the placebo to help maintain investigator blinding.
  • To improve patient compliance in the control arm.  If a patient is clearly not receiving an active treatment, it is often very difficult to keep him or her interested and engaged with the trial, especially if the trial requires frequent clinic visits and non-standard procedures (such as blood draws).  Retention in no-treatment trials can be much lower than in placebo-controlled trials, and if it drops low enough, the validity of any results can be thrown into question.
  • To accurately gauge adverse events.  Any problem(s) encountered are much more likely to be taken seriously – by both the patient and the investigator – if there is genuine uncertainty about whether the patient is on active treatment.  This leads to much more accurate and reliable reporting of adverse events.
In other words, even if the placebo effect didn’t exist, it would still be necessary and proper to conduct placebo-controlled trials.  The failure to separate “placebo control” from “placebo effect” yields some very muddled thinking (which was the ultimate point of my post yesterday).

Thursday, July 5, 2012

The Placebo Effect (No Placebo Necessary)

4 out of 5 non-doctors recommend starting
with "regular strength", and titrating up from there...
The modern clinical trial’s Informed Consent Form (ICF) is a daunting document.  It is packed with a mind-numbing litany of procedures, potential risks, possible adverse events, and substantial additional information – in general, if someone, somewhere, might find a fact relevant, then it gets into the form.  A run-of-the-mill ICF in a phase 2 or 3 pharma trial can easily run over 10 pages of densely worded text.  You might argue (and in fact, a number of people have, persuasively) that this sort of information overload reduces, rather than enhances, patient understanding of clinical trials.

So it is a bit of a surprise to read a paper arguing that patient information needs to be expanded because it does not contain enough information.  And it is even more surprising to read what’s allegedly missing: more information about the potential effects of placebo.

Actually, “surprising” doesn’t really begin to cover it.  Reading through the paper is a borderline surreal experience.  The authors’ conclusions from “quantitative analysis”* of 45 Patient Information Leaflets for UK trials include such findings as
  • The investigational medication is mentioned more often than the placebo
  • The written purpose of the trial “rarely referred to the placebo”
  • “The possibility of continuing on the placebo treatment after the trial was never raised explicitly”
(You may need to give that last one a minute to sink in.)

Rather than seeing these as rather obvious conclusions, the authors recast them as ethical problems to be overcome.  From the article:
Information leaflets provide participants with a permanent written record about a clinical trial and its procedures and thus make an important contribution to the process of informing participants about placebos.
And from the PR materials furnished along with publication:
We believe the health changes associated with placebos should be better represented in the literature given to patients before they take part in a clinical trial.
There are two points that I think are important here – points that are sometimes missed, and very often badly blurred, even within the research community:

1.    The placebo effect is not caused by placebos.  There is nothing special about a “placebo” treatment that induces a unique effect.  The placebo effect can be induced by a lot of things, including active medications.  When we start talking about placebos as causal agents, we are engaging in fuzzy reasoning – placebo effects will not only be seen in the placebo arm, but will be evenly distributed among all trial participants.

2.    Changes in the placebo arm cannot be assumed to be caused by the placebo effect.  There are many reasons why we may observe health changes within a placebo group, and most of them have nothing to do with the “psychological and neurological mechanisms” of the placebo effect.  Giving trial participants information about the placebo effect may in fact be providing them with an entirely inaccurate description of what is going on.

Bishop FL, Adams AEM, Kaptchuk TJ, Lewith GT (2012). Informed Consent and Placebo Effects: A Content Analysis of Information Leaflets to Identify What Clinical Trial Participants Are Told about Placebos. PLoS ONE. DOI: 10.1371/journal.pone.0039661

(* Not related to the point at hand, but I would applaud efforts to establish some lower boundaries to what we are permitted to call "quantitative analysis".  Putting counts from 45 brochures into an Excel spreadsheet should fall well below any reasonable threshold.)