
Thursday, January 2, 2014

The Coming of the MOOCT?

Big online studies, in search of millions of participants.

Back in September, I enrolled in the Health eHeart Study - an entirely online research study tracking cardiac health. (Think Framingham Heart, cast wider and shallower - less intensive follow-up, but spread out to the entire country.)


[In the spirit of full disclosure, I should note that I haven’t completed any follow-up activities on the Health eHeart website yet. Yes, I am officially part of the research adherence problem…]


Yesterday, I learned of the Quantified Diet Project, an entirely online/mobile app-supported randomized trial of 10 different weight loss regimens. The intervention is short - only 4 weeks - but that’s probably substantially longer than most New Year diets manage to last, and should be just long enough to detect some early differences among the approaches.


I have been excited about the potential for online medical research for quite some time. For me, the real beginning was when PatientsLikeMe published the results of their online lithium for ALS research study - as I wrote at the time, I have never been so enthused about a negative trial before or since.



That was two and a half years ago, and there hasn't been a ton of activity since then outside of PatientsLikeMe (who have expanded and formalized their activities in the Open Research Exchange). So I’m eager to hear how these two new studies go. There are some interesting similarities and differences:


  • Both are university/private collaborations, and both (perhaps unsurprisingly) are rooted in California: Health eHeart is jointly run by UCSF and the American Heart Association, while Quantified Diet is run by app developer Lift with scientific support from an (unidentified?) team at Berkeley.
  • Both are pushing for a million or more participants, dwarfing even very large traditional studies by orders of magnitude.
  • Health eHeart is entirely observational, and researchers will have the ability to request its data to test their own hypotheses, whereas Quantified Diet is a controlled, randomized trial.


Data entry screen on Health eHeart
I really like the user interface for Health eHeart - it’s extremely simple, with a logical flow to the sections. It appears to be designed with older participants in mind, and the extensive data intake is subdivided into a large number of subsections, each of which can typically be completed in 2-4 minutes.



I have not enrolled in the Quantified Diet, but it appears to have a strong social media presence. You can follow the Twitter conversation through the #quantdiet hashtag. The semantic web and linked data guru Kerstin Forsberg has already posted about joining, and I hope to hear more from her and from clinical trial social media expert Rahlyn Gossen, who’s also joined.


To me, probably the most intriguing technical feature of the QuantDiet study is its “voluntary randomization” design. Participants can self-select into the diet of their choice, or can choose to be randomly assigned by the application. It will be interesting to see whether any differences emerge between the participants who chose a particular arm and those who were randomized into that arm - how much does a person’s preference matter?
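Just to make the design concrete, here is a rough sketch of what that two-stage assignment might look like; the arm names and field names are my own illustrative assumptions, not the actual QuantDiet implementation:

```python
import random

# Hypothetical arm labels -- illustrative stand-ins, not the actual QuantDiet diets
ARMS = ["slow-carb", "paleo", "vegetarian", "whole-foods", "DASH",
        "gluten-free", "no-sweets", "no-snacks", "calorie-counting", "mindful-eating"]

def assign_arm(prefers_random, chosen_arm=None):
    """'Voluntary randomization': the participant either picks a diet
    themselves or asks the app to pick one at random."""
    if prefers_random:
        arm = random.choice(ARMS)
    elif chosen_arm in ARMS:
        arm = chosen_arm
    else:
        raise ValueError("chosen_arm must be one of the listed diets")
    # Record *how* the arm was reached, so self-selected and randomized
    # participants on the same diet can be compared later.
    return {"arm": arm, "was_randomized": prefers_random}

print(assign_arm(prefers_random=True))
print(assign_arm(prefers_random=False, chosen_arm="paleo"))
```

The was_randomized flag is the piece that matters for the comparison above: without it, you could never separate outcomes for people who chose an arm from outcomes for people who were assigned to it by chance.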


In an earlier tweet I asked, “is this a MOOCT?” - short for Massive Open Online Clinical Trial. I don’t know if that’s the best name for it, and I’d love to hear other suggestions. By any other name, however, these are still great initiatives and I look forward to seeing them thrive in the coming years.

The implications for pharmaceutical and medical device companies are still unclear. Pfizer's jump into the world of "virtual trials" was a major bust, and widely second-guessed. I believe there is definitely a role and a path forward here, and these big efforts may teach us a lot about how patients want to be engaged online.

Tuesday, September 18, 2012

Delivering the Placebic Payload


Two recent articles on placebo effects caught my attention. Although they come to the topic from very different angles, they both bear on the psychological mechanisms by which the placebo effect delivers its therapeutic payload, so it seems worthwhile to look at them together.
Placebo delivery: there's got to be a better way!

The first item is a write-up of two small studies, Nonconscious activation of placebo and nocebo pain responses. (The article is behind a paywall at PNAS; if you can’t access it, you can read this nice synopsis on Inkfish, or the press release issued by Beth Israel Deaconess, which includes bonus overhyping of the study’s impact by the authors.)

The studies’ premises were pretty straightforward: placebo effects are (at least in part) caused by conditioned responses. In addition, psychologists have demonstrated in a number of studies that many types of conditioned responses can be triggered subliminally.  Therefore, it might be possible, under certain circumstances, to elicit placebo/nocebo responses with nothing but subliminal stimuli.

And that, in effect, is what the studies demonstrate.  The first showed a placebo effect in patients who had been trained to associate various pain levels with pictures of specific faces. The second study elicited a (somewhat attenuated) placebo response even when those pictures were shown for a mere 12 milliseconds – below the threshold of conscious recognition. This gives us some preliminary evidence that placebo effects can be triggered through entirely subconscious mental processes.

Or does it? There seem to me to be some serious difficulties in making the leap from this highly-controlled lab experiment to the actual workings of placebos in clinical practice. First and foremost: to elicit subconscious effects, these experiments had to first provide quite a significant “pretreatment” of conscious, unambiguous conditioning to associate certain pain levels with specific images: 50 pain jolts in about 15 minutes. Even then, the experimenters still felt the need to re-apply the explicit conditioning in 10% of the test cases, “to prevent extinction”. This raises the obvious question: if even an intensive, explicit conditioning sequence can wear off that quickly, how are we to believe that a similar mechanism is acting in everyday clinical encounters, which are neither so frequent nor so explicit? The authors don’t seem to see an issue here, as they write:
Our results thereby translate the investigation of nonconscious effects to the clinical realm, by suggesting that health-related responses can be triggered by cues that are not consciously perceived, not only for pain … but also for other medical problems with demonstrated placebo effects, e.g., asthma, depression, and irritable bowel syndrome. Understanding the role of nonconscious processes in placebo/nocebo opens unique possibilities of enhancing clinical care by attending to the impact of nonconscious cues conveyed during the therapeutic encounter and improving therapeutic decisions.
So, the clinical relevance of these findings depends on how much you believe that precisely repeated blasts of pain faithfully replicate the effects of physician/patient interactions. I do not think I am being terribly skeptical when I say that clinical interactions are usually shorter and involve a lot more ambiguity – I am not even sure that this is a good model for placebo analgesia, and it certainly can’t be considered to have a lot of explanatory power for placebo effects in, e.g., depression trials.

…Which brings me to the second article, a very different creature altogether.  It’s a blog post by Dike Drummond entitled Can digital medicine have a placebo effect? He actually comes very close to the study authors’ position in terms of ascribing placebo effects to subconscious processes:
The healing can occur without outside assistance — as the placebo effect in drug studies shows — or it can augment whatever medication or procedure you might also prescribe.  I believe it is the human qualities of attention and caring that trigger the placebo effect. These exist parallel to the provider’s ability to diagnose and select an appropriate medical treatment.
You can arrive at the correct diagnosis and treatment and not trigger a placebo effect. You can fail to make eye contact, write out a prescription, hand it to the patient and walk out the door.  Right answer — no placebo effect.  Your skills as a placebologist rely on the ability to create the expectation of healing in the patient. This is most definitely part of the art of medicine.
I will disagree a bit with Drummond on one point: if we could extinguish placebo effects merely by avoiding eye contact, or engaging in similar unsociable behavior, then we would see greatly reduced placebo effects in most clinical trials, since most sponsors do try to implement strategies to reduce those effects. In fact, there is some evidence that placebo effects are increasing in some trials. (Which, tangentially, makes me ask why pharmaceutical companies keep paying “expert consultants” to conduct training seminars on how to eliminate placebo effects … but that’s a rant for another day.)

Drummond ponders whether new technologies will be able to elicit placebo responses in patients, even in the complete absence of human-to-human interaction. I think the answer is “probably, somewhat”. We certainly have some evidence that physicians can increase placebo effects through explicit priming; it would seem logical that some of that work could be done by an iPad. Also, the part of the placebo effect that is patient-driven -- fed by their preexisting hopes and expectations -- would seem to be transferable to a non-personal interaction (after all, patients already derive placebic benefit from homeopathic and other ineffective over-the-counter cures with no physician, and minimal human, input).

The bottom line, I think, is this: we oversimplify the situation when we talk about “the” placebo effect. Placebo response in patients is a complex cluster of mechanisms, some or all of which are at play in each individual reaction. On the patient’s side, subconscious hope, conscious expectations, and learned associations are all in play, and may work with or against each other. The physician’s beliefs, transmitted through overt priming or subtle signals, can also work for or against the total placebo effect. There is even good evidence that placebo analgesia is produced through multiple distinct biochemical pathways, so proposing a single simple model to cover all placebo responses will be doomed to failure.

The consequence for clinical trialists? I do not think we need to start fretting over subliminal cues and secret subconscious signaling, but we do need to develop a more comprehensive method of measuring the impact of multiple environmental and patient factors in predicting response. The best way to accomplish this may be to implement prospective studies in parallel with existing treatment trials to get a clearer real-world picture of placebo response in action.
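As a sketch of what that kind of parallel measurement might look like, here is a toy regression of placebo-arm response on a handful of patient and encounter factors; the variable names and data are simulated stand-ins of my own invention, not anything drawn from the studies discussed above:

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(42)
n = 500

# Simulated placebo-arm data; column names are hypothetical stand-ins for
# "environmental and patient factors" (expectations, encounter quality, history)
df = pd.DataFrame({
    "baseline_expectation": rng.normal(5, 2, n),   # self-rated expectation of benefit
    "clinician_warmth":     rng.normal(3, 1, n),   # rated quality of the encounter
    "prior_treatments":     rng.poisson(2, n),     # count of previous therapies tried
})

# Fake outcome: symptom improvement partly driven by those factors, plus noise
df["improvement"] = (0.6 * df["baseline_expectation"]
                     + 0.8 * df["clinician_warmth"]
                     - 0.3 * df["prior_treatments"]
                     + rng.normal(0, 2, n))

# Ordinary least squares: which factors predict placebo-arm response, and how strongly?
model = smf.ols(
    "improvement ~ baseline_expectation + clinician_warmth + prior_treatments",
    data=df,
).fit()
print(model.summary())
```

In a real prospective study these covariates would be collected alongside an existing treatment trial (and a mixed or hierarchical model would probably be more appropriate), but the point is simply to treat placebo response as something to be predicted from several factors at once rather than as a single monolithic effect.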

[Image: "Extraction of the Stone of Folly", Hieronymus Bosch, by way of Wikimedia Commons]

ResearchBlogging.org Karin B. Jensen, Ted J. Kaptchuk, Irving Kirsch, Jacqueline Raicek, Kara M. Lindstrom, Chantal Berna, Randy L. Gollub, Martin Ingvar, & Jian Kong (2012). Nonconscious activation of placebo and nocebo pain responses. PNAS. DOI: 10.1073/pnas.1202056109

Monday, August 13, 2012

Most* Clinical Trials Are Too** Small

* for some value of "most"
** for some value of "too"


[Note: this is a companion to a previous post, Clouding the Debate on Clinical Trials: Pediatric Edition.]

Are many current clinical trials underpowered? That is, will they not enroll enough patients to adequately answer the research question they were designed to answer? Are we wasting time and money – and even worse, the time and effort of researchers and patient-volunteers – by conducting research that is essentially doomed to produce clinically useless results?

That is the alarming upshot of the coverage on a recent study published in the Journal of the American Medical Association. This Duke Medicine News article was the most damning in its denunciation of the current state of clinical research:
Duke: Mega-Trial experts concerned that not enough trials are mega-trials
Large-Scale Analysis Finds Majority of Clinical Trials Don't Provide Meaningful Evidence

The largest comprehensive analysis of ClinicalTrials.gov finds that clinical trials are falling short of producing high-quality evidence needed to guide medical decision-making.
The study was also covered in many industry publications, as well as the mainstream news. Those stories were less sweeping in their indictment of the "clinical trial enterprise", but carried the same main theme: that an "analysis" had determined that most current clinical trials were "too small".

I have only one quibble with this coverage: the study in question didn’t demonstrate any of these points. At all.

The study is a simple listing of gross characteristics of interventional trials registered over a 6-year period. It is purely descriptive, and limits itself to data entered by the trial sponsor as part of the registration on ClinicalTrials.gov. It contains no information on the quality of the trials themselves.

That last part can’t be emphasized enough: the study contains no quality benchmarks. No analysis of trial design. No benchmarking of the completeness or accuracy of the data collected. No assessment of the clinical utility of the evidence produced. Nothing like that at all.

So, the question that nags at me is: how did we get from A to B? How did this mildly-interesting-and-entirely-descriptive data listing transform into a wholesale (and entirely inaccurate) denunciation of clinical research?

For starters, the JAMA authors divide registered trials into 3 enrollment groups: 1-100, 101-1000, and >1000. I suppose this is fine, although it should be noted that it is entirely arbitrary – there is no particular reason to divide things up this way, except perhaps a fondness for neat round numbers.
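To see how much work those cut points do, here is a toy demonstration, using simulated enrollment numbers rather than the actual ClinicalTrials.gov data, of how the share of "small" trials shifts when the (equally arbitrary) lines are drawn elsewhere:

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
# Simulated enrollments with a long right tail, loosely mimicking a trial registry;
# these are NOT the actual ClinicalTrials.gov figures.
enrollment = pd.Series(np.round(rng.lognormal(mean=4.0, sigma=1.2, size=10_000)))

labels = ["small", "medium", "large"]
for cuts in ([0, 100, 1000, np.inf], [0, 50, 500, np.inf]):
    shares = pd.cut(enrollment, bins=cuts, labels=labels).value_counts(normalize=True)
    print(f"cut points {cuts[1:-1]}: {shares.round(2).to_dict()}")
```

The underlying distribution never changes; only the labels move.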

Trials within the first group are then labeled "small". No effort is made to explain why 100 patients represents a clinically important break point, but the authors feel confident enough to conclude that clinical research is "dominated by small clinical trials", because 62% of registered trials fit into this newly-invented category. From there, all you need is a completely vague yet ominous quote from the lead author. As US News put it:
The new report says 62 percent of the trials from 2007-2010 were small, with 100 or fewer participants. Only 4 percent had more than 1,000 participants.

"There are 330 new clinical trials being registered every week, and a number of them are very small and probably not as high quality as they could be," [lead author Dr Robert] Califf said.
"Probably not as high quality as they could be", while just vague enough to be unfalsifiable, is also not at all a consequence of the data as reported. So, through a chain of arbitrary decisions and innuendo, "less than 100" becomes "small" becomes "too small" becomes "of low quality".

Califf’s institution, Duke, appears to be particularly guilty of driving this evidence-free overinterpretation of the data, as seen in the sensationalistic headline and lede quoted above. However, it’s clear that Califf himself is blurring the distinction between what his study showed and what it didn’t:
"Analysis of the entire portfolio will enable the many entities in the clinical trials enterprise to examine their practices in comparison with others," says Califf. "For example, 96 percent of clinical trials have ≤1000 participants, and 62 percent have ≤ 100. While there are many excellent small clinical trials, these studies will not be able to inform patients, doctors, and consumers about the choices they must make to prevent and treat disease."
Maybe he’s right that these small studies will not be able to inform patients and doctors, but his study has provided absolutely no support for that statement.

When we build a protocol, there are actually only 3 major factors that go into determining how many patients we want to enroll (see the sketch after this list):
  1. How big a difference we estimate the intervention will have compared to a control (the effect size)
  2. How much risk we’ll accept that we’ll get a false-positive (alpha) or false-negative (beta) result
  3. Occasionally, whether we need to add participants to better characterize safety and tolerability (as is frequently, and quite reasonably, requested by FDA and other regulators)
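For illustration, here is a minimal sample-size sketch using the standard normal-approximation formula for comparing two means; the inputs (a standardized effect size of 0.5, two-sided alpha of 0.05, 80% power) are hypothetical, not drawn from any particular protocol:

```python
from math import ceil
from scipy.stats import norm

def per_arm_sample_size(effect_size, alpha=0.05, power=0.80):
    """Per-arm n for a two-arm comparison of means (normal approximation):
    n = 2 * (z_{1-alpha/2} + z_{1-beta})**2 / d**2, where d is the
    standardized effect size (Cohen's d)."""
    z_alpha = norm.ppf(1 - alpha / 2)   # controls the false-positive risk
    z_beta = norm.ppf(power)            # controls the false-negative risk
    return ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

# Illustrative inputs -- not taken from any real trial
n_per_arm = per_arm_sample_size(effect_size=0.5, alpha=0.05, power=0.80)
print(f"{n_per_arm} per arm, {2 * n_per_arm} total")   # 63 per arm, 126 total
```

Halve the assumed effect size and the required enrollment roughly quadruples; nothing in the calculation cares whether the answer happens to land above or below 100.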
Quantity is not quality: enrolling too many participants in an investigational trial is unethical and a waste of resources. If the numbers determine that we should randomize 80 patients, it would make absolutely no sense to randomize 21 more so that the trial is no longer "too small". Those 21 participants could be enrolled in another trial, to answer another worthwhile question.

So the answer to "how big should a trial be?" is "exactly as big as it needs to be." Taking descriptive statistics and applying normative categories to them is unhelpful, and does not make for better research policy.


ResearchBlogging.org Califf RM, Zarin DA, Kramer JM, Sherman RE, Aberle LH, & Tasneem A (2012). Characteristics of clinical trials registered in ClinicalTrials.gov, 2007-2010. JAMA, 307(17), 1838-47. PMID: 22550198

Wednesday, August 8, 2012

Testing Transparency with the TEST Act

A quick update on my last post regarding the enormously controversial -- but completely unmentioned -- requirement to publicly report all versions of clinical trial protocols on ClinicalTrials.gov: The New England Journal of Medicine has weighed in with an editorial strongly in support of the TEST Act. 

NEJM Editor-in-Chief Jeffrey Drazen at least mentions the supporting documents requirement, but only in part of one sentence, where he confusingly refers to the act "extending results reporting to include the deposition of consent and protocol documents approved by institutional review boards." The word "deposition" does not suggest actual publication, which the act clearly requires. 

I don't think this does much to improve transparency about the impact the TEST Act, as written, would have. I'm not surprised when a trade publication like Center Watch recycles a press release into a news item. However, it wouldn't seem like too much to ask that NEJM editorials aspire to a moderately higher standard of critical inquiry.

Monday, August 6, 2012

Public Protocols? Burying the lede on the TEST Act

Not to be confused with the Test Act.
(via Luminarium)
Four Democratic members of Congress recently co-sponsored the TEST (Trial and Experimental Studies Transparency) Act, which is intended to expand the scope of mandatory registration of clinical trials. Coverage so far has been light, and mainly consists of uncritical recycling of the press release put out by Congressman Markey’s office.

Which is unfortunate, because nowhere in that release is there a single mention of the bill’s most controversial feature: publication of clinical trial "supporting documents", including the patient’s Informed Consent Form (ICF) and, incredibly, the entire protocol (including any and all subsequent amendments to the protocol).

How Rep. Markey and colleagues managed to put out a 1,000-word press release without mentioning this detail is nothing short of remarkable. Is the intent to try to sneak this through?

Full public posting of every clinical trial protocol would represent an enormous shift in how R&D is conducted in this country (and, therefore, in the entire world). It would radically alter the dynamics of how pharmaceutical companies operate by ripping out a giant chunk of every company’s proprietary investment – essentially, confiscating and nationalizing their intellectual property. 

Maybe, ultimately, that would be a good thing.  But that’s by no means clear ... and quite likely not true. Either way, however, this is not the kind of thing you bury in legislation and hope no one notices.

[Full text of the bill is here (PDF).]

[UPDATE May 17, 2013: Apparently, the irony of not being transparent with the contents of your transparency law was just too delicious to pass up, as Markey and his co-sponsors reintroduced the bill yesterday. Once again, the updated press release makes no mention of the protocol requirement.]