
Wednesday, May 15, 2013

Placebos: Banned in Helsinki?


One of the unintended consequences of my (admittedly, somewhat impulsive) decision to name this blog as I did is that I get a fair bit of traffic from Google: people searching for placebo-related information.

Some recent searches have been about the proposed new revisions to the Declaration of Helsinki, and how the new draft version will prohibit or restrict the use of placebo controls in clinical trials. This was a bit puzzling, given that the publicly-released draft revisions [PDF] didn't appear to substantially change the DoH's placebo section.

Much of the confusion appears to be caused by a couple of sources. First, the popular Pharmalot blog (whose approach to critical analysis I've noted before as being ... well ... occasionally unenthusiastic) covered it thus:
The draft, which was released earlier this week, is designed to update a version that was adopted in 2008 and many of the changes focus on the use of placebos. For instance, placebos are only permitted when no proven intervention exists; patients will not be subject to any risk or there must be ‘compelling and sound methodological reasons’ for using a placebo or less effective treatment.
This isn't a good summary of the changes, since the “for instance” items are for the most part slight re-wordings from the 2008 version, which itself didn't change much from the version adopted in 2000.

To see what I mean, take a look at the change-tracked version of the placebo section:
The benefits, risks, burdens and effectiveness of a new intervention must be tested against those of the best current proven intervention(s), except in the following circumstances: 
The use of placebo, or no treatment intervention is acceptable in studies where no current proven intervention exists; or 
Where for compelling and scientifically sound methodological reasons the use of any intervention less effective than the best proven one, placebo or no treatment is necessary to determine the efficacy or safety of an intervention 
and the patients who receive any intervention less effective than the best proven one, placebo or no treatment will not be subject to any additional risks of serious or irreversible harm as a result of not receiving the best proven intervention 
Extreme care must be taken to avoid abuse of this option.
Really, there is only one significant change to this section: the strengthening of the reference to “best proven intervention”. The phrase was already in the first sentence, and it has now been added to sentences 3 and 4 as well. This is a reference to the use of active (non-placebo) comparators that are not the “best proven” intervention.

So, ironically, the biggest change to the placebo section is not about placebos at all.

This is a bit unfortunate, because to me it detracts from the overall clarity of the section, which is no longer exclusively about placebo despite still being titled “Use of Placebo”. The DoH has been consistently criticized during previous rounds of revision for becoming progressively less organized and coherently structured, and this section certainly reads like a rambling list of semi-related thoughts: a classic “document by committee”. That lack of structure and clarity certainly hurts the DoH's effectiveness in shaping the world's approach to ethical clinical research.

Even worse, the revisions continue to leave unresolved the very real divisions that exist in ethical beliefs about placebo use in trials. The really dramatic changes to the placebo section happened over a decade ago, in the 2000 revision. Those changes, which introduced much of the strict wording in the current version, were extremely controversial, and resulted in the issuance of an extraordinary “Note of Clarification” that effectively softened the new, inflexible language. The 2008 version absorbed the wording from the Note of Clarification, and the resulting document is now vague enough that it is interpreted quite differently in different countries. (For more on the revision history and controversy, see this comprehensive review.)

The 2013 revision could have been an opportunity to try again to build a consensus around placebo use. At the very least, it could have acknowledged and clarified the division of beliefs on the topic. Instead, it sticks to its ambiguous phrasing, which will continue to support multiple conflicting interpretations. This does not serve the goal of ensuring the ethical conduct of clinical trials.

Ezekiel Emanuel has been a long-time critic of the DoH's lack of clarity and structure. Earlier this month, he published a compact but forceful review of the ways in which the Declaration has been weakened by its long series of revisions:
Over the years problems with, and objections to, the document have accumulated. I propose that there are nine distinct problems with the current version of the Declaration of Helsinki: it has an incoherent structure; it confuses medical care and research; it addresses the wrong audience; it makes extraneous ethical provisions; it includes contradictions; it contains unnecessary repetitions; it uses multiple and poor phrasings; it includes excessive details; and it makes unjustified, unethical recommendations.
Importantly, Emanuel also includes a proposed revision and restructuring of the DoH. In his version, much of the current wording around placebo use is retained, but it is absorbed into the larger concept of “Scientific Validity”, which adds important context to the choice of a comparator arm in general.

Here is Emanuel's suggested revision:
Scientific Validity:  Research in biomedical and other sciences involving human participants must conform to generally accepted scientific principles, be based on a thorough knowledge of the scientific literature, other relevant sources of information, and suitable laboratory, and as necessary, animal experimentation.  Research must be conducted in a manner that will produce reliable and valid data.  To produce meaningful and valid data new interventions should be tested against the best current proven intervention. Sometimes it will be appropriate to test new interventions against placebo, or no treatment, when there is no current proven intervention or, where for compelling and scientifically sound methodological reasons the use of placebo is necessary to determine the efficacy and/or safety of an intervention and the patients who receive placebo, or no treatment, will not be subject to excessive risk or serious irreversible harm.  This option should not be abused.
Here, the scientific rationale for the use of placebo is placed in the greater context of selecting a control arm, which is itself subservient to the ethical imperative to only conduct studies that are scientifically valid. One can quibble with the wording (I still have issues with the use of “best proven” interventions, which I think is much too undefined here, as it is in the DoH, and glosses over some significant problems), but structurally this is a lot stronger, and provides firmer grounding for ethical decision making.

Emanuel, E. (2013). Reconsidering the Declaration of Helsinki. The Lancet, 381(9877), 1532–1533. DOI: 10.1016/S0140-6736(13)60970-8






[Image: Extra-strength chill pill, modified by the author, based on an original image by Flickr user mirjoran.]

Friday, September 14, 2012

Clinical trials: recent reading recommendations

My recommended reading list -- highlights from the past week:


Absolutely required reading for anyone who designs protocols or is engaged in recruiting patients into clinical trials: Susan Gubar writes eloquently about her experiences as a participant in cancer clinical trials.
New York Times Well Blog: The Trials of Cancer Trials
[Image: Today's #FDAFridayPhoto features Harvey Wiley, leader of the famed FDA "Poison Squad".]

The popular press in India continues to be disingenuous and exploitative in its coverage of clinical trial deaths in that country. (My previous thoughts on that are here.) Kiran Mazumdar-Shaw, an industry leader, has put together an intelligent and articulate antidote.
The Economic Times: Need a rational view on clinical trials


Rahlyn Gossen exhibits mastery of the understatement: “Though the Facebook Insights dashboard is a great place to start, it has a few significant disadvantages.” She also provides a good overview of the most common pitfalls you’ll encounter when you try to get good metrics out of your Facebook campaign.


I have not had a chance to watch it yet, but I’m excited to see that theHeart.org has just posted a 7-part video editorial series by Yale’s Harlan Krumholz and Stanford’s Bob Harrington on “a frank discussion on the controversies in the world of clinical trials”.

Monday, August 27, 2012

"Guinea Pigs" on CBS is Going to be Super Great, I Can Just Tell


An open letter to Mad Men producer/writer Dahvi Waller

Dear Dahvi,

I just wanted to drop you a quick note of congratulations when I heard through the grapevine that CBS has signed you on to do a pilot episode of your new medical drama, Guinea Pigs (well actually, I heard it from the Hollywood Reporter; the grapevine doesn’t tell me squat). According to the news item,
The drama centers on a group of trailblazing doctors who run clinical trials at a hospital in Philadelphia. The twist: The trials are risky, and the guinea pigs are human.
[Image: Probably just like this, but with a bigger body count.]
(Sidenote: that’s quite the twist there! For a minute, I thought this was going to be the first ever rodent-based prime time series!)

I don’t want to take up too much of your time. I’m sure you’re extremely busy with lots of critical casting decisions, like: will the Evil Big Pharma character be a blonde, beautiful-but-treacherous Ice Queen type in her early 30s, or an expensively-suited, handsome-but-treacherous Gordon Gekko type in his early 60s? (My advice: Don’t settle! Use both! Viewers of all ages can love to hate the pharmaceutical industry!)

About that name, by the way: great choice! I’m really glad you didn’t overthink that one. A good writer should go with her gut and pick the first easy stereotype that pops into her head. (Because the head is never closer to the gut than when it’s jammed firmly up … but I don’t have to explain anatomy to you! You write a medical drama for television!)

I’m sure the couple-three million Americans who enroll in clinical trials each year will totally relate to your calling them guinea pigs. In our industry, we call them heroes, but that’s just corny, right? Real heroes on TV are people with magic powers, not people who contribute to the advancement of medicine.

Anyway, I’m just really excited because our industry is just so, well … boring! We’re so fixated on data collection regulations and safety monitoring and ethics committee reviews and yada yada yada – ugh! Did you know we waste 5 to 10 years on this stuff, painstakingly bringing drugs through multiple graduated phases of testing in order to produce a mountain of data (sometimes running over 100,000 pages long) for the FDA to review?

[Image: Dahvi Waller: bringing CSI to clinical research.]
I’m sure you’ll be giving us the full CSI-meets-Constant-Gardener treatment, though, and it will all seem so incredibly easy that your viewers will wonder what the hell is taking us so long to make these great new medicines. (Good mid-season plot point: we have the cure for most diseases already, but they’ve been suppressed by a massive conspiracy of sleazy corporations, corrupt politicians, and inept bureaucrats!)

Anyway, best of luck to you! I can't wait to see how accurately and respectfully you treat the work of the research biologists and chemists, physician investigators, nurses, study coordinators, monitors, reviewers, auditors, and patient volunteers (sorry: guinea pigs) who are working hard to ensure the next generation of medicines is safe and effective. What can go wrong? It's television!




Tuesday, July 31, 2012

Clouding the Debate on Clinical Trials: Pediatric Edition

I would like to propose a rule for clinical trial benchmarks. This rule may appear so blindingly obvious that I run the risk of seeming simple-minded and naïve for even bringing it up.

The rule is this: if you’re going to introduce a benchmark for clinical trial design or conduct, explain its value.

Are we not putting enough resources into pediatric research, or have we over-incentivized risky experimentation on a vulnerable population?  This is a critically important question in desperate need of more data and thoughtful analysis.
That’s it. Just a paragraph explaining why you’ve chosen to measure what you’re measuring. Extra credit if you compare it to other benchmarks you could have used, or consider the limitations of your new metric.

I would feel bad for bringing this up, were it not for two recent articles in major publications that completely fail to live up to this standard. I’ll cover one today and one tomorrow.

The first is a recent article in Pediatrics, "Pediatric Versus Adult Drug Trials for Conditions With High Pediatric Disease Burden", which has received a fair bit of attention in the industry -- mostly due to Reuters uncritically recycling the authors’ press release.

It’s worth noting that the claim made in the release title, "Drug safety and efficacy in children is rarely addressed in drug trials for major diseases", is not at all supported by any data in the study itself. However, I suppose I can live with misleading PR.  What is frustrating is the inadequacy of the measures the authors use in the actual study, and the complete lack of discussion about them.

To benchmark where pediatric drug research should be, they use the proportion of total "burden of disease" borne by children.   Using WHO estimates, they look at the ratio of burden (measured, essentially, in years of total disability) between children and adults.  This burden is further divided into high-income countries and low/middle-income countries.
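To make the benchmark concrete, here is a minimal sketch of the computation the authors describe. The function name and the DALY figures are mine, invented purely for illustration; the study itself works from WHO estimates.

```python
# Minimal sketch of the burden-of-disease benchmark described above.
# All figures are invented for illustration; the study uses WHO estimates.

def pediatric_burden_share(pediatric_dalys: float, adult_dalys: float) -> float:
    """Fraction of a condition's total burden (in DALYs) borne by children."""
    return pediatric_dalys / (pediatric_dalys + adult_dalys)

# Hypothetical condition: 45 million pediatric DALYs vs. 5 million adult
# DALYs, i.e., a roughly 90%-pediatric burden profile.
share = pediatric_burden_share(45e6, 5e6)
print(f"Pediatric share of burden: {share:.0%}")  # -> 90%
```

The study's implicit expectation is that the share of trials that are pediatric should roughly track this share of burden.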

This has some surface plausibility, but presents a host of issues.  Simply looking at the relative prevalence of a condition does not really give us any insights into what we need to study about treatment.  For example: number 2 on the list for middle/low income diseases is diarrheal illness, where WHO lists the burden of disease as 90% pediatric.  There is no question that diarrheal diseases take a terrible toll on children in developing countries.  We absolutely need to focus resources on improving prevention and treatment: what we do not particularly need is more clinical trials.  As the very first bullet on the WHO fact sheet points out, diarrheal diseases are preventable and treatable.  Prevention is mostly about improving the quality of water and food supplies – this is vitally important stuff, but it has nothing to do with pharmaceutical R&D.

In the US, the NIH’s National Institute of Child Health and Human Development (NICHD) has a rigorous process for identifying and prioritizing needs in pediatric drug development, as mandated by the Best Pharmaceuticals for Children Act (BPCA). It is worth noting that only 2 of the top 5 diseases in the Pediatrics article make the cut among the 41 highest-priority areas on the NICHD’s list for 2011.

(I don’t think the numbers as calculated by the authors are even convincing on their own terms: 3 of the 5 "high burden" diseases in wealthy countries (bipolar disorder, depression, and schizophrenia) are extremely rare in very young children, and only make this list because of their increasing incidence in adolescence. If our objective is to focus on how these drugs may work differently in developing children, then why wouldn’t we put greater emphasis on the youngest cohorts?)

Of course, just because a new benchmark is at odds with other benchmarks doesn’t necessarily mean that it’s wrong. But it does mean that the benchmark requires some rigorous vetting before it’s used. The authors make no attempt to explain why we should use their metric, except to say it’s "apt". The only support provided is a pair of footnotes; one of those, ironically, is to this article from 1999, which contains a direct warning against their approach:
Our data demonstrate how policy makers could be misled by using a single measure of the burden of disease, because the ranking of diseases according to their burden varies with the different measures used.
If we’re going to make any progress in solving the problems in drug development – and I think we have a number of problems that need solving – we have got to start raising our standards for our own metrics.

Are we not putting enough resources into pediatric research, or have we over-incentivized risky experimentation on a vulnerable population? This is a critically important question in desperate need of more data and thoughtful analysis. Unfortunately, this study adds more noise than insight to the debate.

In a couple of weeks, I’ll cover the allegations about too many trials being too small. [Update: "tomorrow" took a little longer than expected. The follow-up post is here.]

[Note: the Pediatrics article also uses another metric, "Percentage of Trials that Are Pediatric", as a proxy for the amount of research effort being expended. For space reasons, I’m not going to go into that one, but it’s every bit as unhelpful as the pediatric burden metric.]

Bourgeois, F.T., Murthy, S., Pinto, C., Olson, K.L., Ioannidis, J.P., & Mandl, K.D. (2012). Pediatric Versus Adult Drug Trials for Conditions With High Pediatric Disease Burden. Pediatrics. PMID: 22826574

Tuesday, July 24, 2012

How Not to Report Clinical Trial Data: a Clear Example

I know it’s not even August yet, but I think we can close the nominations for "Worst Trial Metric of the Year". The hands-down winner is Pharmalot, for the thoughtless publication of this article reviewing "Deaths During Clinical Trials" per year in India. We’ll call it the Pharmalot Death Count, or PDC, and it’s easy to explain: it's just the total number of patients who died while enrolled in any clinical trial, regardless of cause, reported as though it were an actual, meaningful number.

(To make this even more execrable, Pharmalot actually calls this "Deaths attributed to clinical trials" in his opening sentence, although the actual data have exactly nothing to do with how the deaths were attributed.)

In fairness, Pharmalot is really only sharing the honors with a group of sensationalistic journalists in India who have jumped on these numbers. But Pharmalot has a much wider readership within the research community, and could at least have attempted to critically assess the data before repeating it (along with criticism from "experts").

The number of things wrong with this metric is a bit overwhelming.  I’m not even sure where to start.  Some of the obvious issues here:

1. No separation of trial-related versus non-trial-related deaths. Some effort is made to explain that it can be difficult to determine whether a particular death was related to the study drug. But that obscures the fact that the PDC lumps together all deaths, whether or not the patient ever took an experimental medication. That means the PDC includes:
  • Patients in control arms receiving standard of care and/or placebo, who died during the course of their trial.
  • Patients whose deaths were entirely unrelated to their illness (e.g., automobile accident victims)
2. No base rates. When a raw death total is presented, a number of obvious questions should come to mind: How many patients were in the trials? How many deaths were there among patients with similar diseases who were not in trials? The PDC doesn’t care about that kind of context (see the sketch after this list).

3. No sensitivity to trial design.  Many late-stage cancer clinical trials use Overall Survival (OS) as their primary endpoint – patients are literally in the trial until they die.  This isn’t considered unethical; it’s considered the gold standard of evidence in oncology.  If we ran shorter, less thorough trials, we could greatly reduce the PDC – would that be good for anyone?
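To see why the missing base rates (point 2 above) matter, here is a minimal sketch of the comparison the PDC never makes. Every number below is invented for illustration; neither population is drawn from the Indian data.

```python
# Why a raw death count (the PDC) is uninterpretable without denominators
# and comparison rates. All numbers are invented for illustration.

def deaths_per_1000_patient_years(deaths: int, patient_years: float) -> float:
    """Mortality expressed against the time patients were actually observed."""
    return 1000 * deaths / patient_years

# Hypothetical in-trial population: 40,000 patient-years of follow-up
# among seriously ill trial participants, with 400 deaths.
trial_rate = deaths_per_1000_patient_years(400, 40_000)       # 10.0

# Hypothetical comparison: similar patients on standard of care outside
# any trial, with 600 deaths over 50,000 patient-years.
background_rate = deaths_per_1000_patient_years(600, 50_000)  # 12.0

print(f"In-trial:   {trial_rate:.1f} deaths per 1,000 patient-years")
print(f"Background: {background_rate:.1f} deaths per 1,000 patient-years")

# The raw count (400) sounds alarming on its own; only the comparison of
# rates says anything about whether being in a trial added risk.
```

A raw count supports whatever narrative a headline writer likes; the rate comparison is the only number that addresses the actual question.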

[Image: Case study: Zelboraf. FDA: "Highly effective, more personalized therapy." PDC: "199 deaths attributed to Zelboraf trial!"]
There is a fair body of evidence that participants in clinical trials fare about the same as (or possibly a bit better than) similar patients receiving standard-of-care therapy. However, much of that evidence was accumulated in Western countries: it is fair to ask whether patients in India and other countries receive a similar benefit. The PDC adds nothing to our ability to answer that question.

So, for publicizing a metric that has zero utility, and using it to cast aspersions on the ethics of researchers, we congratulate Pharmalot and the PDC.