
Monday, November 21, 2016

The first paid research subject in written history?

On this date 349 years ago, Samuel Pepys relates in his famous diary a remarkable story about an upcoming medical experiment. As far as I can tell, this is the first written description of a paid research subject.

According to his account, the man (whom he describes as “a little frantic”) was to be paid to undergo a blood transfusion from a sheep. It was hypothesized that the blood of this calm and docile animal would help to calm the man.

Some interesting things to note about this experiment:
  • Equipoise. There is explicit disagreement about what effect the experimental treatment will have: according to Pepys, "some think it may have a good effect upon him as a frantic man by cooling his blood, others that it will not have any effect at all".
  • Results published. An account of the experiment was published just two weeks later in the journal Philosophical Transactions.
  • Medical Privacy. In this subsequent write-up, the research subject is identified as Arthur Coga, a former Cambridge divinity student. According to at least one account, being publicly identified had a bad effect on Coga, as people who had heard of him allegedly succeeded in getting him to spend his stipend on drink (though no sources are provided to confirm this story).
  • Patient Reported Outcome. Coga was apparently chosen because, although mentally ill, he was still considered educated enough to give an accurate description of the treatment effect. 
Depending on your perspective, this may also be a very early account of the placebo effect, or a classic case of ignoring the patient’s experience: even though his report was positive, the clinicians remained skeptical. From the journal article:
The Man after this operation, as well as in it, found himself very well, and hath given in his own Narrative under his own hand, enlarging more upon the benefit, he thinks, he hath received by it, than we think fit to own as yet.
…and in fact, a subsequent diary entry from Pepys mentions meeting Coga, with similarly mixed impressions: “he finds himself much better since, and as a new man, but he is cracked a little in his head”.

The amount Coga was paid for his participation? Twenty shillings – at the time, that was exactly one Guinea.

[Image credit: Wellcome Images]




Friday, September 21, 2012

Trials in Alzheimer's Disease: The Long Road Ahead

Placebo Control is going purple today in support of Alzheimer’s Action Day.

A couple of clinical trial-related thoughts on the ongoing struggle to find even one effective therapy (currently approved drugs show some ability to slow the progression of AD, but not to stop it, much less reverse it):
  • The headlines so far this year have been dominated by the high-profile and incredibly expensive failures of bapineuzumab and solanezumab. However, these two are just the most recent of a long series of failures: a recent industry report tallies 101 investigational drugs that have failed clinical trials or been suspended in development since 1998, against only 3 successes, an astonishing and painful 34:1 failure rate.

  • While we are big fans of the Alzheimer’s Association (just down the street from Placebo HQ here in Chicago) and the Alzheimer’s Foundation of America, it’s important to stress that the single most important contribution that patients and caregivers can make is to get involved in a clinical trial. That same report lists 93 new treatments currently being evaluated.  As of today, the US clinical trials registry lists 124 open trials for AD.  Many of these studies require only a few hundred participants, so each individual decision to enroll is important and immediately visible.

  • While all research is important, I want to single out the phenomenal work being done by ADNI, the Alzheimer’s Disease Neuroimaging Initiative. This is a public/private partnership that is collecting a vast amount of data – blood, cerebrospinal fluid, MRIs, and PET scans – on hundreds of AD patients and matched controls. Best of all, all of the data collected is published in a free, public database hosted by UCLA. Additional funding has recently led to the development of the ADNI-2 study, which will enroll 550 more participants.
Without a doubt, finding and testing effective medications for Alzheimer's Disease is going to take many more years of hard, frustrating work. It will be a path littered with many more failures and therapeutic dead-ends. Today's a good day to stop and recognize that fact, and strengthen our resolve to work together to end this disease.

Friday, September 14, 2012

Clinical trials: recent reading recommendations

My recommended reading list -- highlights from the past week:


Absolute required reading for anyone who designs protocols or is engaged in recruiting patients into clinical trials: Susan Guber writes eloquently about her experiences as a participant in cancer clinical trials.
New York Times Well Blog: The Trials of Cancer Trials
Today's #FDAFridayPhoto features Harvey Wiley, leader of the famed FDA "Poison Squad".

The popular press in India continues to be disingenuous and exploitative in its coverage of clinical trial deaths in that country. (My previous thoughts on that are here.) Kiran Mazumdar-Shaw, an industry leader, has put together an intelligent and articulate antidote.
The Economic Times: Need a rational view on clinical trials


Rahlen Gossen exhibits mastery of the understatement: “Though the Facebook Insights dashboard is a great place to start, it has a few significant disadvantages.” She also provides a good overview of the most common pitfalls you’ll encounter when you try to get good metrics out of your Facebook campaign. 


I have not had a chance to watch it yet, but I’m excited to see that theHeart.org has just posted a 7-part video editorial series by Yale’s Harlan Krumholz and Stanford’s Bob Harrington on “a frank discussion on the controversies in the world of clinical trials”. 

Monday, August 27, 2012

"Guinea Pigs" on CBS is Going to be Super Great, I Can Just Tell


An open letter to Mad Men producer/writer Dahvi Waller

Dear Dahvi,

I just wanted to drop you a quick note of congratulations when I heard through the grapevine that CBS has signed you on to do a pilot episode of your new medical drama, Guinea Pigs (well actually, I heard it from the Hollywood Reporter; the grapevine doesn’t tell me squat). According to the news item,
The drama centers on a group of trailblazing doctors who run clinical trials at a hospital in Philadelphia. The twist: The trials are risky, and the guinea pigs are human.
Probably just like this, but with a bigger body count.
(Sidenote: that’s quite the twist there! For a minute, I thought this was going to be the first ever rodent-based prime time series!)

I don’t want to take up too much of your time. I’m sure you’re extremely busy with lots of critical casting decisions, like: will the Evil Big Pharma character be a blonde, beautiful-but-treacherous Ice Queen type in her early 30’s, or an expensively-suited, handsome-but-treacherous Gordon Gekko type in his early 60’s? (My advice: Don’t settle!  Use both! Viewers of all ages can love to hate the pharmaceutical industry!)

About that name, by the way: great choice! I’m really glad you didn’t overthink that one. A good writer should go with her gut and pick the first easy stereotype that pops into her head. (Because the head is never closer to the gut than when it’s jammed firmly up … but I don’t have to explain anatomy to you! You write a medical drama for television!)

I’m sure the couple-three million Americans who enroll in clinical trials each year will totally relate to your calling them guinea pigs. In our industry, we call them heroes, but that’s just corny, right? Real heroes on TV are people with magic powers, not people who contribute to the advancement of medicine.

Anyway, I’m just really excited because our industry is just so, well … boring! We’re so fixated on data collection regulations and safety monitoring and ethics committee reviews and yada yada yada – ugh! Did you know we waste 5 to 10 years on this stuff, painstakingly bringing drugs through multiple graduated phases of testing in order to produce a mountain of data (sometimes running over 100,000 pages long) for the FDA to review?

Dahvi Waller: bringing CSI to clinical research
I’m sure you’ll be giving us the full CSI-meets-Constant-Gardener treatment, though, and it will all seem so incredibly easy that your viewers will wonder what the hell is taking us so long to make these great new medicines. (Good mid-season plot point: we have the cure for most diseases already, but they’ve been suppressed by a massive conspiracy of sleazy corporations, corrupt politicians, and inept bureaucrats!)

Anyway, best of luck to you! I can't wait to see how accurately and respectfully you treat the work of the research biologists and chemists, physician investigators, nurses, study coordinators, monitors, reviewers, auditors, and patient volunteers (sorry, guinea pigs) who are working hard to ensure the next generation of medicines are safe and effective.  What can go wrong? It's television!




Wednesday, August 22, 2012

The Case against Randomized Trials is, Fittingly, Anecdotal


I have a lot of respect for Eric Topol, and am a huge fan of his ongoing work to bring new mobile technology to benefit patients.

The Trial of the Future
However, I am simply baffled by this short video he recently posted on his Medscape blog. In it, he argues against the continued use of randomized controlled trials (RCTs) to provide evidence for or against new drugs.

His argument for this is two anecdotes: one negative, one positive. The negative anecdote is about the recently approved drug for melanoma, Zelboraf:
Well, that's great if one can do [RCTs], but often we're talking about needing thousands, if not tens of thousands, of patients for these types of clinical trials. And things are changing so fast with respect to medicine and, for example, genomically guided interventions that it's going to become increasingly difficult to justify these very large clinical trials. 
For example, there was a drug trial for melanoma and the mutation of BRAF, which is the gene that is found in about 60% of people with malignant melanoma. When that trial was done, there was a placebo control, and there was a big ethical charge asking whether it is justifiable to have a body count. This was a matched drug for the biology underpinning metastatic melanoma, which is essentially a fatal condition within 1 year, and researchers were giving some individuals a placebo.
First and foremost, this is simply factually incorrect on a couple of extremely important points.

  1. Zelboraf was not approved based on any placebo-controlled trials. The phase 1 and phase 2 trials were both single-arm, open label studies. The only phase 3 trial run before FDA approval used dacarbazine in the comparator arm. In fact, of the 34 trials currently listed for Zelboraf on ClinicalTrials.gov, only one has a placebo control: it’s an adjuvant trial for patients whose melanoma has been completely resected, where no treatment may very well be the best option.
  2. The Zelboraf trials are not an example of “needing thousands, if not tens of thousands, of patients” for approval. The phase 3 trial enrolled 675 patients. Even adding the phase 1 and 2 trials doesn’t get us to 1000 patients.

Correcting these details takes a lot away from the power of this single drug to be a good example of why we should stop using “the sanctimonious [sic] randomized, placebo-controlled clinical trial”.

The second anecdote is about a novel Alzheimer’s Disease candidate:
A remarkable example of a trial of the future was announced in May. For this trial, the National Institutes of Health is working with [Banner Alzheimer's Institute] in Arizona, the University of Antioquia in Colombia, and Genentech to have a specific mutation studied in a large extended family living in the country of Colombia in South America. There is a family of 8000 individuals who have the so-called Paisa mutation, a presenilin gene mutation, which results in every member of this family developing dementia in their 40s. 
Researchers will be testing a drug that binds amyloid, a monoclonal antibody, in just 300 family members. They're not following these patients out to the point of where they get dementia. Instead, they are using surrogate markers to see whether or not the process of developing Alzheimer's can be blocked using this drug. This is an exciting way in which we can study treatments that can potentially prevent Alzheimer's in a very well-demarcated, very restricted population with a genetic defect, and then branch out to a much broader population of people who are at risk for Alzheimer's. These are the types of trials of the future. 
There are some additional disturbing factual errors here – the extended family numbers about 5,000, not 8,000. And estimates of the prevalence of the mutation within that family appear to vary from about one-third to one-half, so it’s simply wrong to state that “every member of this family” will develop dementia.

However, those errors are relatively minor, and are completely overshadowed by the massive irony that this is a randomized, placebo-controlled trial. Only 100 of the 300 trial participants will receive the active study drug, crenezumab. The other 200 will be on placebo.

And so, the “trial of the future” held up as a way to get us out of using randomized, placebo-controlled trials is actually a randomized, placebo-controlled trial itself. I hope you can understand why I’m completely baffled that Topol thinks this is evidence of anything.

Finally, I have to ask: how is this the trial of the future, anyway? It is a short-term study on a highly-selected patient population with a specific genetic profile, measuring surrogate markers to provide proof of concept for later, larger studies. Is it just me, or does that sound exactly like the early lovastatin trials of the mid-1980’s, which tested cholesterol reduction in a small population of patients with severe heterozygous familial hypercholesterolemia? Back to the Future, indeed.


[Image: time-travelling supercar courtesy of Flickr user JoshBerglund19.]

Tuesday, July 24, 2012

How Not to Report Clinical Trial Data: a Clear Example

I know it’s not even August yet, but I think we can close the nominations for "Worst Trial Metric of the Year".  The hands-down winner is Pharmalot, for the thoughtless publication of this article reviewing "Deaths During Clinical Trials" per year in India.  We’ll call it the Pharmalot Death Count, or PDC, and it’s easy to explain – it's just the total number of patients who died while enrolled in any clinical trial, regardless of cause, and reported as though it were an actual meaningful number.

(To make this even more execrable, Pharmalot actually calls this "Deaths attributed to clinical trials" in his opening sentence, although the actual data has exactly nothing to do with the attribution of the death.)

In fairness, Pharmalot is really only sharing the honors with a group of sensationalistic journalists in India who have jumped on these numbers.  But it has a much wider readership within the research community, and could have at least attempted to critically assess the data before repeating it (along with criticism from "experts").

The number of things wrong with this metric is a bit overwhelming.  I’m not even sure where to start.  Some of the obvious issues here:

1. No separation of trial-related versus non-trial-related deaths.  Some effort is made to explain that there may be difficulty in determining whether a particular death was related to the study drug or not.  However, that obscures the fact that the PDC lumps together all deaths, whether or not the patient ever received an experimental medication. That means the PDC includes:
  • Patients in control arms receiving standard of care and/or placebo, who died during the course of their trial.
  • Patients whose deaths were entirely unrelated to their illness (e.g., automobile accident victims).
2. No base rates.  When a raw death total is presented, a number of obvious questions should come to mind: how many patients were in the trials?  How many deaths were there among patients with similar diseases who were not in trials?  The PDC doesn’t care about that kind of context (a toy numerical illustration of the problem follows this list).

3. No sensitivity to trial design.  Many late-stage cancer clinical trials use Overall Survival (OS) as their primary endpoint – patients are literally in the trial until they die.  This isn’t considered unethical; it’s considered the gold standard of evidence in oncology.  If we ran shorter, less thorough trials, we could greatly reduce the PDC – would that be good for anyone?
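To make the base-rate problem concrete, here is a minimal Python sketch. Every number in it is invented for illustration; none of the enrollment figures or death counts below comes from the article or from any registry.

```python
# Toy illustration of why a raw death count (the "PDC") is uninterpretable
# without denominators and a comparison group. All numbers are hypothetical.

trial_patients = 50_000      # assumed number of patients enrolled in trials
trial_deaths = 500           # assumed deaths among them: all causes, all arms

nontrial_patients = 50_000   # similar patients treated outside of trials
nontrial_deaths = 600        # assumed deaths among them over the same period

trial_rate = trial_deaths / trial_patients
nontrial_rate = nontrial_deaths / nontrial_patients

print(f"Raw death count in trials:  {trial_deaths}")   # sounds alarming on its own
print(f"Death rate in trials:       {trial_rate:.1%}")
print(f"Death rate outside trials:  {nontrial_rate:.1%}")

# With these made-up inputs, trial participants die slightly *less* often than
# comparable patients outside trials, even though the headline number
# "500 deaths in clinical trials" looks damning in isolation.
```

The particular values are beside the point; the point is that the comparison cannot even be attempted from a raw count, which is all the PDC provides.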

Case Study: Zelboraf
FDA: "Highly effective, more personalized therapy"
PDC: "199 deaths attributed to Zelboraf trial!"
There is a fair body of evidence that participants in clinical trials fare about the same as (or possibly a bit better than) similar patients receiving standard of care therapy.  However, much of that evidence was accumulated in western countries: it is a fair question to ask if patients in India and other countries receive a similar benefit.  The PDC, however, adds nothing to our ability to answer that question.

So, for publicizing a metric that has zero utility, and using it to cast aspersions on the ethics of researchers, we congratulate Pharmalot and the PDC.

Tuesday, July 10, 2012

Why Study Anything When You Already Know Everything?

If you’re a human being, in possession of one working, standard-issue human brain (and, for the remainder of this post, I’m going to assume you are), it is inevitable that you will fall victim to a wide variety of cognitive biases and mistakes.  Many of these biases result in our feeling much more certain about our knowledge of the world than we have any rational grounds for: from the Availability Heuristic, to the Dunning-Kruger Effect, to Confirmation Bias, there is an increasingly-well-documented system of ways in which we (and yes, that even includes you) become overconfident in our own judgment.

Over the years, scientists have developed a number of tools to help us overcome these biases in order to better understand the world.  In the biological sciences, one of our best tools is the randomized controlled trial (RCT).  In fact, randomization helps minimize biases so well that randomized trials have been suggested as a means of developing better governmental policy.
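As a side note, the bias-minimizing property of randomization is easy to see in a toy simulation. The short Python sketch below is purely illustrative (made-up patients, a made-up prognostic factor): simple 1:1 random assignment balances an unmeasured factor across arms without anyone needing to know the factor exists.

```python
import random

random.seed(42)

# 1,000 hypothetical patients; 30% carry an unmeasured factor that worsens
# prognosis. Neither the investigator nor the allocation scheme can see it.
patients = [{"poor_prognosis": random.random() < 0.30} for _ in range(1000)]

# Simple 1:1 randomization to treatment or control.
for p in patients:
    p["arm"] = random.choice(["treatment", "control"])

def prevalence(arm):
    group = [p for p in patients if p["arm"] == arm]
    return sum(p["poor_prognosis"] for p in group) / len(group)

print(f"Poor-prognosis prevalence, treatment arm: {prevalence('treatment'):.1%}")
print(f"Poor-prognosis prevalence, control arm:   {prevalence('control'):.1%}")

# The hidden factor ends up nearly evenly split between arms, so it cannot
# easily masquerade as a treatment effect -- which is exactly the property
# that makes the RCT such a good de-biasing tool.
```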

However, RCTs in general require an investment of time and money, and they need to be somewhat narrowly tailored.  As a result, they frequently become the target of people impatient with the process – especially those who perhaps feel themselves exempt from some of the above biases.

4 out of 5 Hammer Doctors agree: the world is 98% nail.
A shining example of this impatience-fortified-by-hubris can be found in a recent “Speaking of Medicine” blog post by Dr Trish Greenhalgh, with the mildly chilling title Less Research is Needed.  In it, the author offers a long list of things she feels to be so obvious that additional studies into them would be frivolous.  Among the things the author knows, beyond a doubt: patient education does not work, and electronic medical records are inefficient and unhelpful. 

I admit to being slightly in awe of Dr Greenhalgh’s omniscience in these matters. 

In addition to her “we already know the answer to this” argument, she also mixes in a completely different argument, which is more along the lines of “we’ll never know the answer to this”.  Of course, the upshot of that is identical: why bother conducting studies?  For this argument, she cites the example of coronary artery disease: since a large genomic study found only a small association with CAD heritability, Dr Greenhalgh tells us that any studies of different predictive methods are bound to fail and are thus not worth the effort (she specifically mentions “genetic, epigenetic, transcriptomic, proteomic, metabolic and intermediate outcome variables” as things she apparently already knows will not add anything to our understanding of CAD). 

As studies grow more global, and as we adapt to massive increases in computer storage and processing ability, I believe we will see an increase in this type of backlash.  And while physicians can generally be relied on to be at the forefront of the demand for more, not less, evidence, it is quite possible that a vocal minority of physicians will adopt this kind of strongly anti-research stance.  Dr Greenhalgh suggests that she is on the side of “thinking” when she opposes studies, but it is difficult to see this as anything more than an attempt to shut down critical inquiry in favor of deference to experts who are presumed to be fully-informed and bias-free. 

It is worthwhile for those of us engaged in trying to understand the world to be aware of these kinds of threats, and to take them seriously.  Dr Greenhalgh writes glowingly of a 10-year moratorium on research – presumably, we will all simply rely on her expertise to answer our important clinical questions.

Thursday, July 5, 2012

The Placebo Effect (No Placebo Necessary)

4 out of 5 non-doctors recommend starting with “regular strength”, and titrating up from there... (Photo from inventedbyamother.com)
The modern clinical trial’s Informed Consent Form (ICF) is a daunting document.  It is packed with a mind-numbing litany of procedures, potential risks, possible adverse events, and substantial additional information – in general, if someone, somewhere, might find a fact relevant, then it gets into the form.  A run-of-the-mill ICF in a phase 2 or 3 pharma trial can easily run over 10 pages of densely worded text.  You might argue (and in fact, a number of people have, persuasively) that this sort of information overload reduces, rather than enhances, patient understanding of clinical trials.

So it is a bit of a surprise to read a paper arguing that patient information needs to be expanded because it does not contain enough information.  And it is even more surprising to read about what’s allegedly missing: more information about the potential effects of placebo.

Actually, “surprising” doesn’t really begin to cover it.  Reading through the paper is a borderline surreal experience.  The authors’ conclusions from “quantitative analysis”* of 45 Patient Information Leaflets for UK trials include such findings as
  • The investigational medication is mentioned more often than the placebo
  • The written purpose of the trial “rarely referred to the placebo”
  • “The possibility of continuing on the placebo treatment after the trial was never raised explicitly”
(You may need to give that last one a minute to sink in.)

Rather than treating these as obvious conclusions, the authors recast them as ethical problems to be overcome.  From the article:
Information leaflets provide participants with a permanent written record about a clinical trial and its procedures and thus make an important contribution to the process of informing participants about placebos.
And from the PR materials furnished along with publication:
We believe the health changes associated with placebos should be better represented in the literature given to patients before they take part in a clinical trial.
There are two points that I think are important here – points that are sometimes missed, and very often badly blurred, even within the research community:

1.    The placebo effect is not caused by placebos.  There is nothing special about a “placebo” treatment that induces a unique effect.  The placebo effect can be induced by a lot of things, including active medications.  When we start talking about placebos as causal agents, we are engaging in fuzzy reasoning – placebo effects will not only be seen in the placebo arm, but will be evenly distributed among all trial participants.

2.    Changes in the placebo arm cannot be assumed to be caused by the placebo effect.  There are many reasons why we may observe health changes within a placebo group (regression to the mean and the natural course of the disease, to name just two), and most of them have nothing to do with the “psychological and neurological mechanisms” of the placebo effect.  Giving trial participants information about the placebo effect may in fact be providing them with an entirely inaccurate description of what is going on; the small simulation below illustrates the point.
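One of those non-placebo reasons, regression to the mean, is easy to demonstrate. The Python sketch below uses entirely hypothetical numbers: patients are enrolled only when a noisy symptom score crosses an entry cutoff, and at follow-up the group "improves" on average even though nothing whatsoever was done to them.

```python
import random

random.seed(0)

CUTOFF = 65  # hypothetical severity score required to enter the trial

def noisy_measurement(true_severity):
    # Any single measurement bounces around the patient's stable severity.
    return true_severity + random.gauss(0, 10)

# Hypothetical population: stable underlying severities centered at 50.
population = [random.gauss(50, 10) for _ in range(10_000)]

# Screen everyone; enroll only those who score above the cutoff at screening.
screened = [(true, noisy_measurement(true)) for true in population]
enrolled = [(true, score) for true, score in screened if score >= CUTOFF]

baseline_mean = sum(score for _, score in enrolled) / len(enrolled)

# Re-measure the enrolled patients later: no drug, no placebo ritual, nothing.
followup_mean = sum(noisy_measurement(true) for true, _ in enrolled) / len(enrolled)

print(f"Mean score at enrollment: {baseline_mean:.1f}")
print(f"Mean score at follow-up:  {followup_mean:.1f}")

# The follow-up mean is substantially lower: the group "improved" only because
# patients were selected at a moment when their measurements happened to run
# high. A placebo arm captures this kind of change; a placebo doesn't cause it.
```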

Bishop FL, Adams AEM, Kaptchuk TJ, Lewith GT (2012). Informed Consent and Placebo Effects: A Content Analysis of Information Leaflets to Identify What Clinical Trial Participants Are Told about Placebos. PLoS ONE. DOI: 10.1371/journal.pone.0039661


(* Not related to the point at hand, but I would applaud efforts to establish some lower boundaries to what we are permitted to call "quantitative analysis".  Putting counts from 45 brochures into an Excel spreadsheet should fall well below any reasonable threshold.)

Sunday, March 20, 2011

1st-Person Accounts of Trial Participation

Two intriguing articles on participation in clinical trials were published this week. Both happen to be about breast cancer, but both touch squarely on some universal points:

ABC News features patient Haralee Weintraub, who has enrolled in 5 trials in the past 10 years. While she is unusual for having participated in so many studies, Weintraub offers great insights into the barriers and benefits of being in a trial, including the fact that many benefits – such as close follow-up and attention from the treatment team – are not obvious at first.

Meanwhile, the New York Times’ recurring column from Dr Peter Bach on his wife’s breast cancer offers a moving description of her consenting to enroll in a trial. His essay focuses mainly on the incremental, slow pace of cancer research (“this arduous slog”) and how it is both incredibly frustrating and absolutely necessary for long-term improvements in treatment.

Wednesday, March 16, 2011

Realistic Optimism in Clinical Trials

The concept of “unrealistic optimism” among clinical trial participants has gotten a fair bit of press lately, mostly due to a small study published in IRB: Ethics and Human Research. (I should stress the smallness of the study: it was a survey given to 72 blood cancer patients. This is worth noting in light of the slightly-bizarre Medscape headline that optimism “plagues” clinical trials.)

I was therefore happy to see this article reporting out of the Society for Surgical Oncology. Comparing breast cancer outcomes between surgical oncologists and general surgeons, the authors appear to have found that most of the beneficial outcomes among patients treated by surgical oncologists can be ascribed to clinical trial participation. Some major findings:
  • 56% of patients treated by a surgical oncologist participated in a trial, versus only 7% of those treated by a general surgeon
  • Clinical trial patients had significantly longer median follow-up than non-participants (44.6 months vs. 38.5 months)
  • Most importantly, clinical trial patients had significantly better overall survival at 5 years than non-participants (31% vs. 26%)

Of course, the study reported on in the IRB article did not compare non-trial participants’ attitudes, so these aren’t necessarily contradictory results. However, I suspect that the message that “clinical trial participation” entails “better follow-up” entails “improved outcomes” will not get the same eye-catching headline in Medscape. Which is a shame, since we already have enough negative press about clinical trials out there.