
Tuesday, July 14, 2015

Waiver of Informed Consent - proposed changes in the 21st Century Cures Act

Adam Feuerstein points out on Twitter - and expresses considerable alarm over - an overlooked clause in the 21st Century Cures Act.

In another tweet, he suggests that the act will "decimate" informed consent in drug trials. Subsequent responses and retweets did nothing to clarify the situation and, if anything, tended to spread, rather than address, Feuerstein's confusion.

Below is a quick recap of the current regulatory context and a real-life example of where the new wording may be helpful. In short, though, I think it's safe to say:

  1. Waiving informed consent is not new; it's already permitted under current regs
  2. The standards for obtaining a waiver of consent are stringent
  3. They may, in fact, be too stringent in a small number of situations
  4. The act may, in fact, be helpful in those situations
  5. Feuerstein may, in fact, need to chill out a little bit

(For the purposes of this discussion, I’m talking about drug trials, but I believe the device trial situation is parallel.)

Section 505(i) - the section this act proposes to amend - instructs the Secretary of Health and Human Services to promulgate rules regarding clinical research. Subsection 4 addresses informed consent:

…the manufacturer, or the sponsor of the investigation, requir[e] that experts using such drugs for investigational purposes certify to such manufacturer or sponsor that they will inform any human beings to whom such drugs, or any controls used in connection therewith, are being administered, or their representatives, that such drugs are being used for investigational purposes and will obtain the consent of such human beings or their representatives, except where it is not feasible or it is contrary to the best interests of such human beings.

[emphasis mine]

Note that this section already recognizes situations where informed consent may be waived for practical or ethical reasons.

These rules were in fact promulgated under 45 CFR part 46, section 116. The relevant bit – as far as this conversation goes – regards circumstances under which informed consent might be fully or partially waived. Specifically, there are 4 criteria, all of which need to be met:

 (1) The research involves no more than minimal risk to the subjects;
 (2) The waiver or alteration will not adversely affect the rights and welfare of the subjects;
 (3) The research could not practicably be carried out without the waiver or alteration; and
 (4) Whenever appropriate, the subjects will be provided with additional pertinent information after participation.

In practice, this is an especially difficult set of criteria to meet for most studies. Criterion (1) rules out most “conventional” clinical trials, because the hallmarks of those trials (use of an investigational medicine, randomization of treatment, blinding of treatment allocation) are all deemed to be more than “minimal risk”. That leaves observational studies – but even many of these cannot clear the bar of criterion (3).

That word “practicably” is a doozy.

Here’s an all-too-real example from recent personal experience. A drug manufacturer wants to understand physicians’ rationales for performing a certain procedure. It seems – but there is little hard data – that a lot of physicians do not strictly follow guidelines on when to perform the procedure. So we devise a study: whenever the procedure is performed, we ask the physician to complete a quick form categorizing why they made their decision. We also ask him or her to transcribe a few pieces of data from the patient chart.

Even though the patients aren’t personally identifiable, the collection of medical data qualifies this as a clinical trial.

It’s definitely a minimal-risk trial: it doesn’t dictate what the doctor should do; it simply asks him or her to record what they did and why, and to supply a bit of medical context for the decision. All told, we estimated 15 minutes of physician time to complete the form.

The IRB reviewing the trial, however, denied our request for a waiver of informed consent, since it was “practicable” (not easy, but possible) to obtain informed consent from the patient. Informed consent – even with a slimmed-down form – was going to take a minimum of 30 minutes, so the length of the physician’s involvement tripled. In addition, many physicians opted out of the trial because they felt that the informed consent process added unnecessary anxiety and alarm for their patients, and provided no corresponding benefit.

The end result was not surprising: the budget for the trial more than doubled, and enrollment was far below expectations.

Which leads to two questions:

1. Did the informed consent appreciably help a single patient in the trial? Very arguably, no. Consenting to being “in” the trial made zero difference in the patients’ care, added time to their stay in the clinic, and possibly added to their anxiety.
2. Was less knowledge collected as a result? Absolutely, yes. The sponsor could have run two studies for the same cost. Instead, they ultimately reduced the power of the trial in order to cut losses.

Bottom line: the modifications proposed in the 21st Century Cures Act appear to target only trials like the one in the example. The language clearly retains criteria 1 and 2 of the current HHS regs, which are the most important from a patient safety perspective, but cuts down the “practicability” requirement, potentially permitting high-quality studies to be run with less time and cost.

Ultimately, it looks like a very small, but positive, change to the current rules.

The rest of the act appears to be a mash-up of some very good and some very bad (or at least not fully thought out) ideas. However, this clause should not be cause for alarm.

Friday, September 14, 2012

Clinical trials: recent reading recommendations

My recommended reading list -- highlights from the past week:

Absolutely required reading for anyone who designs protocols or is engaged in recruiting patients into clinical trials: Susan Guber writes eloquently about her experiences as a participant in cancer clinical trials.
New York Times Well Blog: The Trials of Cancer Trials
Today's #FDAFridayPhoto features Harvey Wiley, leader of the famed FDA "Poison Squad".

The popular press in India continues to be disingenuous and exploitative in its coverage of clinical trial deaths in that country. (My previous thoughts on that are here.) Kiran Mazumdar-Shaw, an industry leader, has put together an intelligent and articulate antidote.
The Economic Times: Need a rational view on clinical trials

Rahlen Gossen exhibits mastery of the understatement: “Though the Facebook Insights dashboard is a great place to start, it has a few significant disadvantages.” She also provides a good overview of the most common pitfalls you’ll encounter when you try to get good metrics out of your Facebook campaign. 

I have not had a chance to watch it yet, but I’m excited to see a just-posted 7-part video editorial series by Yale’s Harlan Krumholz and Stanford’s Bob Harrington on “a frank discussion on the controversies in the world of clinical trials”.

Wednesday, August 8, 2012

Testing Transparency with the TEST Act

A quick update on my last post regarding the enormously controversial -- but completely unmentioned -- requirement to publicly report all versions of clinical trial protocols: The New England Journal of Medicine has weighed in with an editorial strongly in support of the TEST Act.

NEJM Editor-in-Chief Jeffrey Drazen at least mentions the supporting documents requirement, but only in part of one sentence, where he confusingly refers to the act "extending results reporting to include the deposition of consent and protocol documents approved by institutional review boards." The word "deposition" does not suggest actual publication, which the act clearly requires. 

I don't think this qualifies as an improvement in transparency about the impact the TEST Act, as written, would have. I'm not surprised when a trade publication like CenterWatch recycles a press release into a news item. However, it doesn't seem like too much to ask that NEJM editorials aspire to a moderately higher standard of critical inquiry.

Monday, August 6, 2012

Public Protocols? Burying the lede on the TEST Act

Not to be confused with the Test Act. (via Luminarium)
Four Democratic members of Congress recently co-sponsored the TEST (Trial and Experimental Studies Transparency) Act, which is intended to expand the scope of mandatory registration of clinical trials. Coverage so far has been light, and mainly consists of uncritical recycling of the press release put out by Congressman Markey’s office.

Which is unfortunate, because nowhere in that release is there a single mention of the bill’s most controversial feature: publication of clinical trial "supporting documents", including the patient’s Informed Consent Form (ICF) and, incredibly, the entire protocol (including any and all subsequent amendments to the protocol).

How Rep. Markey and colleagues managed to put out a 1,000-word press release without mentioning this detail is nothing short of remarkable. Is the intent to try to sneak this through?

Full public posting of every clinical trial protocol would represent an enormous shift in how R&D is conducted in this country (and, therefore, in the entire world). It would radically alter the dynamics of how pharmaceutical companies operate by ripping out a giant chunk of every company’s proprietary investment – essentially, confiscating and nationalizing their intellectual property. 

Maybe, ultimately, that would be a good thing.  But that’s by no means clear ... and quite likely not true. Either way, however, this is not the kind of thing you bury in legislation and hope no one notices.

[Full text of the bill is here (PDF).]

[UPDATE May 17, 2013: Apparently, the irony of not being transparent with the contents of your transparency law was just too delicious to pass up, as Markey and his co-sponsors reintroduced the bill yesterday. Once again, the updated press release makes no mention of the protocol requirement.]

Friday, July 6, 2012

A placebo control is not a placebo effect

Following up on yesterday's post regarding a study of placebo-related information, it seems worthwhile to pause and expand on the difference between placebo controls and placebo effects.

The very first sentence of the study paper reflects a common, and rather muddled, belief about placebo-controlled trials:
Placebo groups are used in trials to control for placebo effects, i.e. those changes in a person's health status that result from the meaning and hope the person attributes to a procedure or event in a health care setting.
The best I can say about the above sentence is that in some (not all) trials, this accounts for some (not all) of the rationale for including a placebo group in the study design. 

There is no evidence that “meaning and hope” have any impact on HbA1C levels in patients with diabetes. The placebo effect only goes so far, and certainly doesn’t have much sway over most lab tests.  And yet we still conduct placebo-controlled trials in diabetes, and rightly so. 

To clarify, it may be helpful to break this into two parts:
  1. Most trials need a “No Treatment” arm. 
  2. Most “No Treatment” arms should be double-blind, which requires use of a placebo.
Let’s take these in order.

We need a “No Treatment” arm:
  • Where the natural progression of the disease is variable (e.g., many psychological disorders, such as depression, have ups and downs that are unrelated to treatment).  This is important if we want to measure the proportion of responders – for example, what percentage of diabetes patients got their HbA1C levels below 6.5% on a particular regimen.  We know that some patients will hit that target even without additional intervention, but we won’t know how many unless we include a control group.
  • Where the disease is self-limiting.  Given time, many conditions – the flu, allergies, etc. – tend to go away on their own.  Therefore, even an ineffective medication will look like it’s doing something if we simply test it on its own.  We need a control group to measure whether the investigational medication is actually speeding up the time to cure.
  • When we are testing the combination of an investigational medication with one or more existing therapies. We have a general sense of how well metformin will work in T2D patients, but the effect will vary from trial to trial.  So if I want to see how well my experimental therapy works when added to metformin, I’ll need a metformin-plus-placebo control arm to be able to measure the additional benefit, if any.

All of the above are especially important when the trial is selecting a group of patients with greater disease severity than average.  The process of “enriching” a trial by excluding patients with mild disease has the benefit of requiring many fewer enrolled patients to demonstrate a clinical effect.  However, it also will have a stronger tendency to exhibit “regression to the mean” for a number of patients, who will exhibit a greater than average improvement during the course of the trial.  A control group accurately measures this regression and helps us measure the true effect size.
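The regression-to-the-mean point above can be made concrete with a quick simulation. This is a toy model (the severity scores, noise level, and enrollment cutoff are all invented for illustration): patients enrolled because of a high screening score will, on average, score lower at follow-up even with no treatment at all.

```python
import random
from statistics import mean

random.seed(0)

# Toy model: each patient has a stable true severity (mean 50), and any
# single measurement adds independent day-to-day noise on top of it.
def measure(true_severity):
    return true_severity + random.gauss(0, 10)

true_severity = [random.gauss(50, 10) for _ in range(10_000)]

# "Enrich" the trial: enroll only patients whose screening score exceeds 65.
enrolled = [(t, s) for t in true_severity if (s := measure(t)) > 65]

baseline_mean = mean(s for _, s in enrolled)           # screening scores
followup_mean = mean(measure(t) for t, _ in enrolled)  # re-measured, untreated

# With no treatment at all, the enriched group still "improves" at follow-up,
# purely because screening selected partly on measurement noise.
print(round(baseline_mean, 1), round(followup_mean, 1))
```

The apparent improvement is an artifact of selection, which is exactly why a control arm is needed to separate the true treatment effect from it.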

So, why include a placebo?  Why not just have a control group of patients receiving no additional treatment?  There are compelling reasons:
  • To minimize bias in investigator assessments.  We most often think about placebo arms in relation to patient expectations, but often they are even more valuable in improving the accuracy of physician assessments.  Like all humans, physician investigators interpret evidence in light of their beliefs, and there is substantial evidence that unblinded assessments exaggerate treatment effects – we need the placebo to help maintain investigator blinding.
  • To improve patient compliance in the control arm.  If a patient is clearly not receiving an active treatment, it is often very difficult to keep him or her interested and engaged with the trial, especially if the trial requires frequent clinic visits and non-standard procedures (such as blood draws).  Retention in no-treatment trials can be much lower than in placebo-controlled trials, and if it drops low enough, the validity of any results can be thrown into question.
  • To accurately gauge adverse events.  Any problem(s) encountered are much more likely to be taken seriously – by both the patient and the investigator – if there is genuine uncertainty about whether the patient is on active treatment.  This leads to much more accurate and reliable reporting of adverse events.
In other words, even if the placebo effect didn’t exist, it would still be necessary and proper to conduct placebo-controlled trials.  The failure to separate “placebo control” from “placebo effect” yields some very muddled thinking (which was the ultimate point of my post yesterday).

Thursday, July 5, 2012

The Placebo Effect (No Placebo Necessary)

4 out of 5 non-doctors recommend starting with "regular strength", and titrating up from there...
The modern clinical trial’s Informed Consent Form (ICF) is a daunting document.  It is packed with a mind-numbing litany of procedures, potential risks, possible adverse events, and substantial additional information – in general, if someone, somewhere, might find a fact relevant, then it gets into the form.  A run-of-the-mill ICF in a phase 2 or 3 pharma trial can easily run over 10 pages of densely worded text.  You might argue (and in fact, a number of people have, persuasively) that this sort of information overload reduces, rather than enhances, patient understanding of clinical trials.

So it is a bit of a surprise to read a paper arguing that patient information needs to be expanded because it does not contain enough information. And it is even more surprising to read what’s allegedly missing: more information about the potential effects of placebo.

Actually, “surprising” doesn’t really begin to cover it.  Reading through the paper is a borderline surreal experience.  The authors’ conclusions from “quantitative analysis”* of 45 Patient Information Leaflets for UK trials include such findings as
  • The investigational medication is mentioned more often than the placebo
  • The written purpose of the trial “rarely referred to the placebo”
  • “The possibility of continuing on the placebo treatment after the trial was never raised explicitly”
(You may need to give that last one a minute to sink in.)

Rather than seeing these as rather obvious conclusions, the authors recast them as ethical problems to be overcome.  From the article:
Information leaflets provide participants with a permanent written record about a clinical trial and its procedures and thus make an important contribution to the process of informing participants about placebos.
And from the PR materials furnished along with publication:
We believe the health changes associated with placebos should be better represented in the literature given to patients before they take part in a clinical trial.
There are two points that I think are important here – points that are sometimes missed, and very often badly blurred, even within the research community:

1.    The placebo effect is not caused by placebos.  There is nothing special about a “placebo” treatment that induces a unique effect.  The placebo effect can be induced by a lot of things, including active medications.  When we start talking about placebos as causal agents, we are engaging in fuzzy reasoning – placebo effects will not only be seen in the placebo arm, but will be evenly distributed among all trial participants.

2.    Changes in the placebo arm cannot be assumed to be caused by the placebo effect.  There are many reasons why we may observe health changes within a placebo group, and most of them have nothing to do with the “psychological and neurological mechanisms” of the placebo effect.  Giving trial participants information about the placebo effect may in fact be providing them with an entirely inaccurate description of what is going on.

Bishop FL, Adams AEM, Kaptchuk TJ, Lewith GT (2012). Informed Consent and Placebo Effects: A Content Analysis of Information Leaflets to Identify What Clinical Trial Participants Are Told about Placebos. PLoS ONE. DOI: 10.1371/journal.pone.0039661

(* Not related to the point at hand, but I would applaud efforts to establish some lower boundaries to what we are permitted to call "quantitative analysis".  Putting counts from 45 brochures into an Excel spreadsheet should fall well below any reasonable threshold.)

Tuesday, June 19, 2012

Pfizer Shocker: Patient Recruitment is Hard

In what appears to be, oddly enough, an exclusive announcement to Pharmalot, Pfizer will be discontinuing its much-discussed “Trial in a box”—a clinical study run entirely from a patient’s home. Study drug and other supplies would be shipped directly to each patient, with consent, communication, and data collection happening entirely via the internet.

The trial piloted a number of innovations, including some novel and intriguing Patient Reported Outcome (PRO) tools.  Unfortunately, most of these will likely never get a full test, as the trial was killed due to low patient enrollment.

The fact that a trial designed to enroll fewer than 300 patients couldn’t meet its enrollment goal is sobering enough, but in this case the pain is even greater because the study was not limited to site databases and/or catchment areas.  In theory, anyone with overactive bladder in the entire United States was a potential participant.

And yet, it didn’t work.  In a previous interview with Pharmalot, Pfizer’s Craig Lipset mentions a number of recruitment channels – he specifically cites Facebook, Google, Patients Like Me, and Inspire, along with other unspecified “online outreach” – that drove “thousands” of impressions and “many” registrations, but these did not amount to, apparently, even close to the required number of consented patients. 

Two major questions come to mind:

1.    How were patients “converted” into the study?  One of the more challenging aspects of patient recruitment is often getting research sites engaged in the process.  Many – perhaps most – patients are understandably on the fence about being in a trial, and the investigator and study coordinator play the single most critical role in helping each patient make their decision. You cannot simply replace their skill and experience with a website (or “multi-media informed consent module”). 

2.    Did they understand the patient funnel?  I am puzzled by the mention of “thousands of hits” to the website.  That may seem like a lot, if you’re not used to engaging patients online, but it’s actually not necessarily so. 
Jakob Nielsen's famous "Lurker Funnel" seems worth mentioning here...
Despite some of the claims made by patient communities, it is perfectly reasonable to expect that less than 1% of visitors (even somewhat pre-qualified visitors) will end up consenting into the study.  If you’re going to rely on the internet as your sole means of recruitment, you should plan on needing closer to 100,000 visitors (and, critically: negotiate your spending accordingly). 
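The arithmetic behind that planning figure can be sketched as a simple funnel. The stage-by-stage drop-off rates below are assumptions chosen for illustration, not data from the Pfizer trial; the point is only that small per-stage rates compound quickly:

```python
# Hypothetical online recruitment funnel; every rate here is an assumption.
stages = {
    "site visit -> registration": 0.05,
    "registration -> pre-qualified": 0.30,
    "pre-qualified -> consented": 0.20,
}

overall_rate = 1.0
for rate in stages.values():
    overall_rate *= rate  # compounding drop-off: roughly 0.3% end to end

target_enrollment = 300
visitors_needed = target_enrollment / overall_rate
print(round(visitors_needed))  # on the order of 100,000 visitors for 300 patients
```

Under these (assumed) rates, even a sub-1% overall conversion is optimistic, which is why media budgets for online-only recruitment need to be negotiated against visitor counts in the six figures, not the thousands.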

In the prior interview, Lipset says:
I think some of the staunch advocates for using online and social media for recruitment are still reticent to claim silver bullet status and not use conventional channels in parallel. Even the most aggressive and bullish social media advocates, generally, still acknowledge you’re going to do this in addition to, and not instead of more conventional channels.

This makes Pfizer’s exclusive reliance on these channels all the more puzzling.  If no one is advocating disintermediating the sites and using only social media, then why was this the strategy?

I am confident that someone will try again with this type of trial in the near future.  Hopefully, the Pfizer experience will spur them to invest in building a more rigorous recruitment strategy before they start.

[Update 6/20: Lipset weighed in via the comments section of the Pharmalot article above to clarify that other DTP aspects of the trial were tested and "worked VERY well".  I am not sure how to evaluate that clarification, given the fact that those aspects couldn't have been tested on a very large number of patients, but it is encouraging to hear that more positive experiences may have come out of the study.]

Sunday, March 20, 2011

1st-Person Accounts of Trial Participation

Two intriguing articles on participation in clinical trials were published this week. Both happen to be about breast cancer, but both touch squarely on some universal points:

ABC News features patient Haralee Weintraub, who has enrolled in 5 trials in the past 10 years. While she is unusual for having participated in so many studies, Weintraub offers great insights into the barriers and benefits of being in a trial, including the fact that many benefits – such as close follow-up and attention from the treatment team – are not obvious at first.

Meanwhile, the New York Times’ recurring column from Dr Peter Bach on his wife’s breast cancer offers a moving description of her consent into a trial. His essay focuses mainly on the incremental, slow pace of cancer research (“this arduous slog”) and how it is both incredibly frustrating and absolutely necessary for long-term improvements in treatment.

Wednesday, March 16, 2011

Realistic Optimism in Clinical Trials

The concept of “unrealistic optimism” among clinical trial participants has gotten a fair bit of press lately, mostly due to a small study published in IRB: Ethics and Human Research. (I should stress the smallness of the study: it was a survey given to 72 blood cancer patients. This is worth noting in light of the slightly bizarre Medscape headline that optimism “plagues” clinical trials.)

I was therefore happy to see this article reporting out of the Society for Surgical Oncology. In looking at breast cancer outcomes between surgical oncologists and general surgeons, the authors appear to have found that most of the beneficial outcomes among patients treated by surgical oncologists can be ascribed to clinical trial participation. Some major findings:
  • 56% of patients treated by a surgical oncologist participated in a trial, versus only 7% of those treated by a general surgeon
  • Clinical trial patients had significantly longer median follow-up than non-participants (44.6 months vs. 38.5 months)
  • Most importantly, clinical trial patients had significantly better overall survival at 5 years than non-participants (31% vs. 26%)

Of course, the study reported on in the IRB article did not compare non-trial participants’ attitudes, so these aren’t necessarily contradictory results. However, I suspect that the message (clinical trial participation entails better follow-up, which entails improved outcomes) will not get the same eye-catching headline in Medscape. Which is a shame, since we already have enough negative press about clinical trials out there.

Tuesday, March 1, 2011

What is the Optimal Rate of Clinical Trial Participation?

The authors of EDICT's white paper, in their executive summary, take a bleak view of the current state of clinical trial accrual:

Of critical concern is the fact that despite numerous years of discussion and the implementation of new federal and state policies, very few Americans actually take part in clinical trials, especially those at greatest risk for disease. Of the estimated 80,000 clinical trials that are conducted every year in the U.S., only 2.3 million Americans take part in these research studies -- or less than one percent of the entire U.S. population.
The paper goes on to discuss the underrepresentation of minority populations in clinical trials, and does not return to this point. And while it's certainly not central to the paper's thesis (in fact, in some ways it works against it), it is a perception that certainly appears to be a common one among those involved in clinical research.

When we say that "only" 2.3 million Americans take part in clinical research, we rely directly on an assumption that more than 2.3 million Americans should take part.

This leads immediately to the question: how many more?

If we are trying to increase participation rates, the magnitude of the desired improvement is one of the first and most central facts we need. Do we want a 10% increase, or a 10-fold increase? The steps required to achieve these will be radically different, so it would seem important to know.

It should also be pointed out: in some very real sense, the ideal rate of clinical trial participation, at least for pre-marketing trials, is 0%. Participating in these trials by definition means being potentially exposed to a treatment that the FDA believes has insufficient evidence of safety and/or efficacy. In an ideal world, we would not expose any patient to that risk. Even in today's non-ideal world, we have already decided not to expose any patients to medications that have not produced some preliminary evidence of safety and efficacy in animals. That is, we have already established one threshold below which we believe human involvement is unacceptably risky -- in a better world, with more information, we would raise that threshold much higher than the current criteria for IND approval.

This is not just a hypothetical concern. Where we set our threshold for acceptable risk should drive much of our thinking about how much we want to encourage (or discourage) people from shouldering that risk. Landmine detection, for example, is a noble but risky profession: we may agree that it is acceptable for rational adults to choose to enter into that field, and we may certainly applaud their heroism. However, that does not mean that we will unanimously agree on how many adults should be urged to join their ranks, nor does it mean that we will not strive and hope for the day that no human is exposed to that risk.

So, we're not talking about the ideal rate of participation, we're talking about the optimal rate. How many people should get involved, weighing (a) the risks involved in being exposed to investigational treatment against (b) the potential benefit to the participant and/or mankind? For how many will the expected potential benefit outweigh the expected total cost? I have not seen any systematic attempt to answer this question.

The first thing that should be obvious here is that the optimal rate of participation should vary based upon the severity of the disease and the available, approved medications to treat it. In nonserious conditions (eg, keratosis pilaris), and/or conditions with a very good recovery rate (eg, veisalgia), we should expect participation rates to be low, and in some cases close to zero in the absence of major potential benefit. Conversely, we should desire higher participation rates in fatal conditions with few if any legitimate treatment alternatives (eg, late-stage metastatic cancers). In fact, if we surveyed actual participation rates by disease severity and prognosis, I think we would find that this relationship generally holds true already.
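One way to make the benefit-versus-cost framing concrete is a toy expected-value calculation. Every number below is hypothetical, chosen only to illustrate why the optimal participation rate should rise with disease severity:

```python
# All probabilities and utilities here are invented for illustration.
def expected_net_benefit(p_improve, gain, p_harm, harm, burden):
    """Expected utility of enrolling, in arbitrary units: the chance of
    benefit, minus the chance of harm, minus the fixed burden (visits,
    procedures, time) of simply being in the trial."""
    return p_improve * gain - p_harm * harm - burden

# Severe disease with no good approved alternative: large potential upside.
severe = expected_net_benefit(p_improve=0.25, gain=100, p_harm=0.10, harm=40, burden=5)

# Mild, self-limiting condition: identical risks, much smaller upside.
mild = expected_net_benefit(p_improve=0.25, gain=10, p_harm=0.10, harm=40, burden=5)

print(severe, mild)  # enrollment is rational in the first case, not the second
```

The same risk profile flips from worth taking to not worth taking purely as a function of the potential gain, which is the pattern the actual participation rates by disease severity appear to follow.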

I should qualify the above by noting that it really doesn't apply to a number of clinical trial designs, most notably observational trials and phase 1 studies in healthy volunteers. Of course, most of the discussion around clinical trial participation does not apply to these types of trials, either, as they are mostly focused on access to novel treatments.