Monday, November 21, 2016

The first paid research subject in written history?

On this date 349 years ago, Samuel Pepys relates in his famous diary a remarkable story about an upcoming medical experiment. As far as I can tell, this is the first written description of a paid research subject.

According to his account, the man (whom he describes as “a little frantic”) was to be paid to undergo a blood transfusion from a sheep. It was hypothesized that the blood of this calm and docile animal would help to calm the man.

Some interesting things to note about this experiment:
  • Equipoise. There is explicit disagreement about what effect the experimental treatment will have: according to Pepys, "some think it may have a good effect upon him as a frantic man by cooling his blood, others that it will not have any effect at all".
  • Results published. An account of the experiment was published just two weeks later in the journal Philosophical Transactions.
  • Medical Privacy. In this subsequent write-up, the research subject is identified as Arthur Coga, a former Cambridge divinity student. According to at least one account, being publicly identified had a bad effect on Coga, as people who had heard of him allegedly succeeded in getting him to spend his stipend on drink (though no sources are provided to confirm this story).
  • Patient Reported Outcome. Coga was apparently chosen because, although mentally ill, he was still considered educated enough to give an accurate description of the treatment effect. 
Depending on your perspective, this may also be a very early account of the placebo effect, or a classic case of ignoring the patient’s experience: even though his report was positive, the clinicians remained skeptical. From the journal article:
The Man after this operation, as well as in it, found himself very well, and hath given in his own Narrative under his own hand, enlarging more upon the benefit, he thinks, he hath received by it, than we think fit to own as yet.
…and in fact, a subsequent diary entry from Pepys mentions meeting Coga, with similarly mixed impressions: “he finds himself much better since, and as a new man, but he is cracked a little in his head”.

The amount Coga was paid for his participation? Twenty shillings – at the time, that was exactly one Guinea.

[Image credit: Wellcome Images]

Monday, July 25, 2016

Will Your Family Make You a Better Trial Participant?

It is becoming increasingly accepted within the research community that patient engagement leads to a host of positive outcomes – most importantly (at least practically speaking) improved clinical trial recruitment and retention.

But while we can all agree that "patient engagement is good" in a highly general sense, we don't have much consensus on what the implications of that idea might be. There is precious little hard evidence about how to either attract engaged patients, or how we might effectively turn "regular patients" into "engaged patients".

That latter point - that we could improve trial enrollment and completion rates by converting the (very large) pool of less-engaged patients - is a central tenet of the mHealth movement in clinical trials. Since technology can now accompany us almost anywhere, it would seem that we have an unprecedented opportunity to reach out and connect with current and potential trial participants.

However, there are signs that this promised revolution in patient engagement hasn't come about. From the decline in new app downloads to the startlingly high rate at which people abandon their wearable health devices, a growing body of evidence suggests that we aren't in fact making much progress toward increasing engagement. We appear to have underestimated the inertia of the disengaged patient.

So what can we do? We know people like their technology, but if they're not using it to engage with their healthcare decisions, we're no better off as a result.

Daniel Calvert, in a recent blog post at Parallel 6, offers an intriguing solution: he suggests we go beyond the patient and engage their wider group of loved ones. By engaging what Calvert calls the Support Circle - those people most likely to "encourage the health and well being of that patient as they undergo a difficult period of their life" - trial teams will find themselves with a more supported, and therefore more engaged, participant, with corresponding benefits to enrollment and retention.

Calvert outlines a number of potential mechanisms to get spouses, children, and other loved ones involved in the trial process:
During the consent process the patient can invite their support team in with them. A mobile application can be put on their phones enabling encouraging messages, emails, and texts to be sent. Loved ones can see if their companion or family member did indeed take today’s medication or make last Monday’s appointment. Gamification offers badges or pop-ups: “Two months of consecutive appointments attended” or “perfect eDiary log!” Loved ones can see those notifications, like/comment, and constantly encourage the patients. 
Supporting materials can also be included in the Support Circle application. There are a host of unknown terms to patients and their team. Glossaries, videos, FAQs, contact now, and so much more can be made available at their fingertips.
I have to admit I'm fascinated by Calvert's idea. I want him to be right: the picture of supportive, encouraging, loving spouses and children standing by to help a patient get through a clinical trial is an attractive one. So is the idea that they're just waiting for us to include them - all we need to do is a bit of digital communication with them to get them fully on board as members of the study team.

The problem, however, remains: we have absolutely no evidence that this approach will work. There is no data showing that it is superior to other approaches to engage trial patients.

(In fact, we may even have some indirect evidence that it may hinder enrollment: in trials that require active caregiver participation, such as those in Alzheimer's disease, caregiver burden is often cited as one of the barriers to patient enrollment.)

Calvert's idea is a good one, and it's worthy of consideration. More importantly, it's worthy of being rigorously tested against other recruitment and retention approaches. We have a lot of cool new technologies, and even more great ideas - we're not lacking for those. What we're lacking is hard data showing us how these things perform. What we especially need is comparative data showing how new tactics work relative to other approaches.

Over 5 years ago, I wrote a blog post bemoaning the sloppy approaches we take in trial recruitment - a fact made all the more painfully ironic by the massive intellectual rigor of the trials themselves. I'm not at all sure that we've made any real progress in those 5 years.

In my next post, I'll outline what I believe are some of the critical steps we need to take to improve the current situation, and start bringing some solid evidence to the table along with our ideas.

[Photo credit: Flickr user Matthew G, "Love (of technology)"]

Tuesday, July 14, 2015

Waiver of Informed Consent - proposed changes in the 21st Century Cures Act

Adam Feuerstein points out - and expresses considerable alarm over - an overlooked clause in the 21st Century Cures Act:

In another tweet, he suggests that the act will "decimate" informed consent in drug trials. Subsequent responses and retweets did nothing to clarify the situation and, if anything, tended to spread, rather than address, Feuerstein's confusion.

Below is a quick recap of the current regulatory context and a real-life example of where the new wording may be helpful. In short, though, I think it's safe to say:

  1. Waiving informed consent is not new; it's already permitted under current regs
  2. The standards for obtaining a waiver of consent are stringent
  3. They may, in fact, be too stringent in a small number of situations
  4. The act may, in fact, be helpful in those situations
  5. Feuerstein may, in fact, need to chill out a little bit

(For the purposes of this discussion, I’m talking about drug trials, but I believe the device trial situation is parallel.)

Section 505(i) - the section this act proposes to amend - instructs the Secretary of Health and Human Services to promulgate rules regarding clinical research. Subsection 4 addresses informed consent:

…the manufacturer, or the sponsor of the investigation, require[e] that experts using such drugs for investigational purposes certify to such manufacturer or sponsor that they will inform any human beings to whom such drugs, or any controls used in connection therewith, are being administered, or their representatives, that such drugs are being used for investigational purposes and will obtain the consent of such human beings or their representatives, except where it is not feasible or it is contrary to the best interests of such human beings.

[emphasis mine]

Note that this section already recognizes situations where informed consent may be waived for practical or ethical reasons.

These rules were in fact promulgated under 45 CFR part 46, section 116. The relevant bit – as far as this conversation goes – regards circumstances under which informed consent might be fully or partially waived. Specifically, there are 4 criteria, all of which need to be met:

 (1) The research involves no more than minimal risk to the subjects;
 (2) The waiver or alteration will not adversely affect the rights and welfare of the subjects;
 (3) The research could not practicably be carried out without the waiver or alteration; and
 (4) Whenever appropriate, the subjects will be provided with additional pertinent information after participation.

In practice, this is an especially difficult set of criteria to meet for most studies. Criterion (1) rules out most “conventional” clinical trials, because the hallmarks of those trials (use of an investigational medicine, randomization of treatment, blinding of treatment allocation) are all deemed to be more than “minimal risk”. That leaves observational studies – but even many of these cannot clear the bar of criterion (3).

That word “practicably” is a doozy.

Here’s an all-too-real example from recent personal experience. A drug manufacturer wants to understand physicians’ rationales for performing a certain procedure. It seems – but there is little hard data – that a lot of physicians do not strictly follow guidelines on when to perform the procedure. So we devise a study: whenever the procedure is performed, we ask the physician to complete a quick form categorizing why they made their decision. We also ask him or her to transcribe a few pieces of data from the patient chart.

Even though the patients aren’t personally identifiable, the collection of medical data qualifies this as a clinical trial.

It’s a minimal-risk trial, definitely: the trial doesn’t dictate at all what the doctor should do; it just asks him or her to record what they did and why, and to supply a bit of medical context for the decision. All told, we estimated 15 minutes of physician time to complete the form.

The IRB monitoring the trial, however, denied our request for a waiver of informed consent, since it was “practicable” (not easy, but possible) to obtain informed consent from the patient. Informed consent – even with a slimmed-down form – was going to take a minimum of 30 minutes, so the length of the physician’s involvement tripled. In addition, many physicians opted out of the trial because they felt that the informed consent process added unnecessary anxiety and alarm for their patients, and provided no corresponding benefit.

The end result was not surprising: the budget for the trial more than doubled, and enrollment was far below expectations.

Which leads to two questions:

  1. Did the informed consent appreciably help a single patient in the trial? Very arguably, no. Consenting to being “in” the trial made zero difference in the patients’ care, added time to their stay in the clinic, and possibly added to their anxiety.
  2. Was less knowledge collected as a result? Absolutely, yes. The sponsor could have run two studies for the same cost. Instead, they ultimately reduced the power of the trial in order to cut losses.

Bottom line, it appears that the modifications proposed in the 21st Century Cures Act really only target trials like the one in the example. The language clearly retains criteria 1 and 2 of the current HHS regs, which are the most important from a patient safety perspective, but cuts down the “practicability” requirement, potentially permitting high-quality studies to be run with less time and cost.

Ultimately, it looks like a very small, but positive, change to the current rules.

The rest of the act appears to be a mash-up of some very good and some very bad (or at least not fully thought out) ideas. However, this clause should not be cause for alarm.