Monday, November 21, 2016

The first paid research subject in written history?

On this date 349 years ago, Samuel Pepys relates in his famous diary a remarkable story about an upcoming medical experiment. As far as I can tell, this is the first written description of a paid research subject.

According to his account, the man (whom he describes as “a little frantic”) was to be paid to undergo a blood transfusion from a sheep. It was hypothesized that the blood of this calm and docile animal would help to calm the man.

Some interesting things to note about this experiment:
  • Equipoise. There is explicit disagreement about what effect the experimental treatment will have: according to Pepys, "some think it may have a good effect upon him as a frantic man by cooling his blood, others that it will not have any effect at all".
  • Results published. An account of the experiment was published just two weeks later in the journal Philosophical Transactions.
  • Medical Privacy. In this subsequent write-up, the research subject is identified as Arthur Coga, a former Cambridge divinity student. According to at least one account, being publicly identified had a bad effect on Coga, as people who had heard of him allegedly succeeded in getting him to spend his stipend on drink (though no sources are provided to confirm this story).
  • Patient-Reported Outcome. Coga was apparently chosen because, although mentally ill, he was still considered educated enough to give an accurate description of the treatment effect.
Depending on your perspective, this may also be a very early account of the placebo effect, or a classic case of ignoring the patient’s experience: even though his report was positive, the clinicians remained skeptical. From the journal article:
The Man after this operation, as well as in it, found himself very well, and hath given in his own Narrative under his own hand, enlarging more upon the benefit, he thinks, he hath received by it, than we think fit to own as yet.
…and in fact, a subsequent diary entry from Pepys mentions meeting Coga, with similarly mixed impressions: “he finds himself much better since, and as a new man, but he is cracked a little in his head”.

The amount Coga was paid for his participation? Twenty shillings – at the time, exactly one guinea.

[Image credit: Wellcome Images]

Monday, July 25, 2016

Will Your Family Make You a Better Trial Participant?

It is becoming increasingly accepted within the research community that patient engagement leads to a host of positive outcomes – most importantly (at least practically speaking) improved clinical trial recruitment and retention.

But while we can all agree that "patient engagement is good" in a highly general sense, we don't have much consensus on the implications of that idea. There is precious little hard evidence about how to attract engaged patients, or how to effectively turn "regular patients" into "engaged patients".

That latter point - that we could improve trial enrollment and completion rates by converting the (very large) pool of less-engaged patients - is a central tenet of the mHealth movement in clinical trials. Since technology can now accompany us almost anywhere, it would seem that we have an unprecedented opportunity to reach out and connect with current and potential trial participants.

However, there are signs that this promised revolution in patient engagement hasn't come about. From the decline in new app downloads to the startlingly high rate at which people abandon their wearable health devices, a growing body of evidence suggests that we aren't in fact making much progress towards increasing engagement. We appear to have underestimated the inertia of the disengaged patient.

So what can we do? We know people like their technology, but if they're not using it to engage with their healthcare decisions, we're no better off as a result.

Daniel Calvert, in a recent blog post at Parallel 6, offers an intriguing solution: he suggests we go beyond the patient and engage their wider group of loved ones. By engaging what Calvert calls the Support Circle - those people most likely to "encourage the health and well being of that patient as they undergo a difficult period of their life" - trial teams will find themselves with a more supported, and therefore more engaged, participant, with corresponding benefits to enrollment and retention.

Calvert outlines a number of potential mechanisms to get spouses, children, and other loved ones involved in the trial process:
During the consent process the patient can invite their support team in with them. A mobile application can be put on their phones enabling encouraging messages, emails, and texts to be sent. Loved ones can see if their companion or family member did indeed take today’s medication or make last Monday’s appointment. Gamification offers badges or pop-ups: “Two months of consecutive appointments attended” or “perfect eDiary log!” Loved ones can see those notifications, like/comment, and constantly encourage the patients. 
Supporting materials can also be included in the Support Circle application. There are a host of unknown terms to patients and their team. Glossaries, videos, FAQs, contact now, and so much more can be made available at their fingertips.
I have to admit I'm fascinated by Calvert's idea. I want him to be right: the picture of supportive, encouraging, loving spouses and children standing by to help a patient get through a clinical trial is an attractive one. So is the idea that they're just waiting for us to include them - all we need to do is a bit of digital communication with them to get them fully on board as members of the study team.

The problem, however, remains: we have absolutely no evidence that this approach will work, and no data showing that it is superior to other approaches to engaging trial patients.

(In fact, we may even have some indirect evidence that it may hinder enrollment: in trials that require active caregiver participation, such as those in Alzheimer's Disease, the caregiver requirement is often thought to be one of the barriers to patient enrollment.)

Calvert's idea is a good one, and it's worthy of consideration. More importantly, it's worthy of being rigorously tested against other recruitment and retention approaches. We have a lot of cool new technologies, and even more great ideas - we're not lacking for those. What we're lacking is hard data showing us how these things perform. What we especially need is comparative data showing how new tactics work relative to other approaches.

Over 5 years ago, I wrote a blog post bemoaning the sloppy approaches we take in trial recruitment - a fact made all the more painfully ironic by the massive intellectual rigor of the trials themselves. I'm not at all sure that we've made any real progress in those 5 years.

In my next post, I'll outline what I believe are some of the critical steps we need to take to improve the current situation, and start bringing some solid evidence to the table along with our ideas.

[Photo credit: Flickr user Matthew G, "Love (of technology)"]