Showing posts with label DTP. Show all posts

Tuesday, May 23, 2017

REMOTE Redux: DTP trials are still hard

Maybe those pesky sites are good for something after all. 

It's been six years since Pfizer boldly announced the launch of its "clinical trial in a box". The REMOTE trial was designed to be entirely online, and involved no research sites: study information and consent were delivered via the web, and medications and diaries were shipped directly to patients' homes.

Despite the initial fanfare, within a month REMOTE's registration on ClinicalTrials.gov was quietly reduced from 600 to 283. The smaller trial ended not with a bang but a whimper, having randomized only 18 patients in over a year of recruiting.

Still, the allure of direct-to-patient clinical trials remains strong, due to a confluence of two factors. First, there is a frenzy of interest in running "patient centric clinical trials". Sponsors are scrambling to do something – anything – to demonstrate they have shifted to a patient-centered mindset. We cannot seem to agree on what this means (as a great illustration, a recent article in Forbes on "How Patients Are Changing Clinical Trials" contained no specific examples of actual trials that had been changed by patients), but running a trial that directly engages patients wherever they are seems like it could fit the bill.

The less-openly-discussed other factor leading to interest in these DIY trials is sponsors' continuing willingness to heap almost all of the blame for slow-moving studies onto their research sites. If it’s all the sites’ fault – the reasoning goes – then cutting them out of the process should result in trials that are both faster and cheaper. (There are reasons to be skeptical about this, as I have discussed in the past, but the desire to drop all those pesky sites is palpable.)

However, while a few proof-of-concept studies have been run, there does not seem to have been another attempt at a full-blown direct-to-patient clinical trial. Other pilots have been more successful, but used fairly lightweight protocols. For all its problems, REMOTE was a seriously ambitious project that attempted to package a full-blown interventional clinical trial, not an observational study.

In this context, it's great to see published results of the TAPIR Trial in vasculitis, which as far as I can tell is the first real attempt to run a DIY trial of a similar magnitude to REMOTE.

TAPIR was actually two parallel trials, identical in every respect except for their sites: one trial used a traditional group of 8 sites, while the other was virtual and recruited patients from anywhere in the country. So this was a real-time, head-to-head assessment of site performance.

And the results after a full two years of active enrollment?
  • Traditional sites: 49 enrolled
  • Patient centric: 10 enrolled
Even though six years have passed, and online/mobile communications are even more ubiquitous, we still see the exact same struggle to enroll patients.

Maybe it’s time to stop blaming the sites? To be fair, they didn’t exactly set the world on fire – and I’m guessing the total cost of activating the 8 sites significantly exceeded the costs of setting up the virtual recruitment and patient logistics. But still, the site-less, “patient centric” approach once again came up astonishingly short.


Krischer J, Cronholm PF, Burroughs C, McAlear CA, Borchin R, Easley E, Davis T, Kullman J, Carette S, Khalidi N, Koening C, Langford CA, Monach P, Moreland L, Pagnoux C, Specks U, Sreih AG, Ytterberg S, Merkel PA, & Vasculitis Clinical Research Consortium. (2017). Experience With Direct-to-Patient Recruitment for Enrollment Into a Clinical Trial in a Rare Disease: A Web-Based Study. Journal of Medical Internet Research, 19 (2). PMID: 28246067

Friday, November 16, 2012

The Accuracy of Patient Reported Diagnoses


Novelist Philip Roth recently got embroiled in a small spat with the editors of Wikipedia regarding the background inspiration for one of his books. After a colleague attempted to correct the entry for The Human Stain on Roth's behalf, he received the following reply from a Wikipedia editor:
I understand your point that the author is the greatest authority on their own work, but we require secondary sources.
[Image caption: Report: 0% of decapitees could accurately recall their diagnosis]
The editor's response, as exasperating as it was to Roth, parallels the prevailing beliefs in clinical research about the value and reliability of Patient Reported Outcomes (PROs). On the one hand, who knows the patient better than the patient? On the other hand, our SOPs require expert physician assessment and diagnosis -- we, too, usually require secondary sources.

While recent FDA guidance has helped to solidify our approaches to incorporating PROs into traditionally-structured clinical trials, there are still a number of open questions about how far we can go with relying exclusively on what patients tell us about their medical conditions.  These questions come to the forefront when we consider the potential of "direct to patient" clinical trials, such as the recently-discontinued REMOTE trial from Pfizer, a pilot study that attempted to assess the feasibility of conducting a clinical trial without the use of local physician investigators.

Among other questions, the REMOTE trial forces us to ask: without physician assessment, how do we know the patients we recruit even have the condition being studied? And if we need more detailed medical data, how easy will it be to obtain from their regular physicians? Unfortunately, that study ended due to lack of enrollment, and Pfizer has not been particularly communicative about any lessons learned.

Luckily for the rest of us, at least one CRO, Quintiles, is taking steps to methodically address, and provide data on, some of these questions. They are moving forward with what appears to be a small series of studies assessing the feasibility and accuracy of information collected in the direct-to-patient arena. The first step is a small pilot study of 50 patients with self-reported gout, conducted jointly by Quintiles and Outcomes Health Information Services. The two companies have published their data in the open-access Journal of Medical Internet Research.

(Before getting into the article's content, let me just emphatically state: kudos to the Quintiles and Outcomes teams for submitting their work to peer review, and to publication in an open access journal. Our industry needs much, much more of this kind of collaboration and commitment to transparency.)

The study itself is fairly straightforward: 50 patients were enrolled (out of 1250 US patients who were already in a Quintiles patient database with self-reported gout) and asked to complete an online questionnaire as well as permit access to their medical records.

The twin goals of the study were to assess the feasibility of collecting the patients' existing medical records and to determine the accuracy of the patients' self-reported diagnosis of gout.

To obtain patients' medical records, the study team used a belt-and-suspenders approach: first, the patients provided an electronic release along with their physicians' contact information. Then, a paper release form was also mailed to the patients, to be used as backup if the electronic release was insufficient.

To me, the results from the attempt at obtaining the medical records are actually the most interesting part of the study, since this is going to be an issue in pretty much every DTP trial that's attempted. Although the numbers are obviously quite small, the results are at least mildly encouraging:

  • 38 Charts Received
    • 28 required electronic release only
    • 10 required paper release
  • 12 Charts Not Received
    • 8 no chart mailed in time
    • 2 physician required paper release, patient did not provide
    • 2 physician refused

If the electronic release had been used on its own, 28 charts (56%) would have been available. Adding the suspenders of a follow-up paper form increased the total to a respectable 76%. The authors do not mention how aggressively they pursued the records from physicians, nor how long they waited before giving up, so it's difficult to determine how many of the 8 charts that missed the deadline could also have been recovered.
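For what it's worth, the percentages check out against the raw counts. A quick tally, using only the numbers reported in the paper:

```python
# Chart-retrieval outcomes from the pilot, as reported (n = 50).
received = {
    "electronic release only": 28,
    "paper release required": 10,
}
not_received = {
    "no chart mailed in time": 8,
    "physician required paper release, patient did not provide": 2,
    "physician refused": 2,
}

total = sum(received.values()) + sum(not_received.values())  # 50 patients

# Retrieval rate with the electronic release alone vs. with the paper backup.
print(f"electronic only: {received['electronic release only'] / total:.0%}")  # → 56%
print(f"with paper backup: {sum(received.values()) / total:.0%}")             # → 76%
```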

Of the 38 charts received, 35 (92%) had direct confirmation of a gout diagnosis and 2 had indirect confirmation (a reference to gout medication).  Only 1 chart had no evidence for or against a diagnosis. So it is fair to conclude that these patients were highly reliable, at least insofar as their report of receiving a prior diagnosis of gout was concerned.

In some ways, though, this represents a fairly optimistic case. Most of these patients had been living with gout for many years, and "gout" is a relatively easy thing to remember. Patients were not asked about the type of gout they had, or any other details that might have been checked against their records.

The authors note that they "believe [this] to be the first direct-to-patient research study involving collection of patient-reported outcomes data and clinical information extracted from patient medical records." However, I think it's very worthwhile to bring up a comparison with this study, published almost 20 years ago in the Annals of the Rheumatic Diseases. In that (pre-internet) study, researchers mailed a survey to 472 patients who had visited a rheumatology clinic 6 months previously. They were therefore able to match all of the survey responses with an existing medical record, and compare the patients' self-reported diagnoses in much the same way as the current study. Studying a more complex set of diseases (arthritis), the 1995 paper paints a more complex picture: patient accuracy varied considerably by disease, from very accurate (100% for those suffering from ankylosing spondylitis, 90% for rheumatoid arthritis) to not very accurate at all (about 50% for psoriatic arthritis and osteoarthritis).

Interestingly, the Quintiles/Outcomes paper references a larger ongoing study in rheumatoid arthritis as well, which may introduce some of the complexity seen in the 1995 research.

Overall, I think this pilot does exactly what it set out to do: it gives us a sense of how patients and physicians will react to this type of research, and helps us better refine approaches for larger-scale investigations. I look forward to hearing more from this team.


Cascade E, Marr P, Winslow M, Burgess A, & Nixon M. (2012). Conducting Research on the Internet: Medical Record Data Integration with Patient-Reported Outcomes. Journal of Medical Internet Research, 14 (5). DOI: 10.2196/jmir.2202



Also cited: Rasooly I, et al. (1995). Comparison of clinical and self reported diagnosis for rheumatology outpatients. Annals of the Rheumatic Diseases. DOI: 10.1136/ard.54.10.850

Image courtesy Flickr user stevekwandotcom.

Tuesday, June 19, 2012

Pfizer Shocker: Patient Recruitment is Hard

In what appears to be, oddly enough, an exclusive announcement to Pharmalot, Pfizer will be discontinuing its much-discussed “Trial in a box”—a clinical study run entirely from a patient’s home. Study drug and other supplies would be shipped directly to each patient, with consent, communication, and data collection happening entirely via the internet.

The trial piloted a number of innovations, including some novel and intriguing Patient Reported Outcome (PRO) tools. Unfortunately, most of these likely never got the benefit of a full test, as the trial was killed due to low patient enrollment.

The fact that a trial designed to enroll fewer than 300 patients couldn't meet its enrollment goal is sobering enough, but in this case the pain is even greater because the study was not limited to site databases and/or catchment areas. In theory, anyone with overactive bladder in the entire United States was a potential participant.

And yet, it didn’t work.  In a previous interview with Pharmalot, Pfizer’s Craig Lipset mentions a number of recruitment channels – he specifically cites Facebook, Google, Patients Like Me, and Inspire, along with other unspecified “online outreach” – that drove “thousands” of impressions and “many” registrations, but these did not amount to, apparently, even close to the required number of consented patients. 

Two major questions come to mind:

1.    How were patients “converted” into the study?  One of the more challenging aspects of patient recruitment is often getting research sites engaged in the process.  Many – perhaps most – patients are understandably on the fence about being in a trial, and the investigator and study coordinator play the single most critical role in helping each patient make their decision. You cannot simply replace their skill and experience with a website (or “multi-media informed consent module”). 

2.    Did they understand the patient funnel?  I am puzzled by the mention of "thousands of hits" to the website. That may sound like a lot if you're not used to engaging patients online, but in recruitment terms it isn't.
[Image caption: Jakob Nielsen's famous "Lurker Funnel" seems worth mentioning here...]
Despite some of the claims made by patient communities, it is perfectly reasonable to expect that less than 1% of visitors (even somewhat pre-qualified visitors) will end up consenting into the study.  If you’re going to rely on the internet as your sole means of recruitment, you should plan on needing closer to 100,000 visitors (and, critically: negotiate your spending accordingly). 
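To make that funnel arithmetic concrete, here is a minimal back-of-the-envelope sketch. The stage rates below are purely illustrative assumptions (nothing from the trial itself); the point is just that an overall conversion well under 1% pushes the required traffic toward six figures:

```python
# Back-of-the-envelope recruitment funnel estimate.
# All stage rates are illustrative assumptions, not figures from REMOTE.

def required_visitors(target_enrolled, stage_rates):
    """Work backwards from the enrollment target through each funnel stage.

    stage_rates: fraction of people surviving each stage, in order
    (e.g. visitor -> registrant -> pre-qualified -> consented).
    """
    n = target_enrolled
    for rate in reversed(stage_rates):
        n = n / rate
    return round(n)

# Hypothetical funnel: 10% of visitors register, 10% of registrants
# pre-qualify, 30% of qualified registrants consent (~0.3% overall).
print(required_visitors(283, [0.10, 0.10, 0.30]))  # → 94333
```

With REMOTE's reduced target of 283 and an overall conversion around 0.3%, the required traffic lands close to that 100,000-visitor mark.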

In the prior interview, Lipset says:
I think some of the staunch advocates for using online and social media for recruitment are still reticent to claim silver bullet status and not use conventional channels in parallel. Even the most aggressive and bullish social media advocates, generally, still acknowledge you’re going to do this in addition to, and not instead of more conventional channels.

This makes Pfizer’s exclusive reliance on these channels all the more puzzling.  If no one is advocating disintermediating the sites and using only social media, then why was this the strategy?

I am confident that someone will try again with this type of trial in the near future.  Hopefully, the Pfizer experience will spur them to invest in building a more rigorous recruitment strategy before they start.

[Update 6/20: Lipset weighed in via the comments section of the Pharmalot article above to clarify that other DTP aspects of the trial were tested and "worked VERY well".  I am not sure how to evaluate that clarification, given the fact that those aspects couldn't have been tested on a very large number of patients, but it is encouraging to hear that more positive experiences may have come out of the study.]