Friday, November 16, 2012

The Accuracy of Patient Reported Diagnoses

Novelist Philip Roth recently got embroiled in a small spat with the editors of Wikipedia regarding the background inspiration for one of his books. After a colleague attempted to correct the entry for The Human Stain on Roth's behalf, he received the following reply from a Wikipedia editor:
I understand your point that the author is the greatest authority on their own work, but we require secondary sources.
[Image caption: Report: 0% of decapitees could accurately recall their diagnosis]
The editor's response, as exasperating as it was to Roth, parallels the prevailing beliefs in clinical research about the value and reliability of Patient Reported Outcomes (PROs). On the one hand, who knows the patient better than the patient? On the other hand, our SOPs require expert physician assessment and diagnosis -- we, too, usually require secondary sources.

While recent FDA guidance has helped to solidify our approaches to incorporating PROs into traditionally-structured clinical trials, there are still a number of open questions about how far we can go with relying exclusively on what patients tell us about their medical conditions.  These questions come to the forefront when we consider the potential of "direct to patient" clinical trials, such as the recently-discontinued REMOTE trial from Pfizer, a pilot study that attempted to assess the feasibility of conducting a clinical trial without the use of local physician investigators.

Among other questions, the REMOTE trial forces us to ask: without physician assessment, how do we know the patients we recruit even have the condition being studied? And if we need more detailed medical data, how easy will it be to obtain from their regular physicians? Unfortunately, that study ended due to lack of enrollment, and Pfizer has not been particularly communicative about any lessons learned.

Luckily for the rest of us, at least one CRO, Quintiles, is taking steps to methodically address and provide data for some of these questions. They are moving forward with what appears to be a small series of studies that assess the feasibility and accuracy of information collected in the direct-to-patient arena. Their first step is a small pilot study of 50 patients with self-reported gout, conducted by both Quintiles and Outcomes Health Information Services. The two companies have jointly published their data in the open-access Journal of Medical Internet Research.

(Before getting into the article's content, let me just emphatically state: kudos to the Quintiles and Outcomes teams for submitting their work to peer review, and to publication in an open access journal. Our industry needs much, much more of this kind of collaboration and commitment to transparency.)

The study itself is fairly straightforward: 50 patients were enrolled (out of 1250 US patients who were already in a Quintiles patient database with self-reported gout) and asked to complete an online questionnaire as well as permit access to their medical records.

The twin goals of the study were to assess the feasibility of collecting the patients' existing medical records and to determine the accuracy of the patients' self-reported diagnosis of gout.

To obtain patients' medical records, the study team used a belt-and-suspenders approach: first, the patients provided an electronic release along with their physicians' contact information. Then, a paper release form was also mailed to the patients, to be used as backup if the electronic release was insufficient.

To me, the results from the attempt at obtaining the medical records are actually the most interesting part of the study, since this is going to be an issue in pretty much every direct-to-patient (DTP) trial that's attempted. Although the numbers are obviously quite small, the results are at least mildly encouraging:

  • 38 Charts Received
    • 28 required electronic release only
    • 10 required paper release
  • 12 Charts Not Received
    • 8 no chart mailed in time
    • 2 physician required paper release, patient did not provide
    • 2 physician refused

If the electronic release had been used on its own, 28 charts (56%) would have been available. Adding the suspenders of a follow-up paper form increased the total to a respectable 76%. The authors do not mention how aggressively they pursued obtaining the records from physicians, nor how long they waited before giving up, so it's difficult to determine how many of the 8 charts that went past the deadline could also have been recovered.
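With only 50 patients, those point estimates come with wide error bars. As a rough illustration (this calculation is mine, not the paper's), here is a quick Python sketch of 95% Wilson score intervals around the two retrieval rates:

```python
import math

def wilson_ci(successes, n, z=1.96):
    """95% Wilson score confidence interval for a binomial proportion."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))
    return center - half, center + half

# Chart-retrieval rates out of the 50 enrolled patients
for label, k in [("electronic release alone", 28),
                 ("electronic plus paper follow-up", 38)]:
    lo, hi = wilson_ci(k, 50)
    print(f"{label}: {k}/50 = {k/50:.0%} (95% CI {lo:.0%} to {hi:.0%})")
```

The intervals span roughly 20 percentage points in each case, which is worth keeping in mind before generalizing from this pilot.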

Of the 38 charts received, 35 (92%) had direct confirmation of a gout diagnosis and 2 had indirect confirmation (a reference to gout medication).  Only 1 chart had no evidence for or against a diagnosis. So it is fair to conclude that these patients were highly reliable, at least insofar as their report of receiving a prior diagnosis of gout was concerned.
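The same caution applies to the accuracy figure: with only 38 charts, even a 92% point estimate carries real uncertainty. A similar sketch, this time using an exact (Clopper-Pearson) binomial interval (it assumes scipy is installed; the "direct or indirect" grouping is mine, not the authors'):

```python
from scipy.stats import binomtest  # exact binomial confidence interval

# Direct chart confirmation of the self-reported gout diagnosis: 35 of 38.
# Counting the 2 indirect confirmations (gout medication) as well: 37 of 38.
for label, k, n in [("direct confirmation", 35, 38),
                    ("direct or indirect confirmation", 37, 38)]:
    ci = binomtest(k, n).proportion_ci(confidence_level=0.95, method="exact")
    print(f"{label}: {k}/{n} = {k/n:.0%} "
          f"(95% CI {ci.low:.0%} to {ci.high:.0%})")
```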

In some ways, though, this represents a pretty optimistic case. Most of these patients had been living with gout for many years, and "gout" is a relatively easy thing to remember. Patients were not asked questions about the type of gout they had, or any other details that might have been checked against their records.

The authors note that they "believe [this] to be the first direct-to-patient research study involving collection of patient-reported outcomes data and clinical information extracted from patient medical records." However, I think it's very worthwhile to bring up a comparison with this study, published almost 20 years ago in the Annals of the Rheumatic Diseases. In that (pre-internet) study, researchers mailed a survey to 472 patients who had visited a rheumatology clinic 6 months previously. They were therefore able to match all of the survey responses with an existing medical record and compare the patients' self-reported diagnoses in much the same way as the current study. Because it covered a more complex set of diseases, the 1995 paper paints a more nuanced picture: patient accuracy varied considerably by disease, from very accurate (100% for those with ankylosing spondylitis, 90% for rheumatoid arthritis) to not very accurate at all (about 50% for psoriatic arthritis and osteoarthritis).
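One practical implication worth spelling out, though neither paper frames it this way: if self-report is the sole entry criterion, its accuracy dictates how much a DTP trial must over-recruit to end up with a given number of chart-confirmed patients. A back-of-the-envelope sketch (the target of 100 confirmed patients is hypothetical; the accuracy figures are taken from the two studies):

```python
import math

target = 100  # hypothetical number of chart-confirmed patients needed

# If only a fraction `accuracy` of self-reported diagnoses hold up against
# the chart, hitting the target requires proportionally more recruits.
for condition, accuracy in [("gout (this pilot, direct confirmation)", 35 / 38),
                            ("rheumatoid arthritis (Rasooly 1995)", 0.90),
                            ("osteoarthritis (Rasooly 1995)", 0.50)]:
    needed = math.ceil(target / accuracy)
    print(f"{condition}: recruit ~{needed} self-reported patients "
          f"for {target} confirmed")
```

For a condition like gout the overhead is modest, but for something like osteoarthritis a purely self-report-driven trial would need to recruit twice its target.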

Interestingly, the Quintiles/Outcomes paper also references a larger ongoing study in rheumatoid arthritis, which may surface some of the complexity seen in the 1995 research.

Overall, I think this pilot does exactly what it set out to do: it gives us a sense of how patients and physicians will react to this type of research, and helps us better refine approaches for larger-scale investigations. I look forward to hearing more from this team.

Cascade, E., Marr, P., Winslow, M., Burgess, A., & Nixon, M. (2012). Conducting Research on the Internet: Medical Record Data Integration with Patient-Reported Outcomes. Journal of Medical Internet Research, 14(5). DOI: 10.2196/jmir.2202

Also cited: Rasooly, I., et al. (1995). Comparison of clinical and self reported diagnosis for rheumatology outpatients. Annals of the Rheumatic Diseases, 54(10). DOI: 10.1136/ard.54.10.850

Image courtesy Flickr user stevekwandotcom.

