Friday, November 16, 2012

The Accuracy of Patient Reported Diagnoses


Novelist Philip Roth recently got embroiled in a small spat with the editors of Wikipedia regarding the background inspiration for one of his books.  After a colleague attempted to correct the entry for The Human Stain on Roth's behalf, he received the following reply from a Wikipedia editor:
I understand your point that the author is the greatest authority on their own work, but we require secondary sources.
[Image caption: Report: 0% of decapitees could accurately recall their diagnosis]
The editor's response, as exasperating as it was to Roth, parallels the prevailing beliefs in clinical research about the value and reliability of Patient Reported Outcomes (PROs). On the one hand, who knows the patient better than the patient? On the other hand, our SOPs require expert physician assessment and diagnosis -- we, too, usually require secondary sources.

While recent FDA guidance has helped to solidify our approaches to incorporating PROs into traditionally-structured clinical trials, there are still a number of open questions about how far we can go with relying exclusively on what patients tell us about their medical conditions.  These questions come to the forefront when we consider the potential of "direct to patient" clinical trials, such as the recently-discontinued REMOTE trial from Pfizer, a pilot study that attempted to assess the feasibility of conducting a clinical trial without the use of local physician investigators.

Among other questions, the REMOTE trial forces us to ask: without physician assessment, how do we know the patients we recruit even have the condition being studied? And if we need more detailed medical data, how easy will it be to obtain from their regular physicians? Unfortunately, that study ended due to lack of enrollment, and Pfizer has not been particularly communicative about any lessons learned.

Luckily for the rest of us, at least one CRO, Quintiles, is taking steps to address some of these questions methodically and to generate data around them.  They are moving forward with what appears to be a small series of studies assessing the feasibility and accuracy of information collected in the direct-to-patient arena.  Their first step is a small pilot study of 50 patients with self-reported gout, conducted by both Quintiles and Outcomes Health Information Services.  The two companies have jointly published their data in the open-access Journal of Medical Internet Research.

(Before getting into the article's content, let me just emphatically state: kudos to the Quintiles and Outcomes teams for submitting their work to peer review, and to publication in an open access journal. Our industry needs much, much more of this kind of collaboration and commitment to transparency.)

The study itself is fairly straightforward: 50 patients were enrolled (out of 1250 US patients who were already in a Quintiles patient database with self-reported gout) and asked to complete an online questionnaire as well as permit access to their medical records.

The twin goals of the study were to assess the feasibility of collecting the patients' existing medical records and to determine the accuracy of the patients' self-reported diagnosis of gout.

To obtain patients' medical records, the study team used a belt-and-suspenders approach: first, the patients provided an electronic release along with their physicians' contact information. Then, a paper release form was also mailed to the patients, to be used as backup if the electronic release was insufficient.
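Just to make the mechanics concrete, here is how I might sketch that fallback logic if I were scripting the chart-request process. The function and outcome labels below are my own invention, not anything from the paper, and the sketch deliberately ignores real-world wrinkles like charts that simply miss the deadline:

    # Hypothetical sketch of the "belt-and-suspenders" release workflow
    # described above -- names and outcome labels are illustrative only.

    def request_chart(e_release_ok, paper_release_returned, paper_release_ok):
        """Return how (or whether) one patient's chart was obtained."""
        if e_release_ok:                    # physician accepted the electronic release
            return "received via electronic release"
        if not paper_release_returned:      # patient never returned the mailed paper form
            return "not received: no paper release from patient"
        if paper_release_ok:                # physician accepted the paper release
            return "received via paper release"
        return "not received: physician refused"

    # Example: physician insisted on paper, and the patient sent the form back
    print(request_chart(e_release_ok=False,
                        paper_release_returned=True,
                        paper_release_ok=True))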

To me, the results of the attempt to obtain the medical records are actually the most interesting part of the study, since this is going to be an issue in pretty much every direct-to-patient (DTP) trial that's attempted. Although the numbers are obviously quite small, the results are at least mildly encouraging:

  • 38 Charts Received
    • 28 required electronic release only
    • 10 required paper release
  • 12 Charts Not Received
    • 8 no chart mailed in time
    • 2 physician required paper release, patient did not provide
    • 2 physician refused

If the electronic release had been used on its own, 28 charts (56%) would have been available. Adding the suspenders of a follow-up paper form increased the total to a respectable 76%. The authors do not mention how aggressively they pursued the records from physicians, nor how long they waited before giving up, so it's difficult to determine how many of the 8 charts that went past the deadline could also potentially have been recovered.
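For anyone who wants to replay that arithmetic, here is a quick back-of-the-envelope calculation using the counts from the list above (out of the 50 enrolled patients):

    # Replaying the retrieval percentages from the counts above
    enrolled = 50
    electronic_only = 28      # charts released on the electronic form alone
    paper_followup = 10       # charts that needed the mailed paper form
    past_deadline = 8         # charts that simply never arrived in time

    print(f"electronic release alone:   {electronic_only / enrolled:.0%}")                     # 56%
    print(f"plus paper follow-up:       {(electronic_only + paper_followup) / enrolled:.0%}")  # 76%
    # If those 8 late charts had eventually arrived, the ceiling would have been:
    print(f"best case with late charts: {(electronic_only + paper_followup + past_deadline) / enrolled:.0%}")  # 92%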

Of the 38 charts received, 35 (92%) had direct confirmation of a gout diagnosis and 2 had indirect confirmation (a reference to gout medication).  Only 1 chart had no evidence for or against a diagnosis. So it is fair to conclude that these patients were highly reliable, at least insofar as their report of receiving a prior diagnosis of gout was concerned.

In some ways, though, this represents a pretty optimistic case. Most of these patients had been living with gout for many years, and "gout" is a relatively easy thing to remember.  Patients were not asked questions about the type of gout they had or any other details that might have been checked against their records.

The authors note that they "believe [this] to be the first direct-to-patient research study involving collection of patient-reported outcomes data and clinical information extracted from patient medical records." However, I think it's very worthwhile to bring up a comparison with this study, published almost 20 years ago in the Annals of the Rheumatic Diseases.  In that (pre-internet) study, researchers mailed a survey to 472 patients who had visited a rheumatology clinic 6 months previously. The researchers were therefore able to match every survey response with an existing medical record and compare the patients' self-reported diagnoses in much the same way as the current study.  Studying a broader set of conditions (the various forms of arthritis), the 1995 paper paints a more complex picture: patient accuracy varied considerably by disease, from very accurate (100% for ankylosing spondylitis, 90% for rheumatoid arthritis) to not very accurate at all (about 50% for psoriatic arthritis and osteoarthritis).

Interestingly, the Quintiles/Outcomes paper references a larger ongoing study in rheumatoid arthritis as well, which may introduce some of the complexity seen in the 1995 research.

Overall, I think this pilot does exactly what it set out to do: it gives us a sense of how patients and physicians will react to this type of research, and helps us better refine approaches for larger-scale investigations. I look forward to hearing more from this team.


Cascade, E., Marr, P., Winslow, M., Burgess, A., & Nixon, M. (2012). Conducting Research on the Internet: Medical Record Data Integration with Patient-Reported Outcomes. Journal of Medical Internet Research, 14(5). DOI: 10.2196/jmir.2202



Also cited: Rasooly, I., et al. (1995). Comparison of clinical and self reported diagnosis for rheumatology outpatients. Annals of the Rheumatic Diseases. DOI: 10.1136/ard.54.10.850

Image courtesy Flickr user stevekwandotcom.

Friday, October 12, 2012

The "Scandal" of "Untested" Generics


I am in the process of writing up a review of this rather terrible Forbes piece on the FDA recall of one manufacturer's version of generic 300 mg bupropion XL. However, that's going to take a while, so I thought I'd quickly cover just one of the points brought up there, since it seems to be causing a lot of confusion.

[Image caption: Forbes is shocked, SHOCKED to learn that things are happening the same way they always have: call Congress at once!]
The FDA’s review of the recall notes that when the generic was approved, only the 150 mg version was tested for bioequivalence in humans. The 300 mg version was approved based upon the 150 mg data as well as detailed information about the manufacturing and composition of both versions.

A number of people expressed surprise about this – they seemed to genuinely not be aware that a drug approval could happen in this way. The Forbes article stated that this was entirely inappropriate and worthy of Congressional investigation.

In fact, many strengths of generic drugs do not undergo in vivo bioequivalence and bioavailability testing as part of their review and approval. This is true in both the US and Europe. Here is a brief rundown of when and why such testing is waived, and why such waivers are neither new, nor shocking, nor unethical.

Title 21, Part 320 of the US Code of Federal Regulations is the regulatory foundation for bioequivalence testing of drugs.  Section 320.22 deals specifically with the conditions under which human testing should be waived. It is important to note that these regulations aren't new, and the laws they're based on aren't new either (in fact, the federal law is 20 years old, and was last updated 10 years ago).

By far the most common waiver is for lower dosage strengths. When a drug exists in many approved dosages, generally the highest dose is subject to human bioequivalence testing and the lower doses are approved based on the high-dose results supplemented by in vitro testing.

However, when higher doses carry risks of toxicity, the situation can be reversed, out of ethical concerns for the welfare of test subjects. So, for example, current FDA guidance for amiodarone – a powerful antiarrhythmic drug with lots of side effects – is that the maximum “safe” dose of 200 mg should be tested in humans, and that 100 mg, 300 mg, and 400 mg dosage formulations will be approved if the manufacturer also establishes “acceptable in-vitro dissolution testing of all strengths, and … proportional similarity of the formulations across all strengths”.

That last part is critically important: the generic manufacturer must submit additional evidence about how the doses work in vitro, as well as keep the proportions of inactive ingredients constant. It is this combination of in vivo bioequivalence, in vitro testing, and manufacturing controls that supports a sound scientific decision to approve the generic at various doses.

In fact, certain drugs are so toxic – most chemotherapies, for example – that performing a bioequivalence test in healthy humans is patently unethical. In many of those cases, generic approval is granted on the basis of formulation chemistry alone. For example, generic paclitaxel is waived from human testing (here is a waiver from 2001 – again demonstrating that there’s nothing terribly shocking or new about this process).

In the case of bupropion, FDA had significant concerns about the risk of seizures at the 300 mg dose level. Similar to the amiodarone example above, they issued guidance providing for a waiver of the higher dosage, but only based upon the combination of in vivo data from the 150 mg dose, in vitro testing, and manufacturing controls.
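To pull the thread together, here is a deliberately oversimplified sketch of that decision logic as I read it. This is my paraphrase for illustration, not an FDA algorithm, and it assumes the sponsor has already shown acceptable in vitro dissolution and proportionally similar formulations across all strengths:

    # Schematic restatement (my own, illustrative only) of the waiver logic
    # discussed above: pick which strength gets the in vivo bioequivalence
    # study, given toxicity limits; other strengths rely on in vitro data
    # plus manufacturing controls.

    def strength_needing_in_vivo_test(strengths_mg, max_safe_dose_mg=None):
        """Return the dosage strength to test in humans, or None if even
        the lowest strength is too toxic for healthy volunteers (in which
        case in vivo testing is waived entirely, as with many chemotherapies).
        """
        if max_safe_dose_mg is None:
            return max(strengths_mg)          # default: test the highest strength
        testable = [s for s in strengths_mg if s <= max_safe_dose_mg]
        if not testable:
            return None                       # full waiver on ethical grounds
        return max(testable)                  # test the highest *safe* strength

    # Amiodarone-style example: 200 mg is tested; 100, 300, and 400 mg are waived
    print(strength_needing_in_vivo_test([100, 200, 300, 400], max_safe_dose_mg=200))
    # Bupropion-style example: 150 mg is tested; 300 mg is waived
    print(strength_needing_in_vivo_test([150, 300], max_safe_dose_mg=150))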

You may not agree with the current system, and there may be room for improvement, but you cannot claim that it is new, unusual, or requiring congressional inquiry. It’s based on federal law, with significant scientific and ethical underpinnings.

Further reading: FDA Guidance for Industry: Bioavailability and Bioequivalence Studies for Orally Administered Drug Products — General Considerations

Thursday, October 11, 2012

TransCelerate and CDISC: The Relationship Explained


Updating my post from last month about the launch announcement for TransCelerate BioPharma, a nonprofit entity funded by 10 large pharmaceutical companies to “bring new medicines to patients faster”: one of the areas I had some concern about was the new company's move into the “development of clinical data standards”.

[Image caption: How about we transcelerate this website a bit?]
Some much-needed clarification has come by way of Wayne Kubick, the CTO of CDISC. In an article in Applied Clinical Trials, he lays out the relationship in a bit more detail:
TransCelerate has been working closely with CDISC for several months to see how they can help us move more quickly in the development of therapeutic area data standards.  Specifically, they are working to provide CDISC with knowledgeable staff to help us plan for and develop data standards for more than 55 therapeutic areas over the next five years.
And then again:
But the important thing to realize is that TransCelerate intends to help CDISC achieve its mission to develop therapeutic area data standards more rapidly by giving us greater access to skilled volunteers to contribute to standards development projects.   
So we have clarification on at least one point: TransCelerate will donate some level of additional skilled manpower to CDISC-led initiatives.

That’s a good thing, I assume. Kubick doesn't mention it, but I would venture to guess that “more skilled volunteers” is at or near the top of CDISC's wish list.

But it raises the question: why TransCelerate? Couldn't the 10 member companies have contributed this employee time already? Did we really need a new entity to organize a group of fresh volunteers? And if we did somehow need a coordinating entity to make this happen, why not use an existing group – one with, say, a broader level of support across the industry, such as PhRMA?

The promise of a group like TransCelerate is intriguing. The executional challenges, however, are enormous: I think it will be under constant pressure to move away from meaningful but very difficult work towards supporting more symbolic and easy victories.