Sunday, April 24, 2011

Social Networking for Clinical Research

No matter what, negative clinical trial results are sad. We can appreciate, intellectually, that clinical equipoise is important and that negative results are a natural consequence of conducting ethical trials, but it is impossible not to feel disappointed when yet another promising therapy fails to hold up.

However, the negative results published today in Nature Biotechnology on a groundbreaking trial in ALS deserve to be celebrated. The trial was conducted exclusively through PatientsLikeMe, the online medical social network that serves as a forum for patients in all disease areas to “share real-world health experiences.”

According to a very nice write-up in the Wall Street Journal, the trial was conceived and initiated by ALS patients who were part of the PatientsLikeMe ALS site:


Jamie Heywood, chairman and co-founder of PatientsLikeMe, said the idea for the new study came from patients. After the 2008 paper reporting lithium slowed down the disease in 16 ALS patients, some members of the site suggested posting their experiences with the drug in an online spreadsheet to figure out if it was working. PatientsLikeMe offered instead to run a more rigorous observational study with members of the network to increase chances of getting a valid result.

The study included standardized symptom reporting from 596 patients (149 taking lithium and 447 matched controls). After 9 months, the patients taking lithium showed almost no difference in ALS symptoms compared to their controls, and preliminary (negative) results were released in late 2008. Although the trial was neither randomized nor blinded – significant methodological issues, to be sure – it is still exciting for a number of reasons.
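
For readers curious about the mechanics, the comparison ultimately amounts to contrasting symptom trajectories between lithium-takers and controls matched on how their disease was already progressing. Below is a minimal sketch of that idea in Python – emphatically not the authors' published matching algorithm, and with every field (baseline, slope, decline_9mo, on_lithium) and every number invented for illustration:

    # Illustrative sketch only – NOT the PatientsLikeMe matching algorithm.
    # All fields and values here are hypothetical.
    import random
    from statistics import mean

    random.seed(0)

    def simulate_patient(on_lithium):
        # Hypothetical ALSFRS-R-like record: baseline score, pre-treatment slope
        # (points lost per month), and observed decline over the next 9 months.
        baseline = random.uniform(20, 40)
        slope = random.uniform(-1.5, -0.5)
        decline_9mo = 9 * slope + random.gauss(0, 2)   # no true drug effect simulated
        return {"on_lithium": on_lithium, "baseline": baseline,
                "slope": slope, "decline_9mo": decline_9mo}

    treated = [simulate_patient(True) for _ in range(149)]
    pool = [simulate_patient(False) for _ in range(2000)]

    def match_controls(patient, pool, k=3):
        # Crude nearest-neighbour match on baseline score and progression slope.
        def distance(c):
            return (abs(c["baseline"] - patient["baseline"])
                    + 10 * abs(c["slope"] - patient["slope"]))
        return sorted(pool, key=distance)[:k]

    treated_decline = mean(p["decline_9mo"] for p in treated)
    control_decline = mean(c["decline_9mo"]
                           for p in treated
                           for c in match_controls(p, pool))

    print(f"Mean 9-month decline, lithium:  {treated_decline:.2f}")
    print(f"Mean 9-month decline, controls: {control_decline:.2f}")

The real study, of course, matched on recorded patient histories using a considerably more sophisticated algorithm; the point here is only the shape of a matched observational comparison.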

First, the study was conducted with incredible speed. Only 9 months elapsed between PatientsLikeMe deploying its tool to users and the release of topline results. In contrast, two more traditional, controlled clinical trials that were initiated to verify the first study’s results had not even managed to enroll their first patient during that time. In many cases like this – especially those looking at new uses of established, generic drugs – private industry has little incentive to conduct an expensive trial. And academic researchers tend to move at a pace that, while not quite glacial, is not as rapid as acutely-suffering patients would like.

(The only concern I have about speed is the time it took to get this paper published. Why was there a 2+ year gap between results and publication?)

Second, this trial represents one of the best uses of “off-label” patient experience that I know of. Many of the physicians I talk to struggle with off-label, patient-initiated treatment: they cannot support it, but it is difficult to argue with a patient when there is so little hard evidence. This trial shows an intelligent path toward tapping into and systematically organizing thousands of individual off-label experiences to produce something clinically useful. As the authors state in the Nature Biotechnology paper:


Positive results from phase 1 and phase 2 trials can lead to changes in patient behavior, particularly when a drug is readily available. [...] The ongoing availability of a surveillance mechanism such as ours might help provide evidence to support or refute self-experimentation.

Ironically, the fact that the trial found no benefit for lithium may prove to be its most far-reaching benefit. A positive trial would have been open to the criticism that it could not compensate for the placebo effect. Because these results run counter to any expected placebo effect, they lend strong support to the conclusion that the study was thoughtfully designed and conducted. I hope this will be an immense encouragement to others looking to take this method forward.

A lot has been written over the past 3-4 years about the enormous power of social media to change healthcare as we know it. In general, I have been skeptical of these claims, as most fail to plausibly explain the connection between "Lots of people on Facebook" and "Improved clinical outcomes". I applaud the patients and staff at PatientsLikeMe for finding a way to work together to break new ground in this area.

Monday, April 11, 2011

Accelerated Approvals are Too Fast, Except When They're Too Slow

A great article in Medscape reports on two unrelated publications about the FDA’s process for granting (and following up on) Accelerated Approvals of oncology drugs.

First, the latest issue of the Journal of the National Cancer Institute carries a very solid review of all oncology drugs approved through the accelerated process since 1992. The review, written by FDA personnel, is chiefly concerned with the slow pace of confirmatory Phase 3 trials – over a third (18 of 47) have not yet been completed, and even those that have been completed took considerable time. The authors consider process changes and fines as viable means for the FDA to encourage timely completion.

Second, over at the New England Journal of Medicine, Dr. Bruce Chabner has a perspective piece that looks at the flip side: he argues that some compounds should be considered even earlier for accelerated approval, using the example of Plexxikon’s much-heralded PLX4032, which showed an amazing 80% response rate in metastatic melanoma (albeit in a very small sample of 38 patients).
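
To put that small-sample caveat in numbers: an 80% response rate observed in roughly 38 patients still comes with a wide confidence interval. Here is a quick back-of-the-envelope Clopper-Pearson calculation (Python/SciPy; the responder count of 30 out of 38 is my approximation of the reported rate, not a figure from the paper):

    # Rough 95% Clopper-Pearson interval for an ~80% response rate in 38 patients.
    # 30 of 38 responders is an approximation used for illustration only.
    from scipy.stats import beta

    k, n, alpha = 30, 38, 0.05
    lower = beta.ppf(alpha / 2, k, n - k + 1)
    upper = beta.ppf(1 - alpha / 2, k + 1, n - k)
    print(f"Observed response rate: {k / n:.0%}")
    print(f"95% CI: {lower:.0%} to {upper:.0%}")   # roughly 63% to 90%

The point estimate may well hold up, but an interval that wide is a useful reminder of how much uncertainty a 38-patient sample carries.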

I would argue that we are just now starting to accumulate enough experience to have a very good conversation about accelerated approval and how to improve it – still, fewer than 50 data points (47 approved indications) means we need to remind ourselves that we're still mostly in the land of anecdote. Even so, it may be time to ask: how much does delay truly cost us in terms of overall health? What is the cost of delayed approval (how many patients suffer from lack of access), and, correspondingly, what is the cost of premature approval and delayed confirmation (how many patients are exposed to ineffective and toxic treatments)?
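
One way to make that question concrete is a simple expected-harm comparison. The sketch below is purely structural: every parameter (incidence, the chance the drug truly works, benefit, toxicity rate, length of delay) is a made-up placeholder, so the output means nothing – but it shows the two quantities we would actually want to estimate:

    # Back-of-the-envelope structure of the trade-off; all numbers are hypothetical.
    new_patients_per_year = 5000     # hypothetical annual incidence for the indication
    p_drug_truly_works = 0.5         # prior probability the drug is genuinely effective
    patients_helped_per_year = 1000  # hypothetical benefit if it does work
    serious_toxicity_rate = 0.05     # hypothetical rate of serious treatment-related harm
    delay_years = 3                  # hypothetical wait for confirmatory results

    # Cost of delay: benefit forgone while an effective drug sits unapproved.
    cost_of_delay = p_drug_truly_works * patients_helped_per_year * delay_years

    # Cost of premature approval: patients seriously harmed by a drug that turns out not to work.
    cost_of_premature = ((1 - p_drug_truly_works) * new_patients_per_year
                         * serious_toxicity_rate * delay_years)

    print(f"Expected patients denied benefit by waiting:  {cost_of_delay:.0f}")
    print(f"Expected patients harmed by approving early:  {cost_of_premature:.0f}")

Plugging real estimates into a structure like this – which is exactly the kind of data the 47 accelerated approvals are beginning to give us – is what would turn the question from rhetoric into policy.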

The good news, to me, is that we're finally starting to collect enough information to begin answering these questions rationally.

Monday, April 4, 2011

Nice WSJ article on p-values

The Wall Street Journal has a brief but useful lay overview of the concept of statistical significance. Without mentioning them by name, it provides accurate synopses of some of the least understood aspects of clinical trial data (the related-but-quite-different concept of clinical significance and the problem of multiplicity). Although ostensibly about the US Supreme Court's refusal to accept statistical significance as a standard for public disclosure of adverse event reports in its recent Matrixx ruling, the article has broad applicability, and I'm always happy to see these concepts clearly articulated.
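
For anyone who wants to see the multiplicity problem the article hints at, a few lines of simulation make the point: test enough endpoints at p < 0.05 and "significant" results appear even when there is nothing to find. (A Python sketch; the 20-endpoint, 50-patients-per-arm setup is invented purely for illustration.)

    # Multiplicity in action: 20 endpoints, no real effects, alpha = 0.05.
    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(42)
    n_experiments, n_endpoints, n_per_arm = 1000, 20, 50

    false_positives = 0
    for _ in range(n_experiments):
        for _ in range(n_endpoints):
            treatment = rng.normal(0, 1, n_per_arm)   # drawn from the same distribution...
            control = rng.normal(0, 1, n_per_arm)     # ...so any "effect" is pure noise
            if ttest_ind(treatment, control).pvalue < 0.05:
                false_positives += 1

    print(f"Average 'significant' endpoints per trial: {false_positives / n_experiments:.2f}")

On average about one of the twenty noise-only endpoints clears p < 0.05 in each simulated trial (20 × 0.05 = 1), which is exactly why corrections such as Bonferroni (testing each endpoint at 0.05/20) exist.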