Monday, April 4, 2011

Nice WSJ article on p-values

The Wall Street Journal has a brief but useful lay overview of the concept of statistical significance. Without mentioning them by name, it provides accurate synopses of some of the least understood aspects of clinical trial data (the related-but-quite-different concept of clinical significance and the problem of multiplicity). Although ostensibly about the US Supreme Court's refusal to accept statistical significance as a standard for public disclosure of adverse event reports in its recent Matrixx ruling, the article has broad applicability, and I'm always happy to see these concepts clearly articulated.
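Both of those concepts lend themselves to a quick demonstration. Here's a minimal Python sketch of my own (the numbers and the blood-pressure scenario are invented for illustration, not taken from the article): first, a clinically trivial difference that comes out statistically significant simply because the sample is enormous; second, how fast the chance of a stray false positive grows as you test more endpoints.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Statistical vs. clinical significance: with a big enough sample, a
# clinically trivial difference yields a tiny p-value. Two hypothetical
# arms of 50,000 patients differ by 0.5 mmHg in systolic blood pressure,
# far less than anything a clinician would act on.
control = rng.normal(loc=140.0, scale=15.0, size=50_000)
treated = rng.normal(loc=139.5, scale=15.0, size=50_000)
print(f"p = {stats.ttest_ind(treated, control).pvalue:.1e}")  # tiny p, trivial effect

# Multiplicity: test k independent endpoints at alpha = 0.05 and the
# chance of at least one false positive is 1 - 0.95**k, not 0.05.
for k in (1, 5, 10, 20):
    print(f"{k:2d} endpoints: P(at least one p < 0.05 by chance) = {1 - 0.95**k:.2f}")
```

The second half is just arithmetic: at a 0.05 threshold, twenty independent looks give you roughly a two-in-three chance of a spurious "finding."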

Friday, March 25, 2011

Mind the Gap

Modern clinical trials in the pharmaceutical industry are monuments of rigorous analysis. Trial designs are critically examined and debated extensively during the planning phase – we strain to locate possible sources of bias in advance, and adjust to minimize or compensate for them. We collect enormous quantities of efficacy and safety data using standardized, pre-validated techniques. Finally, a team of statisticians parses the data (adhering, of course, to an already-set-in-stone Statistical Analysis Plan to avoid the perils of post-hoc analysis) … then we turn both the data and the analysis over in their entirety to regulatory authorities, who in turn do all they can to verify that the results are accurate, correctly interpreted, and clinically relevant.
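(As an aside, the peril that a locked-down Statistical Analysis Plan guards against is easy to simulate. Here's a toy Python sketch, with invented numbers and a purely hypothetical subgrouping covariate, that goes fishing through post-hoc subgroups of a trial in which the treatment truly does nothing. Roughly a third of such null trials will cough up a "significant" subgroup.)

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)

def smallest_subgroup_p(n=400, n_subgroups=8):
    """Simulate a null trial, then fish for an effect across post-hoc subgroups."""
    treated = rng.integers(0, 2, size=n).astype(bool)   # random 1:1 assignment
    outcome = rng.normal(size=n)                        # no true treatment effect
    subgroup = rng.integers(0, n_subgroups, size=n)     # hypothetical covariate
    pvals = [
        stats.ttest_ind(outcome[(subgroup == g) & treated],
                        outcome[(subgroup == g) & ~treated]).pvalue
        for g in range(n_subgroups)
    ]
    return min(pvals)

# Across many simulated null trials, how often does the fishing "work"?
n_sims = 2_000
hits = sum(smallest_subgroup_p() < 0.05 for _ in range(n_sims)) / n_sims
print(f"Null trials with a 'significant' post-hoc subgroup: {hits:.2f}")
# Expect roughly 1 - 0.95**8, about 0.34, far above the nominal 0.05.
```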

It is ironic, then, that our management of these trials is so casual and driven by personal opinions. We all like to talk a good game about metrics, but after the conversation we lapse back into our old, distinctly un-rigorous habits. Examples of this are everywhere once you start to look for them; one that just caught my eye is from a recent CenterWatch article:

Survey: Large sites winning more trials than small

Are large sites—hospitals, academic medical centers—getting all the trials, while smaller sites continue to fight for the work that’s left over?

That’s what results of a recent survey by Clinical Research Site Training (CRST) seem to indicate. The nearly 20-year-old site-training firm surveyed 500 U.S. sites in December 2010, finding that 66% of large sites say they have won more trials in the last three years. Smaller sites weren’t asked specifically, but anecdotally many small and medium-sized sites reported fewer trials in recent years.
Let me repeat that last part, with emphasis: “Smaller sites weren’t asked specifically, but anecdotally…”

At this point, the conversation should stop. Nothing to see here, folks -- we don’t actually have evidence of anything, only a survey data point juxtaposed with someone’s personal impression -- move along.

So what are we to do then? I think there are two clear areas where we collectively need to improve:

1. Know what we don’t know. The sad and simple fact is that there are a lot of things we just don’t have good data on. We need to resist the urge to grasp at straws to fill those knowledge gaps – it leaves the false impression that we’ve learned something.

2. Learn from our own backyard. As I mentioned earlier, good analytic practices are already pervasive on the design and analysis side of our trials. We need to bring that same rigor to trial execution, thinking through our operational data needs earlier in the process.

The good news is that we have everything we need to make this a reality – we just need to have a bit of courage to admit the gap (or, occasionally, chasm) of our ignorance on a number of critical issues and develop a thoughtful plan forward.

Thursday, March 24, 2011

People Who Disagree with Me Tend to End Up Being Investigated by the Federal Government

I don’t think this qualifies yet as a trend, but two disturbing announcements came right back to back last week:

First: As you’ve probably heard, KV Pharmaceutical caused quite a stir when they announced the pricing for their old-yet-new drug Makena. In response, Senators Sherrod Brown (D-OH) and Amy Klobuchar (D-MN) sent a letter to the FTC demanding they “initiate a formal investigation into any potential anticompetitive conduct” by KV. In explaining his call for the investigation, Brown notes:

Since KV Pharmaceuticals announced the intended price hike, I called on KV Pharmaceuticals to immediately reconsider their decision, but to this date the company continues to defend this astronomical price increase.

Second: One week after an FDA Advisory Committee voted 13 to 4 to recommend approving Novartis’s COPD drug indacaterol, Public Citizen wrote a letter to the US Office for Human Research Protections requesting that Novartis be investigated for conducting the very trials that supplied the evidence for that vote. The reason? Despite the fact that the FDA requested the trials be placebo-controlled, Public Citizen feels that Novartis should not have allowed patients to be on placebo. The letter shows no apparent consideration for the idea that a large number of thoughtful, well-informed people considered the design of these trials and came to the conclusion that they were ethical (not only the FDA, but the independent Institutional Review Boards and Ethics Committees that oversaw each trial). Instead, Public Citizen blithely “look[s] forward to OHRP’s thorough and careful investigation of our allegations.”

The upshot of these two announcements seems to be: “we don’t like what you’re doing, and since we can’t get you to stop, we’ll try to initiate a federal investigation.” Even if neither of these efforts succeeds, they will still cause the companies involved to spend a significant amount of time and money defending themselves. In fact, maybe that’s the point: neither effort seems like a serious claim that actual laws were broken, but rather just an attempt at intimidation.