Friday, March 25, 2011

Mind the Gap

Modern clinical trials in the pharmaceutical industry are monuments of rigorous analysis. Trial designs are critically examined and debated extensively during the planning phase – we strain to locate possible sources of bias in advance, and adjust to minimize or compensate for them. We collect enormous quantities of efficacy and safety data using standardized, pre-validated techniques. Finally, a team of statisticians parses the data (adhering, of course, to an already-set-in-stone Statistical Analysis Plan to avoid the perils of post-hoc analysis) … then we turn both the data and the analysis over in their entirety to regulatory authorities, who in turn do all they can to verify that the results are accurate, correctly interpreted, and clinically relevant.

It is ironic, then, that our management of these trials is so casual and driven by personal opinions. We all like to talk a good game about metrics, but after the conversation we lapse back into our old, distinctly un-rigorous habits. Examples of this are everywhere once you start to look for them; one that just caught my eye is from a recent Center Watch article:

Survey: Large sites winning more trials than small

Are large sites—hospitals, academic medical centers—getting all the trials, while smaller sites continue to fight for the work that’s left over?

That’s what results of a recent survey by Clinical Research Site Training (CRST) seem to indicate. The nearly 20-year-old site-training firm surveyed 500 U.S. sites in December 2010, finding that 66% of large sites say they have won more trials in the last three years. Smaller sites weren’t asked specifically, but anecdotally many small and medium-sized sites reported fewer trials in recent years.
Let me repeat that last part again, with emphasis: “Smaller sites weren’t asked specifically, but anecdotally…”

At this point, the conversation should stop. Nothing to see here, folks -- we don’t actually have evidence of anything, only a survey data point juxtaposed with someone’s personal impression. Move along.

So what are we to do then? I think there are two clear areas where we collectively need to improve:

1. Know what we don’t know. The sad and simple fact is that there are a lot of things we just don’t have good data on. We need to resist the urge to grasp at straws to fill those knowledge gaps – it leaves the false impression that we’ve learned something.

2. Learn from our own backyard. As I mentioned earlier, good analytic practices are pervasive on the executional side of trials. We need to think more rigorously about our data needs, earlier in the process.

The good news is that we have everything we need to make this a reality – we just need to have a bit of courage to admit the gap (or, occasionally, chasm) of our ignorance on a number of critical issues and develop a thoughtful plan forward.
