Sunday, July 15, 2012

Site Enrollment Performance: A Better View

Pretty much everyone involved in patient recruitment for clinical trials seems to agree that "metrics" are, in some general sense, really really important. The state of the industry, however, is a bit dismal, with very little evidence of effort to communicate data clearly and effectively. Today I’ll focus on the Site Enrollment histogram, a tried-but-not-very-true standby in every trial.

Consider this graphic, showing enrolled patients at each site. It came through on a weekly "Site Newsletter" for a trial I was working on:



I chose this histogram not because it’s particularly bad, but because it’s supremely typical. Don’t get me wrong ... it’s really bad, but the important thing here is that it looks pretty much exactly like every site enrollment histogram in every study I’ve ever worked on.

This is a wasted opportunity. Whether we look at per-site enrollment with internal teams to develop enrollment support plans, or share this data with our sites to inform and motivate them, a good chart is one of the best tools we have. To illustrate this, let’s look at a few examples of better ways to look at the data.

If you really must do a static site histogram, make it as clear and meaningful as possible. 

This chart improves on the standard histogram in a few important ways:


Stateful histogram

  1. It looks better. This is not a minor point when part of our work is to engage sites and make them feel like they are part of something important. Actually, this graph is made clearer and more appealing mostly by the removal of useless attributes (extraneous whitespace, background colors, and unhelpful labels).
  2. It adds patient disposition information. Many graphs – like the one at the beginning of this post – are vague about who is being counted. Does "enrolled" include patients currently being screened, or just those randomized? Interpretations will vary from reader to reader. Instead, this chart makes patient status an explicit variable, without adding to the complexity of the presentation. It also provides a bit of information about recent performance, by showing patients who have been consented but not yet fully screened.
  3. It ranks sites by their total contribution to the study, not by the letters in the investigator’s name. And that is one of the main reasons we like to share this information with our sites in the first place.
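The ranking and status breakout described above can be sketched in a few lines. This is a minimal illustration with entirely made-up site names and counts, not data from any actual trial:

```python
# Minimal sketch: rank sites by total contribution, with patient
# disposition broken out explicitly (hypothetical data).

# Each site's counts by disposition: randomized, in screening, consented
sites = {
    "Site 101": {"randomized": 9, "screening": 2, "consented": 1},
    "Site 102": {"randomized": 3, "screening": 0, "consented": 0},
    "Site 103": {"randomized": 6, "screening": 3, "consented": 2},
}

def total(counts):
    """A site's total contribution across all disposition states."""
    return sum(counts.values())

# Sort by total contribution, largest first -- not alphabetically
ranked = sorted(sites.items(), key=lambda kv: total(kv[1]), reverse=True)

for name, counts in ranked:
    # Crude text "stacked bar": one symbol per disposition state
    bar = ("#" * counts["randomized"]
           + "+" * counts["screening"]
           + "." * counts["consented"])
    print(f"{name:10s} {bar} ({total(counts)})")
```

The same sort key and stacked-by-status structure carry over directly to a real charting library.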
Find Opportunities for Alternate Visualizations
 
There are many other ways in which essentially the same data can be re-sliced or restructured to underscore particular trends or messages. Here are two that I look at frequently, and often find worth sharing.

Then versus Now

Tornado chart

This tornado chart is an excellent way of showing site-level enrollment trajectory, with each site's prior (left) and subsequent (right) contributions separated out. This example spotlights activity over the past month, but for slower trials a larger timescale may be more appropriate. Also, how the data is sorted can be critical to the communication: this could have been ranked by total enrollment, but instead it sorts first on most-recent screening, clearly showing who’s picked up, who’s dropped off, and who’s remained constant (both good and bad).

This is especially useful when looking at a major event (e.g., pre/post protocol amendment), or where enrollment is expected to have natural fluctuations (e.g., in seasonal conditions).
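The sort order that drives the tornado chart can be sketched as follows. The site names and screening counts here are invented for illustration:

```python
# Sketch of the tornado-chart ordering: split each site's screenings
# into "prior" and "recent" (e.g., last month), then rank on recent
# activity so trajectory, not cumulative totals, drives the order.
sites = {
    "Site A": {"prior": 10, "recent": 0},  # dropped off
    "Site B": {"prior": 2,  "recent": 5},  # picked up
    "Site C": {"prior": 4,  "recent": 4},  # constant
}

# Sort on most-recent screenings first, breaking ties on the prior period
ranked = sorted(sites.items(),
                key=lambda kv: (kv[1]["recent"], kv[1]["prior"]),
                reverse=True)

for name, n in ranked:
    # Text stand-in for the tornado's left/right bars
    left = "<" * n["prior"]
    right = ">" * n["recent"]
    print(f"{left:>12s} |{name}| {right}")
```

Ranking by total enrollment instead would only require swapping the sort key for `kv[1]["prior"] + kv[1]["recent"]`.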

Net Patient Contribution

In many trials, site activation occurs in a more or less "rolling" fashion, with many sites not starting until later in the enrollment period. This makes simple enrollment histograms downright misleading, as they fail to differentiate sites by the length of time they’ve actually been able to enroll. Reporting enrollment rates (patients per site per month) is one straightforward way of compensating for this, but it has the unfortunate effect of showing extreme (and, most importantly, non-predictive) variance for sites that have not been enrolling for very long.

As a result, I prefer to measure each site in terms of its net contribution to enrollment, compared to what it was expected to do over the time it was open:
Net patient contribution

To clarify this, consider an example: A study expects sites to screen 1 patient per month. Both Site A and Site B have failed to screen a single patient so far, but Site A has been active for 6 months, whereas Site B has only been active 1 month.

On an enrollment histogram, both sites would show up as tied at 0. However, Site A’s 0 is a lot more problematic – and more predictive of future performance – than Site B’s 0. If I compare them to the benchmark instead, I show how many total screenings each site is below the study’s expectation: Site A is at -6, while Site B is at only -1, a much clearer representation of current performance.

This graphic has the added advantage of showing how the study as a whole is doing. Comparing the total volume of positive to negative bars gives the viewer an immediate visceral sense of whether the study is above or below expectations.
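The benchmark arithmetic from the Site A / Site B example can be sketched in code. The expected rate and the months-active figures are the hypothetical values from the example above:

```python
# Net patient contribution: actual screenings minus what the benchmark
# predicts for the time the site has been open.
EXPECTED_RATE = 1.0  # benchmark: patients screened per site per month

def net_contribution(screened, months_active, rate=EXPECTED_RATE):
    """How far a site is above (+) or below (-) expectation,
    given how long it has actually been open."""
    return screened - rate * months_active

# Both sites have screened 0 patients, but over different periods
site_a = net_contribution(screened=0, months_active=6)  # open 6 months
site_b = net_contribution(screened=0, months_active=1)  # open 1 month

print(site_a, site_b)  # -6.0 vs -1.0: same raw count, very different signal

# Summing across all sites gives the study-level picture: a net total
# above zero means the study as a whole is ahead of expectations.
study_net = site_a + site_b
```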

The above are just 3 examples – there is a lot more that can be done with this data. What is most important is that we first stop and think about what we’re trying to communicate, and then design clear, informative, and attractive graphics to help us do that.

Friday, July 13, 2012

Friday Flailing: Medical Gasses and the Law of the Excluded Middle

Aristotle never actually said "principium tertii exclusi", mostly because he didn't speak Latin.

Buried under the mass of attention and conversations surrounding the ACA last week, the FDA Safety and Innovation Act (FDASIA) contained a number of provisions that will have lasting effects in the pharmaceutical industry.  What little notice it did get tended to focus on PDUFA reauthorization and the establishment of fees for new generic drug applications (GDUFA).

(Tangent: other parts of the act are well worth looking into: NORD is happy about new incentives for rare disease research, the SWHR is happy about expanded reporting on sex and race in clinical trials, and antibiotic drug makers are happy about extra periods of market exclusivity.  A very good summary is available on the FDA Law Blog.)

So no one’s paid any attention to the Medical Gasses Safety Act, which formally defines medical gasses and even gives them their own Advisory Committee and user fees (I guess those will be MGUFAs?).

The Act’s opening definition is a bit of an eyebrow-raiser:
(2) The term ‘medical gas’ means a drug that is--
‘(A) manufactured or stored in a liquefied, non-liquefied, or cryogenic state; and
‘(B) is administered as a gas.
I’m clearly missing something here, because as far as I can tell, everything is either liquefied or non-liquefied.   This doesn’t seem to lend a lot of clarity to the definition.  And then, what to make of the third option?  How can there be a third option?  It’s been years since my college logic class, but I still remember the Law of the Excluded Middle – everything is either P or not-P. 

I was going to send an inquiry through to Congressman Leonard Lance (R-NJ), the bill’s original author, but his website regrets to inform me that he is “unable to reply to any email from constituents outside of the district.”

So I will remain trapped in Logical Limbo.  Enjoy your weekend.

Tuesday, July 10, 2012

Why Study Anything When You Already Know Everything?

If you’re a human being, in possession of one working, standard-issue human brain (and, for the remainder of this post, I’m going to assume you are), it is inevitable that you will fall victim to a wide variety of cognitive biases and mistakes.  Many of these biases result in our feeling much more certain about our knowledge of the world than we have any rational grounds for: from the Availability Heuristic, to the Dunning-Kruger Effect, to Confirmation Bias, there is an increasingly-well-documented system of ways in which we (and yes, that even includes you) become overconfident in our own judgment.

Over the years, scientists have developed a number of tools to help us overcome these biases in order to better understand the world.  In the biological sciences, one of our best tools is the randomized controlled trial (RCT).  In fact, randomization helps minimize biases so well that randomized trials have been suggested as a means of developing better governmental policy.

However, RCTs in general require an investment of time and money, and they need to be somewhat narrowly tailored.  As a result, they frequently become the target of people impatient with the process – especially those who perhaps feel themselves exempt from some of the above biases.

4 out of 5 Hammer Doctors agree: the world is 98% nail.

A shining example of this impatience-fortified-by-hubris can be found in a recent “Speaking of Medicine” blog post by Dr Trish Greenhalgh, with the mildly chilling title Less Research is Needed.  In it, the author finds a long list of things she feels to be so obvious that additional studies into them would be frivolous.  Among the things the author knows, beyond a doubt, is that patient education does not work, and electronic medical records are inefficient and unhelpful. 

I admit to being slightly in awe of Dr Greenhalgh’s omniscience in these matters. 

In addition to her “we already know the answer to this” argument, she also mixes in a completely different argument, which is more along the lines of “we’ll never know the answer to this”.  Of course, the upshot of that is identical: why bother conducting studies?  For this argument, she cites the example of coronary artery disease: since a large genomic study found only a small association with CAD heritability, Dr Greenhalgh tells us that any studies of different predictive methods are bound to fail and are thus not worth the effort (she specifically mentions “genetic, epigenetic, transcriptomic, proteomic, metabolic and intermediate outcome variables” as things she apparently already knows will not add anything to our understanding of CAD). 

As studies grow more global, and as we adapt to massive increases in computer storage and processing ability, I believe we will see an increase in this type of backlash.  And while physicians can generally be relied on to be at the forefront of the demand for more, not less, evidence, it is quite possible that a vocal minority of physicians will adopt this kind of strongly anti-research stance.  Dr Greenhalgh suggests that she is on the side of “thinking” when she opposes studies, but it is difficult to see this as anything more than an attempt to shut down critical inquiry in favor of deference to experts who are presumed to be fully-informed and bias-free. 

It is worthwhile for those of us engaged in trying to understand the world to be aware of these kinds of threats, and to take them seriously.  Dr Greenhalgh writes glowingly of a 10-year moratorium on research – presumably, we will all simply rely on her expertise to answer our important clinical questions.