Monday, January 14, 2013

Magical Thinking in Clinical Trial Enrollment


The many flavors of wish-based patient recruitment.

[Hopefully-obvious disclosure: I work in the field of clinical trial enrollment.]

When I'm discussing and recommending patient recruitment strategies with prospective clients, there is only one serious competitor I'm working against. I do not tailor my presentations in reaction to what other Patient Recruitment Organizations are saying, because they're not usually the thing that causes me the most problems. In almost all cases, when we lose out on a new study opportunity, we have lost to one opponent:

Need patients? Just add water!
Magical thinking.

Magical thinking comes in many forms, but in clinical trial enrollment it traditionally has two dominant flavors:

  • We won’t have any problems with enrollment because we have made it a priority within our organization.
    (This translates to: "we want it to happen, therefore it has to happen, therefore it will happen", but it doesn't sound quite as convincing that way, does it?)
  • We have selected sites that already have access to a large number of the patients we need.
    (I hear this pretty much 100% of the time. Even from people who understand that every trial is different and that past site performance is simply not a great predictor of future performance.)

A new form of magical thinking burst onto the scene a few years ago: the belief that the Internet will enable us to target and engage exactly the right patients. Specifically, some teams (aided by the, shall we say, less-than-completely-totally-true claims of "expert" vendors) began to believe that the web’s great capacity to narrowly target specific people – through Google search advertising, online patient communities, and general social media activities – would prove more than enough to deliver large numbers of trial participants. And deliver them fast and cheap to boot. Sadly, evidence has already started to emerge about the Internet’s failure to be a panacea for slow enrollment. As I and others have pointed out, online recruitment can certainly be cost effective, but it cannot be relied on to generate a sizable response. As a sole source, it tends to underdeliver even for small trials.

I think we are now seeing the emergence of the newest flavor of magical thinking: Big Data. Take this quote from recent coverage of the JP Morgan Healthcare Conference:
For instance, Phase II, that ever-vexing rubber-road matchmaker for promising compounds that just might be worthless. Identifying the right patients for the right drug can make or break a Phase II trial, [John] Reynders said, and Big Data can come in handy as investigators distill mountains of imaging results, disease progression readings and genotypic traits to find their target participants. 
The prospect of widespread genetic mapping coupled with the power of Big Data could fundamentally change how biotech does R&D, [Alexis] Borisy said. "Imagine having 1 million cancer patients profiled with data sets available and accessible," he said. "Think how that very large data set might work--imagine its impact on what development looks like. You just look at the database and immediately enroll a trial of ideal patients."
Did you follow the logic of that last sentence? You immediately enroll ideal patients ... and all you had to do was look at a database! Problem solved!

Before you go rushing off to get your company some Big Data, please consider the fact that the overwhelming majority of Phase 2 trials do not have a neat, predefined set of genotypic traits they’re looking to enroll. In fact, narrowly-tailored Phase 2 trials (such as recent registration trials of Xalkori and Zelboraf) actually enroll very quickly already, without the need for big databases. The reality for most drugs is exactly the opposite: they enter Phase 2 actively looking for signals that will help identify subgroups that benefit from the treatment.

Also, it’s worth pointing out that having a million data points in a database does not mean that you have a million qualified, interested, and nearby patients just waiting to be enrolled in your trial. As recent work in medical record queries bears out, the yield from these databases promises to be low, and there are enormous logistic, regulatory, and personal challenges in identifying, engaging, and consenting the actual human beings represented by the data.

More, even fresher flavors of magical thinking are sure to emerge over time. Our urge to hope that our problems will simply be washed away in a wave of cool new technology is too powerful to resist.

However, when the trial is important, and the costs of delay are high, clinical teams need to set the wishful thinking aside and ask for a thoughtful plan based on hard evidence. Fortunately, that requires no magic bean purchase.

Magic Beans picture courtesy of Flickr user sleepyneko

Thursday, December 20, 2012

All Your Site Are Belong To Us


'Competitive enrollment' is exactly that.

This is a graph I tend to show frequently to my clients – it shows the relative enrollment rates for two groups of sites in a clinical trial we'd been working on. The blue line is the aggregate rate of the 60-odd sites that attended our enrollment workshop, while the green line tracks enrollment for the 30 sites that did not attend the workshop. As a whole, the attendees were better enrollers than the non-attendees, but the performance of both groups was declining.

Happily, the workshop produced an immediate and dramatic increase in the enrollment rate of the sites who participated in it – they not only rebounded, but they began enrolling at a better rate than ever before. Those sites that chose not to attend the workshop became our control group, and showed no change in their performance.

The other day, I wrote about ENACCT's pilot program to improve enrollment. Five oncology research sites participated in an intensive, highly customized program to identify and address the issues that stood in the way of enrolling more patients. The sites were generally enthusiastic about the program, and felt it had a positive impact on their operations.

There was only one problem: enrollment didn't actually increase.

Here’s the data:

This raises an obvious question: how can we reconcile these disparate outcomes?

On the one hand, an intensive, multi-day, customized program showed no improvement in overall enrollment rates at the sites.

On the other, a one-day workshop with sixty sites (which addressed many of the same issues as the ENACCT pilot: communications, study awareness, site workflow, and patient relationships) resulted in an immediate and clear improvement in enrollment.

There are many possible answers to this question, but after a deeper dive into our own site data, I've become convinced that there is one primary driver at work: for all intents and purposes, site enrollment is a zero-sum game. Our workshop increased the accrual of patients into our study, but most of that increase came as a result of decreased enrollments in other studies at our sites.

Our workshop graph shows increased enrollment ... for one study. The ENACCT data is across all studies at each site. It stands to reason that if sites are already operating at or near their maximum capacity, then the only way to improve enrollment for your trial is to get the sites to care more about your trial than about other trials that they’re also participating in.

And that makes sense: many of the strategies and techniques that my team uses to increase enrollment are measurably effective, but there is no reason to believe that they result in permanent, structural changes to the sites we work with. We don’t redesign their internal processes; we simply work hard to make our sites like us and want to work with us, which results in higher enrollment. But only for our trials.

So the next time you see declining enrollment in one of your trials, your best bet is not that the patients have disappeared, but rather that your sites' attention has wandered elsewhere.


Tuesday, December 11, 2012

What (If Anything) Improves Site Enrollment Performance?

ENACCT has released its final report on the outcomes from the National Cancer Clinical Trials Pilot Breakthrough Collaborative (NCCTBC), a pilot program to systematically identify and implement better enrollment practices at five US clinical trial sites. Buried after the glowing testimonials and optimistic assessments is a grim bottom line: the pilot program didn't work.

Here are the monthly clinical trial accruals at each of the 5 sites. The dashed lines mark when the pilots were implemented:



4 of the 5 sites showed no discernible improvement. The one site that did show increasing enrollment appears to have been improving before any of the interventions kicked in.

This is a painful but important result for anyone involved in clinical research today, because the improvements put in place through the NCCTBC process were the product of an intensive, customized approach. Each site had 3 multi-day learning sessions to map out and test specific improvements to their internal communications and processes (a total of 52 hours of workshops). In addition, each site was provided tracking tools and assigned a coach to assist them with specific accrual issues.

That’s an extremely large investment of time and expertise for each site. If the results had been positive, it would have been difficult to project how NCCTBC could be scaled up to work at the thousands of research sites across the country. Unfortunately, we don’t even have that problem: the needle simply did not move.

While ENACCT plans a second round of pilot sites, I think we need to face a more sobering reality: we cannot squeeze more patients out of sites through training and process improvements. It is widely believed in the clinical research industry that sites are low-efficiency bottlenecks in the enrollment process. If we could just "fix" them, the thinking goes – streamline their workflow, improve their motivation – we could quickly improve the speed at which our trials complete. The data from the NCCTBC paints an entirely different picture, though. It shows us that even when we pour large amounts of time and effort into a tailored program of "evidence and practice-based changes", our enrollment ROI may be nonexistent.

I applaud the ENACCT team for this pilot, and especially for sharing the full monthly enrollment totals at each site. This data should cause clinical development teams everywhere to pause and reassess their beliefs about site enrollment performance and how to improve it.