
Tuesday, May 23, 2017

REMOTE Redux: DTP trials are still hard

Maybe those pesky sites are good for something after all. 

It's been six years since Pfizer boldly announced the launch of its "clinical trial in a box". The REMOTE trial was designed to be entirely online, and involved no research sites: study information and consent were delivered via the web, and medications and diaries were shipped directly to patients' homes.

Despite the initial fanfare, within a month REMOTE's ClinicalTrials.gov registration was quietly revised, with its enrollment target cut from 600 patients to 283. The smaller trial ended not with a bang but a whimper, having randomized only 18 patients in over a year of recruiting.

Still, the allure of direct-to-patient clinical trials remains strong, due to a confluence of two factors. The first is a frenzy of interest in running "patient centric clinical trials": sponsors are scrambling to show they are doing something – anything – to demonstrate that they have shifted to a patient-centered mindset. We cannot seem to agree on what this means (as a great illustration, a recent article in Forbes on "How Patients Are Changing Clinical Trials" contained no specific examples of actual trials that had been changed by patients), but running a trial that directly engages patients wherever they are seems like it could work.

The other, less openly discussed factor leading to interest in these DIY trials is sponsors' continuing willingness to heap almost all of the blame for slow-moving studies onto their research sites. If it’s all the sites’ fault – the reasoning goes – then cutting them out of the process should result in trials that are both faster and cheaper. (There are reasons to be skeptical about this, as I have discussed in the past, but the desire to drop all those pesky sites is palpable.)

However, while a few proof-of-concept studies have been run since then, there doesn't seem to have been another attempt at a full-blown direct-to-patient clinical trial. Other pilots have been more successful, but they used fairly lightweight protocols. For all its problems, REMOTE was a seriously ambitious project: it attempted to package a full interventional clinical trial, not an observational study.

In this context, it's great to see published results of the TAPIR Trial in vasculitis, which as far as I can tell is the first real attempt to run a DIY trial of a similar magnitude to REMOTE.

TAPIR was actually two parallel trials, identical in every respect except for their sites: one trial used a traditional group of 8 sites, while the other was virtual and recruited patients from anywhere in the country. So this was a real-time, head-to-head assessment of site performance.

And the results after a full two years of active enrollment?
  • Traditional sites: 49 enrolled
  • Patient centric: 10 enrolled
Even though six years have passed, and online/mobile communications are even more ubiquitous, we still see the exact same struggle to enroll patients.

Maybe it’s time to stop blaming the sites? To be fair, they didn’t exactly set the world on fire – and I’m guessing the total cost of activating the 8 sites significantly exceeded the costs of setting up the virtual recruitment and patient logistics. But still, the site-less, “patient centric” approach once again came up astonishingly short.


Krischer J, Cronholm PF, Burroughs C, McAlear CA, Borchin R, Easley E, Davis T, Kullman J, Carette S, Khalidi N, Koening C, Langford CA, Monach P, Moreland L, Pagnoux C, Specks U, Sreih AG, Ytterberg S, Merkel PA, & the Vasculitis Clinical Research Consortium (2017). Experience With Direct-to-Patient Recruitment for Enrollment Into a Clinical Trial in a Rare Disease: A Web-Based Study. Journal of Medical Internet Research, 19(2). PMID: 28246067

Tuesday, March 18, 2014

These Words Have (Temporarily) Relocated

Near the end of last year, I had the bright idea of starting a second blog, Placebo Lead-In, to capture a lot of smaller items that I found interesting but wasn't going to work up into a full-blown, 1000 word post.

According to Murphy’s Law, or the Law of Unintended Consequences, or the Law of Biting Off More Than You Can Chew, or some such similar iron rule of the universe, what happened next should have been predictable.

First, my team at CAHG Trials launched a new blog, First Patient In. FPI is dedicated to an open discussion of patient recruitment ideas, and I’m extremely proud of what we've published so far.

Next, I was invited to be a guest blogger for the upcoming Partnerships in Clinical Trials Conference.

Suddenly, I've gone from 1 blog to 4. And while my writing output appears to have increased, it definitely hasn't quadrupled. So this blog has been quiet for a bit too long as a result.

The good news is that the situation is temporary - Partnerships will actually happen at the end of this month. (If you’re going: drop me a line and let’s meet. If you’re not: you really should come and join us!) My contributions to FPI will settle into a monthly post, as I have a fascinating and clever team to handle most of the content.

In case you've missed it, then, here is a brief summary of my posts elsewhere over the past 2 months.

First Patient In


Partnerships in Clinical Trials



Please take a look, and I will see you back here soon.

[Photo credit: detour sign via Flickr user crossley]

Wednesday, February 27, 2013

It's Not Them, It's You

Are competing trials slowing yours down? Probably not.

[Image caption: If they don't like your trial, EVERYTHING ELSE IN THE WORLD is competition for their attention.]
Rahlyn Gossen has a provocative new blog post up on her website entitled "The Patient Recruitment Secret". In it, she makes a strong case for considering site commitment to a trial – in the form of their investment of time, effort, and interest – to be the single largest driver of patient enrollment.

The reasoning behind this idea is clear and quite persuasive:
Every clinical trial that is not yours is a competing clinical trial. 
Clinical research sites have finite resources. And with research sites being asked to take on more and more duties, those resources are only getting more strained. Here’s what this reality means for patient enrollment. 
If research site staff are working on other clinical trials, they are not working on your clinical trial. Nor are they working on patient recruitment for your clinical trial. To excel at patient enrollment, you need to maximize the time and energy that sites spend recruiting patients for your clinical trial.
Much of this fits together very nicely with a point I raised in a post a few months ago, showing that improvements in site enrollment performance may often be made at the expense of other trials.

However, I would add a qualifier to these discussions: the number of active "competing" trials at a site is not a reliable predictor of enrollment performance. In other words, selecting sites who are not working on a lot of other trials will in no way improve enrollment in your trial.

This is an important point because, as Gossen points out, asking the number of other studies is a standard habit of sponsors and CROs on site feasibility questionnaires. In fact, many sponsors can get very hung up on competing trials – to the point of excluding potentially good sites that they feel are working on too many other things.

This came to a head recently when we were brought in to consult on a study experiencing significant enrollment difficulty. The sponsor was very concerned about competing trials at the sites – there was a belief that such competition was a big contributor to sluggish enrollment.

As part of our analysis, we collected updated information on competitive trials. Given the staggered nature of the trial's startup, we then calculated time-adjusted Net Patient Contributions for each site (for more information on that, see my write-up here).

We then cross-referenced competing trials to enrollment performance. The results were very surprising: the quantity of other trials had no effect on how the sites were doing.  Here's the data:

[Figure: each site's enrollment performance versus the number of other trials it is running. Each site is a point; good enrollers (higher up) and poor enrollers (lower down) are virtually identical in terms of how many concurrent trials they were running. Competing trials do not appear to substantially impact rates of enrollment.]

Since running into this result, I've looked at the relationship between the number of competing trials in CRO feasibility questionnaires and final site enrollment for many of the trials we've worked on. In each case, the "competing" trials did not serve as even a weak predictor of eventual site performance.
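(If you want to run this check against your own feasibility and enrollment data, it only takes a few lines of code. Here is a minimal sketch in Python – the site names and numbers below are invented for illustration, not our actual data – that correlates each site's count of concurrent trials against its time-adjusted net patient contribution.)

# Illustrative sketch only: made-up numbers, not the study data discussed above.
# Correlate each site's number of concurrent ("competing") trials with its
# time-adjusted net patient contribution (actual screenings minus expected).
from statistics import correlation  # available in Python 3.10+

sites = {
    # site_id: (concurrent_trials, net_patient_contribution)
    "Site 01": (2, 4),
    "Site 02": (9, 3),
    "Site 03": (5, -2),
    "Site 04": (1, -5),
    "Site 05": (7, 1),
    "Site 06": (3, 0),
}

competing = [n for n, _ in sites.values()]
contribution = [c for _, c in sites.values()]

# A Pearson r near zero means the number of competing trials tells you
# essentially nothing about how well a site will enroll.
r = correlation(competing, contribution)
print(f"Pearson r, competing trials vs. net contribution: {r:.2f}")

Running the same comparison on real data is a quick way to test whether "number of competing trials" actually deserves a place on your feasibility questionnaire.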

I agree with Gossen's fundamental point that a site's interest and enthusiasm for your trial will help increase enrollment at that site. However, we need to do a better job of thinking about the best ways of measuring that interest to understand the magnitude of the effect that it truly has. And, even more importantly, we have to avoid reliance on substandard proxy measurements such as "number of competing trials", because those will steer us wrong in site selection. In fact, almost everything we tend to collect on feasibility questionnaires appears to be non-predictive and potentially misleading; but that's a post for another day.

[Image credit: research distractions courtesy of Flickr user ronocdh.]

Tuesday, January 15, 2013

Holding Your Breath Also Might Work

Here's a fitting postscript to yesterday's article about wishful-thinking-based enrollment strategies: we received a note from a research site this morning. The site had opted out of my company's comprehensive recruitment campaign, telling the sponsor they preferred to recruit patients their own way.

Here's the latest update from the coordinator:
I've found one person and have called a couple of times, but no return calls.  I will be sending this potential patient a letter this week.  I'm keeping my fingers crossed in finding someone soon!
They don't want to participate in a broad internet/broadcast/advocacy group program, but it's OK -- they have their fingers crossed!

Monday, January 14, 2013

Magical Thinking in Clinical Trial Enrollment


The many flavors of wish-based patient recruitment.

[Hopefully-obvious disclosure: I work in the field of clinical trial enrollment.]

When I'm discussing and recommending patient recruitment strategies with prospective clients, there is only one serious competitor I'm working against. I do not tailor my presentations in reaction to what other Patient Recruitment Organizations are saying, because they're not usually the thing that causes me the most problems. In almost all cases, when we lose out on a new study opportunity, we have lost to one opponent:

[Image caption: Need patients? Just add water!]

Magical thinking.

Magical thinking comes in many forms, but in clinical trial enrollment it traditionally has two dominant flavors:

  • We won’t have any problems with enrollment because we have made it a priority within our organization.
    (This translates to: "we want it to happen, therefore it has to happen, therefore it will happen", but it doesn't sound quite as convincing that way, does it?)
  • We have selected sites that already have access to a large number of the patients we need.
    (I hear this pretty much 100% of the time. Even from people who understand that every trial is different and that past site performance is simply not a great predictor of future performance.)

A new form of magical thinking burst onto the scene a few years ago: the belief that the Internet will enable us to target and engage exactly the right patients. Specifically, some teams (aided by the, shall we say, less-than-completely-totally-true claims of "expert" vendors) began to believe that the web’s great capacity to narrowly target specific people – through Google search advertising, online patient communities, and general social media activities – would prove more than enough to deliver large numbers of trial participants. And deliver them fast and cheap to boot. Sadly, evidence has already started to emerge about the Internet’s failure to be a panacea for slow enrollment. As I and others have pointed out, online recruitment can certainly be cost effective, but cannot be relied on to generate a sizable response. As a sole source, it tends to underdeliver even for small trials.

I think we are now seeing the emergence of the newest flavor of magical thinking: Big Data. Take this quote from recent coverage of the JP Morgan Healthcare Conference:
For instance, Phase II, that ever-vexing rubber-road matchmaker for promising compounds that just might be worthless. Identifying the right patients for the right drug can make or break a Phase II trial, [John] Reynders said, and Big Data can come in handy as investigators distill mountains of imaging results, disease progression readings and genotypic traits to find their target participants. 
The prospect of widespread genetic mapping coupled with the power of Big Data could fundamentally change how biotech does R&D, [Alexis] Borisy said. "Imagine having 1 million cancer patients profiled with data sets available and accessible," he said. "Think how that very large data set might work--imagine its impact on what development looks like. You just look at the database and immediately enroll a trial of ideal patients."
Did you follow the logic of that last sentence? You immediately enroll ideal patients ... and all you had to do was look at a database! Problem solved!

Before you go rushing off to get your company some Big Data, please consider the fact that the overwhelming majority of Phase 2 trials do not have a neat, predefined set of genotypic traits they’re looking to enroll. In fact, narrowly-tailored phase 2 trials (such as recent registration trials of Xalkori and Zelboraf) actually enroll very quickly already, without the need for big databases. The reality for most drugs is exactly the opposite: they enter phase 2 actively looking for signals that will help identify subgroups that benefit from the treatment.

Also, it’s worth pointing out that having a million data points in a database does not mean that you have a million qualified, interested, and nearby patients just waiting to be enrolled in your trial. As recent work in medical record queries bears out, the yield from these databases promises to be low, and there are enormous logistic, regulatory, and personal challenges in identifying, engaging, and consenting the actual human beings represented by the data.

More, even fresher flavors of magical thinking are sure to emerge over time. Our urge to hope that our problems will just be washed away in a wave of cool new technology is just too powerful to resist.

However, when the trial is important, and the costs of delay are high, clinical teams need to set the wishful thinking aside and ask for a thoughtful plan based on hard evidence. Fortunately, that requires no magic bean purchase.

Magic Beans picture courtesy of Flickr user sleepyneko

Thursday, December 20, 2012

All Your Site Are Belong To Us


'Competitive enrollment' is exactly that.

This is a graph I tend to show frequently to my clients – it shows the relative enrollment rates for two groups of sites in a clinical trial we'd been working on. The blue line is the aggregate rate of the 60-odd sites that attended our enrollment workshop, while the green line tracks enrollment for the 30 sites that did not attend the workshop. As a whole, the attendees were better enrollers than the non-attendees, but the performance of both groups was declining.

Happily, the workshop produced an immediate and dramatic increase in the enrollment rate of the sites who participated in it – they not only rebounded, but they began enrolling at a better rate than ever before. Those sites that chose not to attend the workshop became our control group, and showed no change in their performance.

The other day, I wrote about ENACCT's pilot program to improve enrollment. Five oncology research sites participated in an intensive, highly customized program to identify and address the issues that stood in the way of enrolling more patients. The sites in general were highly enthused about the program, and felt it had a positive impact on their operations.

There was only one problem: enrollment didn't actually increase.

Here’s the data:

This raises an obvious question: how can we reconcile these disparate outcomes?

On the one hand, an intensive, multi-day, customized program showed no improvement in overall enrollment rates at the sites.

On the other, a one-day workshop with sixty sites (which addressed many of the same issues as the ENACCT pilot: communications, study awareness, site workflow, and patient relationships) resulted in an immediate and clear improvement in enrollment.

There are many possible answers to this question, but after a deeper dive into our own site data, I've become convinced that there is one primary driver at work: for all intents and purposes, site enrollment is a zero-sum game. Our workshop increased the accrual of patients into our study, but most of that increase came as a result of decreased enrollments in other studies at our sites.

Our workshop graph shows increased enrollment ... for one study. The ENACCT data is across all studies at each site. It stands to reason that if sites are already operating at or near their maximum capacity, then the only way to improve enrollment for your trial is to get the sites to care more about your trial than about other trials that they’re also participating in.

And that makes sense: many of the strategies and techniques that my team uses to increase enrollment are measurably effective, but there is no reason to believe that they result in permanent, structural changes to the sites we work with. We don’t redesign their internal processes; we simply work hard to make our sites like us and want to work with us, which results in higher enrollment. But only for our trials.

So the next time you see declining enrollment in one of your trials, your best bet is not that the patients have disappeared, but rather that your sites' attention has wandered elsewhere.


Tuesday, December 11, 2012

What (If Anything) Improves Site Enrollment Performance?

ENACCT has released its final report on the outcomes from the National Cancer Clinical Trials Pilot Breakthrough Collaborative (NCCTBC), a pilot program to systematically identify and implement better enrollment practices at five US clinical trial sites. Buried after the glowing testimonials and optimistic assessments is a grim bottom line: the pilot program didn't work.

Here are the monthly clinical trial accruals at each of the 5 sites. The dashed lines mark when the pilots were implemented:



4 of the 5 sites showed no discernible improvement. The one site that did show increasing enrollment appears to have been improving before any of the interventions kicked in.

This is a painful but important result for anyone involved in clinical research today, because the improvements put in place through the NCCTBC process were the product of an intensive, customized approach. Each site had 3 multi-day learning sessions to map out and test specific improvements to their internal communications and processes (a total of 52 hours of workshops). In addition, each site was provided tracking tools and assigned a coach to assist them with specific accrual issues.

That’s an extremely large investment of time and expertise for each site. If the results had been positive, it would have been difficult to project how NCCTBC could be scaled up to work at the thousands of research sites across the country. Unfortunately, we don’t even have that problem: the needle simply did not move.

While ENACCT plans a second round of pilot sites, I think we need to face a more sobering reality: we cannot squeeze more patients out of sites through training and process improvements. It is widely believed in the clinical research industry that sites are low-efficiency bottlenecks in the enrollment process. If we could just "fix" them, the thinking goes – streamline their workflow, improve their motivation – we could quickly improve the speed at which our trials complete. The data from the NCCTBC paints an entirely different picture, though. It shows us that even when we pour large amounts of time and effort into a tailored program of "evidence and practice-based changes", our enrollment ROI may be nonexistent.

I applaud the ENACCT team for this pilot, and especially for sharing the full monthly enrollment totals at each site. This data should cause clinical development teams everywhere to pause and reassess their beliefs about site enrollment performance and how to improve it.

Tuesday, September 25, 2012

What We Can Anticipate from TransCelerate


TransCelerate: Pharma's great kumbaya moment?
Last week, 10 of the largest pharmaceutical companies caused quite a hullaballoo in the research world with their announcement that they were anteing up to form a new nonprofit entity “to identify and solve common drug development challenges with the end goals of improving the quality of clinical studies and bringing new medicines to patients faster”. The somewhat-awkwardly-named TransCelerate BioPharma immediately got an enthusiastic reception from industry watchers and participants, mainly due to the perception that it was well poised to attack some of the systemic causes of delays and cost overruns that plague clinical trials today.

I myself was caught up in the breathless excitement of the moment, immediately tweeting after reading the initial report:

Over the past few days, though, I've had time to re-read and think more about the launch announcement, and to dial down my enthusiasm considerably. I still think it’s a worthwhile effort, but it’s probably not fair to expect anything that fundamentally changes much in the way of current trial execution.

Mostly, I’m surprised by the specific goals selected, which seem for the most part either tangential to the real issues in modern drug development or stepping into areas where an all-big-pharma committee isn’t the best tool for the job. I’m also very concerned that a consortium like this would launch without a clearly-articulated vision of how it fits in with, and adds to, the ongoing work of other key players – the press release is loaded with positive, but extremely vague, wording about how TransCelerate will work with, but be different from, groups such as the CTTI and CDISC. The new organization also appears to have no formal relationship with any CRO organizations.  Given the crucial and deeply embedded nature of CROs in today’s research, this is not a detail to be worked out later; it is a vital necessity if any worthwhile progress is to be made.

Regarding the group’s goals, here is what their PR had to say:
Five projects have been selected by the group for funding and development, including: development of a shared user interface for investigator site portals, mutual recognition of study site qualification and training, development of risk-based site monitoring approach and standards, development of clinical data standards, and establishment of a comparator drug supply model.
Let’s take these five projects one by one, to try to get a better picture of TransCelerate’s potential impact:

1. Development of a shared user interface for investigator site portals

Depending on how it’s implemented, the impact of this could range from “mildly useful” to “mildly irksome”. Sure, I hear investigators and coordinators complain frequently about all the different accounts they have to keep track of, so having a single front door to multiple sponsor sites would be a relief. However, I don’t think that the problem of too many usernames cracks anyone’s “top 20 things wrong with clinical trial execution” list – it’s a trivial detail. Aggravating, but trivial.

Worse, if you do it wrong and develop a clunky interface, you’ll get a lot more grumbling about making life harder at the research site. And I think there’s a high risk of that, given that this is in effect software development by committee – and the committee is a bunch of companies that do not actually specialize in software development.

In reality, the best answer to this is probably a lot simpler than we imagine: if we had a neutral, independent body (such as the ACRP) set up a single sign-on (SSO) registry for investigators and coordinators, then all sponsors, CROs, and IVRS/IWRS/CDMS systems could simply set themselves up as service providers. (This works in the same way that many people today can log into disparate websites using their existing Google or Facebook accounts.) TransCelerate might do better sponsoring and promoting an external standard than trying to develop an entirely new platform of its own.
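(To make that suggestion a bit more concrete: under standard OpenID Connect, each sponsor portal would simply redirect investigators to the shared registry to log in. The sketch below is hypothetical – the registry URL, client ID, and scope are invented – but the mechanics are the same ones behind "log in with Google" on consumer sites today.)

# Hypothetical sketch: a sponsor portal delegating investigator login to a
# shared, industry-wide identity provider via the standard OpenID Connect
# authorization-code flow. All URLs and identifiers below are invented.
import secrets
from urllib.parse import urlencode

IDP_AUTHORIZE_URL = "https://sso.investigator-registry.example.org/authorize"

def build_login_redirect(portal_callback_url: str) -> str:
    """Return the URL the portal sends an investigator to for single sign-on."""
    params = {
        "response_type": "code",              # ask the registry for an authorization code
        "client_id": "sponsor-portal-demo",   # issued when the portal registers with the IdP
        "redirect_uri": portal_callback_url,  # where the registry sends the investigator back
        "scope": "openid profile",            # request basic identity claims only
        "state": secrets.token_urlsafe(16),   # anti-CSRF value, verified on the callback
    }
    return f"{IDP_AUTHORIZE_URL}?{urlencode(params)}"

print(build_login_redirect("https://portal.sponsor-a.example.com/oidc/callback"))

The hard parts – credentialing, password resets, keeping training attestations current – would then live with the registry rather than with each sponsor's portal team.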

2. Mutual recognition of study site qualification and training

This is an excellent step forward. It’s also squarely in the realm of “ideas so obvious we could have done them 10 years ago”. Forcing site personnel to attend multiple iterations of the same training seminars simply to ensure that you’ve collected enough binders full of completion certificates is a sad CYA exercise with no practical benefit to anyone.

This will hopefully re-establish some goodwill with investigators. However, it’s important to note that it’s pretty much a symbolic act in terms of efficiency and cost savings. Nothing wrong with that – heaven knows we need some relationship wins with our increasingly-disillusioned sites – but let’s not go crazy thinking that this represents a real cause of wasted time or money. In fact, it’s pretty clear that one of the reasons we’ve lived with the current site-unfriendly system for so long is that it didn’t really cost us anything to do so.

(It’s also worth pointing out that more than a few biotechs have already figured out, usually with CRO help, how to ensure that site personnel are properly trained and qualified without subjecting them to additional rounds of training.)

3. Development of risk-based site monitoring approach and standards

The consensus belief and hope is that risk-based monitoring is the future of clinical trials. Ever since FDA’s draft guidance on the topic hit the street last year, it’s been front and center at every industry event. It will, unquestionably, lead to cost savings (although some of those savings will hopefully be reinvested into more extensive centralized monitoring).  It will not necessarily shave a significant amount of time off the trials, since in many trials getting monitors out to sites to do SDV is not a rate-limiting factor, but it should still at the very least result in better data at lower cost, and that’s clearly a good thing.

So, the big question for me is: if we’re all moving in this direction already, do we need a new, pharma-only consortium to develop an “approach” to risk-based monitoring?

 First and foremost, this is a senseless conversation to have without the active involvement and leadership of CROs: in many cases, they understand the front-line issues in data verification and management far better than their pharma clients.  The fact that TransCelerate launched without a clear relationship with CROs and database management vendors is a troubling sign that it isn’t poised to make a true contribution to this area.

In a worst-case scenario, TransCelerate may actually delay adoption of risk-based monitoring among its member companies, as they may decide to hold off on implementation until standards have been drafted, circulated, vetted, re-drafted, and (presumably, eventually) approved by all 10 companies. And it will probably turn out that the approaches used will need to vary by patient risk and therapeutic area anyway, making a common, generic approach less than useful.

Finally, the notion that monitoring approaches require some kind of industry-wide “standardization” is extremely debatable. Normally, we work to standardize processes when we run into a lot of practical interoperability issues – that’s why we all have the same electric outlets in our homes, but not necessarily the same AC adaptors for our small devices.  It would be nice if all cell phone manufacturers could agree on a common standard plug, but the total savings from that standard would be small compared to the costs of defining and implementing it.  That’s the same with monitoring: each sponsor and each CRO have a slightly different flavor of monitoring, but the costs of adapting to any one approach for any given trial are really quite small.

Risk-based monitoring is great. If TransCelerate gets some of the credit for its eventual adoption, that’s fine, but I think the adoption is happening anyway, and TransCelerate may not be much help in reality.

4. Development of clinical data standards

This is by far the most baffling inclusion in this list. What happened to CDISC? What is CDISC not doing right that TransCelerate could possibly improve?

In an interview with Matthew Herper at Forbes, TransCelerate’s Interim CEO expands a bit on this point:
“Why do some [companies] record that male is a 0 and female is a 1, and others use 1 and 0, and others use M and F. Where is there any competitive advantage to doing that?” says Neil. “We do 38% of the clinical trials but 70% of the [spending on them]. If we were to come together and try to define some of these standards it would be an enabler for efficiencies for everyone.”
It’s really worth noting that the first part of that quote has nothing to do with the second part. If I could wave a magic wand and instantly standardize all companies’ gender reporting, I would not have reduced clinical trial expenditures by 0.01%. Even if we extend this to lots of other data elements, we’re still not talking about a significant source of costs or time.

Here’s another way of looking at it: those companies that are conducting the other 62% of trials but are only responsible for 30% of the spending – how did they do it, since they certainly haven’t gotten together to agree on a standard format for gender coding?

But the main problem here is that TransCelerate is encroaching on the work of a respected, popular, and useful initiative – CDISC – without clearly explaining how it will complement and assist that initiative. Neil’s quote almost seems to suggest that he plans on supplanting CDISC altogether.  I don’t think that was the intent, but there’s no rational reason to expect TransCelerate to offer substantive improvement in this area, either.

5. Establishment of a comparator drug supply model

This is an area that I don’t have much direct experience in, so it’s difficult to estimate what impact TransCelerate will have. I can say, anecdotally, that over the past 10 years, exactly zero clinical trials I’ve been involved with have had significant issues with comparator drug supply. But, admittedly, that’s quite possibly a very unrepresentative sample of pharmaceutical clinical trials.

I would certainly be curious to hear some opinions about this project. I assume it’s a somewhat larger problem in Europe than in the US, given both their multiple jurisdictions and their stronger aversion to placebo control. I really can’t imagine that inefficiencies in acquiring comparator drugs (most of which are generic, and so not directly produced by TransCelerate’s members) represent a major opportunity to save time and money.

Conclusion

It’s important to note that everything above is based on very limited information at this point. The transcelerate.com website is still “under construction”, so I am only reacting to the press release and accompanying quotes. However, it is difficult to imagine at this point that TransCelerate’s current agenda will have more than an extremely modest impact on current clinical trials.  At best, it appears that it may identify some areas to cut some costs, though this is mostly through the adoption of risk-based monitoring, which should happen whether TransCelerate exists or not.

I’ll remain a fan of TransCelerate, and will follow its progress with great interest in the hopes that it outperforms my expectations. However, it would do us all well to recognize that TransCelerate probably isn’t going to change things very dramatically -- the many systemic problems that add to the time and cost of clinical trials today will still be with us, and we need to continue to work hard to find better paths forward.

[Update 10-Oct-2012: Wayne Kubick, the CTO of CDISC, has posted a response with some additional details around cooperation between TransCelerate and CDISC around point 4 above.]

Mayday! Mayday! Photo credit: "Wheatley Maypole Dance 2008" from Flickr user net_efekt.

Sunday, July 15, 2012

Site Enrollment Performance: A Better View

Pretty much everyone involved in patient recruitment for clinical trials seems to agree that "metrics" are, in some general sense, really really important. The state of the industry, however, is a bit dismal, with very little evidence of effort to communicate data clearly and effectively. Today I’ll focus on the Site Enrollment histogram, a tried-but-not-very-true standby in every trial.

Consider this graphic, showing enrolled patients at each site. It came through on a weekly "Site Newsletter" for a trial I was working on:



I chose this histogram not because it’s particularly bad, but because it’s supremely typical. Don’t get me wrong ... it’s really bad, but the important thing here is that it looks pretty much exactly like every site enrollment histogram in every study I’ve ever worked on.

This is a wasted opportunity. Whether we look at per-site enrollment with internal teams to develop enrollment support plans, or share this data with our sites to inform and motivate them, a good chart is one of the best tools we have. To illustrate this, let’s look at a few examples of better ways to look at the data.

If you really must do a static site histogram, make it as clear and meaningful as possible. 

This chart improves on the standard histogram in a few important ways:


[Figure: "stateful" site enrollment histogram]

  1. It looks better. This is not a minor point when part of our work is to engage sites and make them feel like they are part of something important. Actually, this graph is made clearer and more appealing mostly by the removal of useless attributes (extraneous whitespace, background colors, and unhelpful labels).
  2. It adds patient disposition information. Many graphs – like the one at the beginning of this post – are vague about who is being counted. Does "enrolled" include patients currently being screened, or just those randomized? Interpretations will vary from reader to reader. Instead, this chart makes patient status an explicit variable, without adding to the complexity of the presentation. It also provides a bit of information about recent performance, by showing patients who have been consented but not yet fully screened.
  3. It ranks sites by their total contribution to the study, not by the letters in the investigator’s name. And that is one of the main reasons we like to share this information with our sites in the first place. (A minimal plotting sketch along these lines follows this list.)
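(For teams assembling these newsletters by hand: a chart along these lines takes only a few lines of plotting code. Here is a minimal sketch in Python/matplotlib, with invented site names and counts, showing the three ideas above – stripped-down styling, explicit patient disposition, and sites ranked by total contribution.)

# Illustrative sketch with made-up data: a cleaner per-site enrollment chart
# showing patient disposition and ranking sites by total contribution.
import matplotlib.pyplot as plt

sites = {
    "Site 104": (9, 2, 1),  # (randomized, in screening, consented but not yet screened)
    "Site 117": (6, 1, 2),
    "Site 102": (5, 3, 0),
    "Site 121": (2, 0, 1),
    "Site 109": (1, 1, 0),
}

# Rank sites by total contribution; plotting in ascending order puts the
# biggest contributors at the top of a horizontal bar chart.
ordered = sorted(sites.items(), key=lambda kv: sum(kv[1]))
names = [name for name, _ in ordered]
randomized = [v[0] for _, v in ordered]
screening = [v[1] for _, v in ordered]
consented = [v[2] for _, v in ordered]

fig, ax = plt.subplots(figsize=(6, 3))
ax.barh(names, randomized, label="Randomized")
ax.barh(names, screening, left=randomized, label="In screening")
ax.barh(names, consented,
        left=[r + s for r, s in zip(randomized, screening)],
        label="Consented, not yet screened")

# Strip the clutter that adds nothing: frame and left-axis tick marks.
for spine in ("top", "right", "left"):
    ax.spines[spine].set_visible(False)
ax.tick_params(left=False)
ax.set_xlabel("Patients")
ax.legend(frameon=False, loc="lower right")
plt.tight_layout()
plt.show()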
Find Opportunities for Alternate Visualizations
 
There are many other ways in which essentially the same data can be re-sliced or restructured to underscore particular trends or messages. Here are two that I look at frequently, and often find worth sharing.

Then versus Now

[Figure: tornado chart of per-site enrollment, then versus now]

This tornado chart is an excellent way of showing site-level enrollment trajectory, with each site’s prior (left) and subsequent (right) contributions separated out. This example spotlights activity over the past month, but for slower trials a larger timescale may be more appropriate. Also, how the data is sorted can be critical to the communication: this chart could have been ranked by total enrollment, but instead it sorts first on most-recent screening, clearly showing who’s picked up, who’s dropped off, and who’s remained constant (both good and bad).

This is especially useful when looking at a major event (e.g., pre/post protocol amendment), or where enrollment is expected to have natural fluctuations (e.g., in seasonal conditions).
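(A minimal way to draw this kind of chart, again in matplotlib with invented numbers: plot each site's prior-period screenings as negative bars to the left of zero and its recent screenings as positive bars to the right, then sort on the recent side.)

# Illustrative tornado chart with made-up data: prior-period screenings drawn
# to the left of zero, most-recent-period screenings to the right.
import matplotlib.pyplot as plt

sites = {
    "Site 104": (7, 4),  # (screened before last month, screened last month)
    "Site 117": (1, 3),
    "Site 102": (6, 2),
    "Site 121": (4, 0),
    "Site 109": (0, 0),
}

# Sort so the sites screening the most in the recent period end up on top.
ordered = sorted(sites.items(), key=lambda kv: kv[1][1])
names = [name for name, _ in ordered]
prior = [-before for _, (before, _last) in ordered]  # negative widths draw leftward
recent = [last for _, (_before, last) in ordered]

fig, ax = plt.subplots(figsize=(6, 3))
ax.barh(names, prior, label="Before last month")
ax.barh(names, recent, label="Last month")
ax.axvline(0, linewidth=1)
ax.set_xlabel("Patients screened")
ax.legend(frameon=False)
# (A polished version would relabel the negative x-axis ticks as positive counts.)
plt.tight_layout()
plt.show()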

Net Patient Contribution

In many trials, site activation occurs in a more or less "rolling" fashion, with many sites not starting until later in the enrollment period. This makes simple enrollment histograms downright misleading, as they fail to differentiate sites by the length of time they’ve actually been able to enroll. Reporting enrollment rates (patients per site per month) is one straightforward way of compensating for this, but it has the unfortunate effect of showing extreme (and, most importantly, non-predictive) variance for sites that have not been enrolling for very long.

As a result, I prefer to measure each site in terms of its net contribution to enrollment, compared to what it was expected to do over the time it was open:
[Figure: net patient contribution by site]

To clarify this, consider an example: A study expects sites to screen 1 patient per month. Both Site A and Site B have failed to screen a single patient so far, but Site A has been active for 6 months, whereas Site B has only been active 1 month.

On an enrollment histogram, both sites would show up as tied at 0. However, Site A’s 0 is a lot more problematic – and more predictive of future performance – than Site B’s 0. If I compare each site to the benchmark instead, I show how many total screenings each site is below the study’s expectation: Site A is at -6, while Site B is only at -1 – a much clearer representation of current performance.
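(As a quick sketch, the benchmark comparison is just a subtraction once you account for how long each site has been open. In Python, using the expected rate and the two example sites above:)

# Net patient contribution: actual screenings minus what the study expected
# from the site over the months it has been open to enrollment.
EXPECTED_PER_SITE_PER_MONTH = 1.0  # the benchmark from the example above

def net_contribution(screened: int, months_active: float,
                     expected_rate: float = EXPECTED_PER_SITE_PER_MONTH) -> float:
    """Screenings above (+) or below (-) the study's expectation for this site."""
    return screened - expected_rate * months_active

print(net_contribution(screened=0, months_active=6))  # Site A -> -6.0
print(net_contribution(screened=0, months_active=1))  # Site B -> -1.0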

This graphic has the added advantage of showing how the study as a whole is doing. Comparing the total volume of positive to negative bars gives the viewer an immediate visceral sense of whether the study is above or below expectations.

The above are just 3 examples – there is a lot more that can be done with this data. What is most important is that we first stop and think about what we’re trying to communicate, and then design clear, informative, and attractive graphics to help us do that.