Sunday, July 15, 2012

Site Enrollment Performance: A Better View

Pretty much everyone involved in patient recruitment for clinical trials seems to agree that "metrics" are, in some general sense, really really important. The state of the industry, however, is a bit dismal, with very little evidence of effort to communicate data clearly and effectively. Today I’ll focus on the Site Enrollment histogram, a tried-but-not-very-true standby in every trial.

Consider this graphic, showing enrolled patients at each site. It came through on a weekly "Site Newsletter" for a trial I was working on:



I chose this histogram not because it’s particularly bad, but because it’s supremely typical. Don’t get me wrong ... it’s really bad, but the important thing here is that it looks pretty much exactly like every site enrollment histogram in every study I’ve ever worked on.

This is a wasted opportunity. Whether we look at per-site enrollment with internal teams to develop enrollment support plans, or share this data with our sites to inform and motivate them, a good chart is one of the best tools we have. To illustrate this, let’s look at a few examples of better ways to look at the data.

If you really must do a static site histogram, make it as clear and meaningful as possible. 

This chart improves on the standard histogram in a few important ways:


Stateful site enrollment histogram

  1. It looks better. This is not a minor point when part of our work is to engage sites and make them feel like they are part of something important. Actually, this graph is made clearer and more appealing mostly by the removal of useless attributes (extraneous whitespace, background colors, and unhelpful labels).
  2. It adds patient disposition information. Many graphs – like the one at the beginning of this post – are vague about who is being counted. Does "enrolled" include patients currently being screened, or just those randomized? Interpretations will vary from reader to reader. Instead, this chart makes patient status an explicit variable, without adding to the complexity of the presentation. It also provides a bit of information about recent performance, by showing patients who have been consented but not yet fully screened.
  3. It ranks sites by their total contribution to the study, not alphabetically by investigator name. And that is one of the main reasons we like to share this information with our sites in the first place. (A rough code sketch of this kind of chart follows below.)
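For anyone who wants to build a chart like this themselves, here is a rough sketch in Python using pandas and matplotlib. The site names and counts are invented for illustration; the point is the structure: one stacked bar per site, split by patient status, ranked by total contribution, with the visual clutter stripped out.

```python
# Illustrative sketch only: stacked per-site enrollment by patient status.
# Site names and counts are made up, not taken from any real trial.
import matplotlib.pyplot as plt
import pandas as pd

df = pd.DataFrame(
    {"Randomized": [9, 7, 6, 3, 2, 0],
     "In screening": [1, 0, 2, 1, 0, 1],
     "Consented, not yet screened": [0, 2, 1, 0, 1, 1]},
    index=["Site 101", "Site 104", "Site 102", "Site 107", "Site 103", "Site 105"],
)

# Rank sites by total contribution; with barh, the last row plots at the top,
# so an ascending sort puts the biggest contributors at the top of the chart.
order = df.sum(axis=1).sort_values().index
ax = df.loc[order].plot.barh(stacked=True, figsize=(7, 4))

# Remove chart junk: keep the data, a light axis, and direct labels.
for side in ("top", "right"):
    ax.spines[side].set_visible(False)
ax.set_xlabel("Patients")
ax.set_title("Enrollment by site and patient status")
ax.legend(title="Status", frameon=False)
plt.tight_layout()
plt.show()
```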
Find Opportunities for Alternate Visualizations
 
There are many other ways in which essentially the same data can be re-sliced or restructured to underscore particular trends or messages. Here are two that I look at frequently, and often find worth sharing.

Then versus Now

Tornado chart

This tornado chart is an excellent way of showing site-level enrollment trajectory, with each site's prior (left) and subsequent (right) contributions separated out. This example spotlights activity over the past month, but for slower trials a longer timescale may be more appropriate. Also, how the data is sorted can be critical to the communication: this chart could have been ranked by total enrollment, but instead sorts first on most-recent screening, clearly showing who's picked up, who's dropped off, and who's remained constant (both good and bad).

This is especially useful when looking at a major event (e.g., pre/post protocol amendment), or where enrollment is expected to have natural fluctuations (e.g., in seasonal conditions).
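Here is a similarly rough matplotlib sketch of the tornado layout, again with invented numbers: prior screenings run left from zero, the most recent month runs right, and the rows are sorted on recent activity.

```python
# Illustrative "then vs. now" tornado chart: invented sites and counts.
# Prior screenings extend left of zero, last month's extend right,
# and rows are sorted on recent activity rather than on totals.
import matplotlib.pyplot as plt
import pandas as pd
from matplotlib.ticker import FuncFormatter

df = pd.DataFrame(
    {"prior": [12, 3, 8, 5, 6], "last_month": [0, 4, 1, 3, 2]},
    index=["Site 101", "Site 105", "Site 102", "Site 104", "Site 103"],
)
df = df.sort_values("last_month")  # most recent activity ends up at the top

fig, ax = plt.subplots(figsize=(7, 4))
ax.barh(df.index, -df["prior"], color="lightsteelblue", label="Before last month")
ax.barh(df.index, df["last_month"], color="steelblue", label="Last month")
ax.axvline(0, color="black", linewidth=0.8)

# Show magnitudes on both sides of zero instead of negative tick labels.
ax.xaxis.set_major_formatter(FuncFormatter(lambda x, _: f"{abs(x):.0f}"))
ax.set_xlabel("Patients screened (left = prior, right = last month)")
ax.legend(frameon=False)
plt.tight_layout()
plt.show()
```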

Net Patient Contribution

In many trials, site activation occurs in a more or less "rolling" fashion, with many sites not starting until later in the enrollment period. This makes simple enrollment histograms downright misleading, as they fail to differentiate sites by the length of time they've actually been able to enroll. Reporting enrollment rates (patients per site per month) is one straightforward way of compensating for this, but it has the unfortunate effect of showing extreme (and, most importantly, non-predictive) variance for sites that have not been enrolling for very long.

As a result, I prefer to measure each site in terms of its net contribution to enrollment, compared to what it was expected to do over the time it was open:
Net patient contribution

To clarify this, consider an example: A study expects sites to screen 1 patient per month. Both Site A and Site B have failed to screen a single patient so far, but Site A has been active for 6 months, whereas Site B has only been active 1 month.

On an enrollment histogram, both sites would show up as tied at 0. However, Site A's 0 is a lot more problematic – and predictive of future performance – than Site B's 0. If I compare them to the benchmark instead, I show how many total screenings each site is below the study's expectation: Site A is at -6, and Site B is only at -1, a much clearer representation of current performance.
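For the spreadsheet-averse, here is the same arithmetic in a few lines of Python. The 1-patient-per-month benchmark comes from the example above, and "Site C" is a made-up over-performer included just to show a positive value.

```python
# Minimal worked version of the net-contribution calculation, assuming the
# study benchmark of 1 screened patient per site per month from the example
# above. Site C is an invented over-performer added to show a positive bar.
from dataclasses import dataclass

EXPECTED_RATE = 1.0  # screened patients per site per month (study benchmark)

@dataclass
class Site:
    name: str
    months_active: float
    screened: int

    @property
    def net_contribution(self) -> float:
        # Actual screenings minus what the benchmark expected over the
        # time this site has been open to enrollment.
        return self.screened - EXPECTED_RATE * self.months_active

sites = [Site("Site A", 6, 0), Site("Site B", 1, 0), Site("Site C", 4, 7)]
for s in sorted(sites, key=lambda s: s.net_contribution):
    print(f"{s.name}: {s.net_contribution:+.0f} patients vs. expectation")
# Site A: -6 patients vs. expectation
# Site B: -1 patients vs. expectation
# Site C: +3 patients vs. expectation
```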

This graphic has the added advantage of showing how the study as a whole is doing. Comparing the total volume of positive to negative bars gives the viewer an immediate visceral sense of whether the study is above or below expectations.

The above are just 3 examples – there is a lot more that can be done with this data. What is most important is that we first stop and think about what we’re trying to communicate, and then design clear, informative, and attractive graphics to help us do that.

Friday, July 13, 2012

Friday Flailing: Medical Gasses and the Law of the Excluded Middle

Aristotle never actually said "principium tertii exclusi", mostly because he didn't speak Latin.

Buried under the mass of attention and conversations surrounding the ACA last week, the FDA Safety and Innovation Act (FDASIA) contained a number of provisions that will have lasting effects in the pharmaceutical industry.  What little notice it did get tended to focus on PDUFA reauthorization and the establishment of fees for new generic drug applications (GDUFA).

(Tangent: other parts of the act are well worth looking into: NORD is happy about new incentives for rare disease research, the SWHR is happy about expanded reporting on sex and race in clinical trials, and antibiotic drug makers are happy about extra periods of market exclusivity.  A very good summary is available on the FDA Law Blog.)

So no one’s paid any attention to the Medical Gasses Safety Act, which formally defines medical gasses and even gives them their own Advisory Committee and user fees (I guess those will be MGUFAs?).

The Act’s opening definition is a bit of an eyebrow-raiser:
(2) The term ‘medical gas’ means a drug that is--
‘(A) manufactured or stored in a liquefied, non-liquefied, or cryogenic state; and
‘(B) is administered as a gas.
I’m clearly missing something here, because as far as I can tell, everything is either liquefied or non-liquefied.   This doesn’t seem to lend a lot of clarity to the definition.  And then, what to make of the third option?  How can there be a third option?  It’s been years since my college logic class, but I still remember the Law of the Excluded Middle – everything is either P or not-P. 

I was going to send an inquiry through to Congressman Leonard Lance (R-NJ), the bill’s original author, but his website regrets to inform me that he is “unable to reply to any email from constituents outside of the district.”

So I will remain trapped in Logical Limbo.  Enjoy your weekend.

Tuesday, July 10, 2012

Why Study Anything When You Already Know Everything?

If you’re a human being, in possession of one working, standard-issue human brain (and, for the remainder of this post, I’m going to assume you are), it is inevitable that you will fall victim to a wide variety of cognitive biases and mistakes.  Many of these biases result in our feeling much more certain about our knowledge of the world than we have any rational grounds for: from the Availability Heuristic, to the Dunning-Kruger Effect, to Confirmation Bias, there is an increasingly-well-documented system of ways in which we (and yes, that even includes you) become overconfident in our own judgment.

Over the years, scientists have developed a number of tools to help us overcome these biases in order to better understand the world.  In the biological sciences, one of our best tools is the randomized controlled trial (RCT).  In fact, randomization helps minimize biases so well that randomized trials have been suggested as a means of developing better governmental policy.

However, RCTs in general require an investment of time and money, and they need to be somewhat narrowly tailored.  As a result, they frequently become the target of people impatient with the process – especially those who perhaps feel themselves exempt from some of the above biases.

4 out of 5 Hammer Doctors agree: the world is 98% nail.

A shining example of this impatience-fortified-by-hubris can be found in a recent “Speaking of Medicine” blog post by Dr Trish Greenhalgh, with the mildly chilling title Less Research is Needed.  In it, the author finds a long list of things she feels to be so obvious that additional studies into them would be frivolous.  Among the things the author knows, beyond a doubt, is that patient education does not work, and electronic medical records are inefficient and unhelpful. 

I admit to being slightly in awe of Dr Greenhalgh’s omniscience in these matters. 

In addition to her “we already know the answer to this” argument, she also mixes in a completely different argument, which is more along the lines of “we’ll never know the answer to this”.  Of course, the upshot of that is identical: why bother conducting studies?  For this argument, she cites the example of coronary artery disease: since a large genomic study found only a small association with CAD heritability, Dr Greenhalgh tells us that any study of different predictive methods is bound to fail and thus not worth the effort (she specifically mentions “genetic, epigenetic, transcriptomic, proteomic, metabolic and intermediate outcome variables” as things she apparently already knows will not add anything to our understanding of CAD). 

As studies grow more global, and as we adapt to massive increases in computer storage and processing ability, I believe we will see an increase in this type of backlash.  And while physicians can generally be relied on to be at the forefront of the demand for more, not less, evidence, it is quite possible that a vocal minority of physicians will adopt this kind of strongly anti-research stance.  Dr Greenhalgh suggests that she is on the side of “thinking” when she opposes studies, but it is difficult to see this as anything more than an attempt to shut down critical inquiry in favor of deference to experts who are presumed to be fully-informed and bias-free. 

It is worthwhile for those of us engaged in trying to understand the world to be aware of these kinds of threats, and to take them seriously.  Dr Greenhalgh writes glowingly of a 10-year moratorium on research – presumably, we will all simply rely on her expertise to answer our important clinical questions.

Friday, July 6, 2012

A placebo control is not a placebo effect

Following up on yesterday's post regarding a study of placebo-related information, it seems worthwhile to pause and expand on the difference between placebo controls and placebo effects.

The very first sentence of the study paper reflects a common, and rather muddled, belief about placebo-controlled trials:
Placebo groups are used in trials to control for placebo effects, i.e. those changes in a person's health status that result from the meaning and hope the person attributes to a procedure or event in a health care setting.
The best I can say about the above sentence is that in some (not all) trials, this accounts for some (not all) of the rationale for including a placebo group in the study design. 

There is no evidence that “meaning and hope” have any impact on HbA1C levels in patients with diabetes. The placebo effect only goes so far, and certainly doesn’t have much sway over most lab tests.  And yet we still conduct placebo-controlled trials in diabetes, and rightly so. 

To clarify, it may be helpful to break this into two parts:
  1. Most trials need a “No Treatment” arm. 
  2. Most “No Treatment” arms should be double-blind, which requires use of a placebo.
Let’s take these in order.

We need a “No Treatment” arm:
  • Where the natural progression of the disease is variable (e.g., many psychological disorders, such as depression, have ups and downs that are unrelated to treatment).  This is important if we want to measure the proportion of responders – for example, what percentage of diabetes patients got their HbA1C levels below 6.5% on a particular regimen.  We know that some patients will hit that target even without additional intervention, but we won’t know how many unless we include a control group.
  • Where the disease is self-limiting.  Given time, many conditions – the flu, allergies, etc. – tend to go away on their own.  Therefore, even an ineffective medication will look like it’s doing something if we simply test it on its own.  We need a control group to measure whether the investigational medication is actually speeding up the time to cure.
  • When we are testing the combination of an investigational medication with one or more existing therapies. We have a general sense of how well metformin will work in T2D patients, but the effect will vary from trial to trial.  So if I want to see how well my experimental therapy works when added to metformin, I’ll need a metformin-plus-placebo control arm to be able to measure the additional benefit, if any.

All of the above are especially important when the trial is selecting a group of patients with greater disease severity than average.  The process of “enriching” a trial by excluding patients with mild disease has the benefit of requiring many fewer enrolled patients to demonstrate a clinical effect.  However, it also will have a stronger tendency to exhibit “regression to the mean” for a number of patients, who will exhibit a greater than average improvement during the course of the trial.  A control group accurately measures this regression and helps us measure the true effect size.

So, why include a placebo?  Why not just have a control group of patients receiving no additional treatment?  There are compelling reasons:
  • To minimize bias in investigator assessments.  We most often think about placebo arms in relation to patient expectations, but often they are even more valuable in improving the accuracy of physician assessments.  Like all humans, physician investigators interpret evidence in light of their beliefs, and there is substantial evidence that unblinded assessments exaggerate treatment effects – we need the placebo to help maintain investigator blinding.
  • To improve patient compliance in the control arm.  If a patient is clearly not receiving an active treatment, it is often very difficult to keep him or her interested and engaged with the trial, especially if the trial requires frequent clinic visits and non-standard procedures (such as blood draws).  Retention in no-treatment trials can be much lower than in placebo-controlled trials, and if it drops low enough, the validity of any results can be thrown into question.
  • To accurately gauge adverse events.  Any problem(s) encountered are much more likely to be taken seriously – by both the patient and the investigator – if there is genuine uncertainty about whether the patient is on active treatment.  This leads to much more accurate and reliable reporting of adverse events.
In other words, even if the placebo effect didn’t exist, it would still be necessary and proper to conduct placebo-controlled trials.  The failure to separate “placebo control” from “placebo effect” yields some very muddled thinking (which was the ultimate point of my post yesterday).

Thursday, July 5, 2012

The Placebo Effect (No Placebo Necessary)

4 out of 5 non-doctors recommend starting with "regular strength", and titrating up from there... (Photo from inventedbyamother.com)
The modern clinical trial’s Informed Consent Form (ICF) is a daunting document.  It is packed with a mind-numbing litany of procedures, potential risks, possible adverse events, and substantial additional information – in general, if someone, somewhere, might find a fact relevant, then it gets into the form.  A run-of-the-mill ICF in a phase 2 or 3 pharma trial can easily run over 10 pages of densely worded text.  You might argue (and in fact, a number of people have, persuasively) that this sort of information overload reduces, rather than enhances, patient understanding of clinical trials.

So it is a bit of a surprise to read a paper arguing that patient information needs to be expanded because it does not contain enough information.  And it is even more surprising to read what’s allegedly missing: more information about the potential effects of placebo.

Actually, “surprising” doesn’t really begin to cover it.  Reading through the paper is a borderline surreal experience.  The authors’ conclusions from “quantitative analysis”* of 45 Patient Information Leaflets for UK trials include such findings as
  • The investigational medication is mentioned more often than the placebo
  • The written purpose of the trial “rarely referred to the placebo”
  • “The possibility of continuing on the placebo treatment after the trial was never raised explicitly”
(You may need to give that last one a minute to sink in.)

Rather than seeing these as rather obvious conclusions, the authors recast them as ethical problems to be overcome.  From the article:
Information leaflets provide participants with a permanent written record about a clinical trial and its procedures and thus make an important contribution to the process of informing participants about placebos.
And from the PR materials furnished along with publication:
We believe the health changes associated with placebos should be better represented in the literature given to patients before they take part in a clinical trial.
There are two points that I think are important here – points that are sometimes missed, and very often badly blurred, even within the research community:

1.    The placebo effect is not caused by placebos.  There is nothing special about a “placebo” treatment that induces a unique effect.  The placebo effect can be induced by a lot of things, including active medications.  When we start talking about placebos as causal agents, we are engaging in fuzzy reasoning – placebo effects will not only be seen in the placebo arm, but will be evenly distributed among all trial participants.

2.    Changes in the placebo arm cannot be assumed to be caused by the placebo effect.  There are many reasons why we may observe health changes within a placebo group, and most of them have nothing to do with the “psychological and neurological mechanisms” of the placebo effect.  Giving trial participant information about the placebo effect may in fact be providing them with an entirely inaccurate description of what is going on.

Bishop FL, Adams AEM, Kaptchuk TJ, Lewith GT (2012). Informed Consent and Placebo Effects: A Content Analysis of Information Leaflets to Identify What Clinical Trial Participants Are Told about Placebos. PLoS ONE. DOI: 10.1371/journal.pone.0039661


(* Not related to the point at hand, but I would applaud efforts to establish some lower boundaries to what we are permitted to call "quantitative analysis".  Putting counts from 45 brochures into an Excel spreadsheet should fall well below any reasonable threshold.)

Monday, June 25, 2012

DIA: Tutti, ma non troppo

I will be at DIA 2012 this week, and hope to post about sessions and new offerings of interest as I encounter them.

There is an inherent tension in this year’s meeting theme, “Collaborate to Innovate”. Collaboration at its best exposes us to new ideas and, even more importantly, new ways of thinking. That novelty can catalyze our own thinking and bring us to flashes of insight, producing creative solutions to our most intransigent problems.

However, collaboration often requires accommodation – multiple stakeholders with their own wants and needs that must be fairly and collectively addressed. Too often, this has the unfortunate result of sacrificing creativity for the sake of finding a solution that everyone can accept. Rather than blazing a new trail (with all its attendant risks), we are forced to find an established trail wide enough to accommodate everybody. Some of the least creative work in our industry is done by joint ventures and partnered programs.

So perhaps the best way forward is: Collaborate, but not too much. We must take time to seek out and integrate new approaches, but real breakthroughs tend to require a lot of individual courage and determination.

Wednesday, June 20, 2012

Faster Trials are Better Trials

[Note: this post is an excerpt from a longer presentation I made at the DIA Clinical Data Quality Summit, April 24, 2012, entitled Delight the Sites: The Effect of Site/Sponsor Relationships on Site Performance.]

When considering clinical data collected from sites, what is the relationship between these two factors?
  • Quantity: the number of patients enrolled by the site
  • Quality: the rate of data issues per enrolled patient
When I pose this question to study managers and CRAs, I usually hear that they believe there is an inverse relationship at work. Specifically, most will tell me that high-enrolling sites run a great risk of getting "sloppy" with their data, and that they will sometimes need to caution sites to slow down in order to better focus on accurate data collection and reporting.

Obviously, this has serious implications for those of us in the business of accelerating clinical trials. If getting studies done faster comes at the expense of clinical data quality, then the value of the entire enterprise is called into question. As regulatory authorities take an increasingly skeptical attitude towards missing, inconsistent, and inaccurate data, we must strive to make data collection better, and absolutely cannot afford to risk making it worse.

As a result, we've started to look closely at a variety of data quality metrics to understand how they relate to the pace of patient recruitment. The results, while still preliminary, are encouraging.

Here is a plot of a large, recently-completed trial. Each point represents an individual research site, mapped by both speed (enrollment rate) and quality (protocol deviations). If faster enrolling caused data quality problems, we would expect to see a cluster of sites in the upper right quadrant (lots of patients, lots of deviations).

Enrollment and Quality


Instead, we see almost the opposite. Our sites with the fastest accrual produced, in general, higher quality data. Slow sites showed a large variance in quality: some did well, but some of the worst offenders were among the slowest enrollers.
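For anyone who wants to run this kind of check on their own study, here is a sketch of the quadrant view. The per-site numbers are randomly simulated (they do not reproduce the trial above), so the code only shows how the plot is put together, not the result.

```python
# Sketch of the speed-vs.-quality view; data is simulated, not from the trial
# discussed above. If fast enrollment hurt quality, sites would cluster in
# the upper-right quadrant (high enrollment rate, many deviations).
import matplotlib.pyplot as plt
import numpy as np

rng = np.random.default_rng(seed=1)
n_sites = 60
enroll_rate = rng.gamma(shape=2.0, scale=0.6, size=n_sites)       # patients per month
devs_per_patient = rng.gamma(shape=2.0, scale=0.3, size=n_sites)  # deviations per patient

fig, ax = plt.subplots(figsize=(6, 4))
ax.scatter(enroll_rate, devs_per_patient, alpha=0.7)

# Median splits mark the four quadrants; the upper right is the one to watch.
ax.axvline(np.median(enroll_rate), linestyle="--", color="gray", linewidth=0.8)
ax.axhline(np.median(devs_per_patient), linestyle="--", color="gray", linewidth=0.8)
ax.set_xlabel("Enrollment rate (patients/month)")
ax.set_ylabel("Protocol deviations per patient")
ax.set_title("Speed vs. quality (one point per site)")
plt.tight_layout()
plt.show()
```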

There are probably a number of reasons for this trend. I believe the two major factors at work here are:
  1. Focus. Having more patients in a particular study gives sites a powerful incentive to focus more time and effort into the conduct of that study.
  2. Practice. We get better at most things through practice and repetition. Enrolling more patients may help our site staff develop a much greater mastery of the study protocol.
The bottom line is very promising: accelerating your trial’s enrollment may have the added benefit of improving the overall quality of your data.

We will continue to explore the relationship between enrollment and various quality metrics, and I hope to be able to share more soon.

Tuesday, June 19, 2012

Pfizer Shocker: Patient Recruitment is Hard

In what appears to be, oddly enough, an exclusive announcement to Pharmalot, Pfizer will be discontinuing its much-discussed “Trial in a box”—a clinical study run entirely from a patient’s home. Study drug and other supplies would be shipped directly to each patient, with consent, communication, and data collection happening entirely via the internet.

The trial piloted a number of innovations, including some novel and intriguing Patient Reported Outcome (PRO) tools.  Unfortunately, most of these will likely not have been given the benefit of a full test, as the trial was killed due to low patient enrollment.

The fact that a trial designed to enroll fewer than 300 patients couldn’t meet its enrollment goal is sobering enough, but in this case the pain is even greater due to the fact that the study was not limited to site databases and/or catchment areas.  In theory, anyone with overactive bladder in the entire United States was a potential participant. 

And yet, it didn’t work.  In a previous interview with Pharmalot, Pfizer’s Craig Lipset mentions a number of recruitment channels – he specifically cites Facebook, Google, Patients Like Me, and Inspire, along with other unspecified “online outreach” – that drove “thousands” of impressions and “many” registrations, but these did not amount to, apparently, even close to the required number of consented patients. 

Two major questions come to mind:

1.    How were patients “converted” into the study?  One of the more challenging aspects of patient recruitment is often getting research sites engaged in the process.  Many – perhaps most – patients are understandably on the fence about being in a trial, and the investigator and study coordinator play the single most critical role in helping each patient make their decision. You cannot simply replace their skill and experience with a website (or “multi-media informed consent module”). 

2.    Did they understand the patient funnel?  I am puzzled by the mention of “thousands of hits” to the website.  That may seem like a lot, if you’re not used to engaging patients online, but it’s actually not necessarily so.  (Jakob Nielsen's famous "Lurker Funnel" seems worth mentioning here.)  Despite some of the claims made by patient communities, it is perfectly reasonable to expect that less than 1% of visitors (even somewhat pre-qualified visitors) will end up consenting into the study.  If you’re going to rely on the internet as your sole means of recruitment, you should plan on needing closer to 100,000 visitors (and, critically: negotiate your spending accordingly). 
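To make that arithmetic concrete, here is a back-of-envelope version of the funnel. Every conversion rate in it is an assumption chosen for illustration, not a number from the Pfizer study.

```python
# Back-of-envelope funnel math with assumed conversion rates (illustrative
# only, not figures from the Pfizer trial). Working backward from ~300
# consented patients shows why "thousands of hits" is nowhere near enough.
target_consented = 300

funnel = {                      # assumed step-by-step conversion rates
    "complete online pre-screener": 0.15,
    "qualify for the study": 0.10,
    "consent and enroll": 0.20,
}

overall = 1.0
for step, rate in funnel.items():
    overall *= rate

visitors_needed = target_consented / overall
print(f"Overall conversion: {overall:.2%}")        # 0.30%
print(f"Visitors needed: {visitors_needed:,.0f}")  # 100,000
```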

In the prior interview, Lipset says:
I think some of the staunch advocates for using online and social media for recruitment are still reticent to claim silver bullet status and not use conventional channels in parallel. Even the most aggressive and bullish social media advocates, generally, still acknowledge you’re going to do this in addition to, and not instead of more conventional channels.

This makes Pfizer’s exclusive reliance on these channels all the more puzzling.  If no one is advocating disintermediating the sites and using only social media, then why was this the strategy?

I am confident that someone will try again with this type of trial in the near future.  Hopefully, the Pfizer experience will spur them to invest in building a more rigorous recruitment strategy before they start.

[Update 6/20: Lipset weighed in via the comments section of the Pharmalot article above to clarify that other DTP aspects of the trial were tested and "worked VERY well".  I am not sure how to evaluate that clarification, given the fact that those aspects couldn't have been tested on a very large number of patients, but it is encouraging to hear that more positive experiences may have come out of the study.]

Monday, June 18, 2012

Hey, You Got Your Chocolate in my Lipitor!

It's an idea that can only be described as brilliantly obvious: create a chocolate formulation of your pharmaceutical.

The original idea, as developed by England-based Lycotec, was to develop a mechanism that could more effectively transport drugs to the liver (and they appear to have produced some intriguing proof-of-concept results in increasing the potency of statins).

However, it's not much of a stretch to believe that patients might be more compliant if their daily medications became noticeably more delicious -- especially in chronic illnesses with few or no symptoms where adherence is considered a problem (such as hypertension).

Of course, if it works there will be safety issues to work through.  Child-proofing becomes a much more direct concern, for one.  But those issues seem addressable, and the potential benefits could be tremendous.

Sadly, the Lycotec website makes no mention of peanut butter.  Let's hope they're on it.

Thursday, January 12, 2012

Changing the Rules, Ever So Slightly, For Rare Diseases

At the end of last year, US Reps Cliff Stearns (R-FL) and Ed Towns (D-NY) introduced the Unlocking Lifesaving Treatments for Rare-Diseases Act (ULTRA for short). Despite what its bold name might imply (and unlike many recent congressional healthcare bills), ULTRA is actually a modest and carefully-thought-out piece of legislation.

The main thrust of ULTRA is to enable developers of drugs for rare diseases to take advantage of the FDA’s existing Accelerated Approval pathway. Accelerated Approval reduces the initial burden of proof for manufacturers to bring a drug to market by conducting smaller clinical trials that measure a drug’s efficacy against “surrogate” endpoints – that is, endpoints that do not directly measure the disease, but rather other factors that are associated with the disease. This can greatly reduce the time and cost of clinical trials.

Rare Diseases Day: Feb 29, 2012

To qualify for Accelerated Approval, however, trials for a new drug need to meet two conditions:

  • The drug must be studied for treatment of a serious disease, with unmet medical need

  • There must be clinical evidence that improving the surrogate endpoint is reasonably likely to predict real benefit for those with the disease

ULTRA does not change the first criterion, only the second. For rare diseases, there is often not robust clinical evidence to support surrogate endpoints, so the bill alters the language slightly to permit the FDA to accept “reasonable scientific data that support and qualify the relevance of the surrogate endpoint”. In essence, the burden to prove the validity of the surrogate has been relaxed, permitting its use in pivotal trials, and using a surrogate may reduce the number of patients needed for a trial by as much as 50-75%.

Accelerated Approval still requires the drug manufacturer to complete full trials to more firmly establish the drug’s efficacy – it just allows the drug to be available on the market while those full trials are being conducted. ULTRA does not change this requirement for drugs of rare diseases, so in the end it is not lowering the standard for these drugs at all.

Obviously, anything can happen to a bill as it wends its way through Congress. But as it is currently written, ULTRA is a highly rational, well-targeted adjustment to current law that should quickly show benefits for patients with rare diseases, and deserves quick action and passage.

(Further reading: the FDA Law Blog has an excellent review of the proposed act.)

Wednesday, January 4, 2012

Public Reporting of Patient Recruitment?

A few years back, I was working with a small biotech company as they were ramping up to begin their first-ever pivotal trial. One of the team leads had just produced a timeline for enrollment in the trial, which was being circulated for feedback. Seeing as they had never conducted a trial of this size before, I was curious about how he had arrived at his estimate. My bigger clients had data from prior trials (both their own and their CRO’s) to use, but as far as I could tell, this client had absolutely nothing.

He proudly shared with me the secret of his methodology: he had looked up some comparable studies on ClinicalTrials.gov, counted the number of listed sites, and then compared that to the sample size and start/end dates to arrive at an enrollment rate for each study. He’d then used the average of all those rates to determine how long his study would take to complete.

If you’ve ever used ClinicalTrials.gov in your work, you can immediately determine the multiple, fatal flaws in that line of reasoning. The data simply doesn’t work like that. And to be fair, it wasn’t designed to work like that: the registry is intended to provide public access to what research is being done, not provide competitive intelligence on patient recruitment.

I’m therefore sympathetic, but skeptical, of a recent article in PLoS Medicine, Disclosure of Investigators' Recruitment Performance in Multicenter Clinical Trials: A Further Step for Research Transparency, that proposes to make reporting of enrollment a mandatory part of the trial registry. The authors would like to see not only actual randomized patients for each principal investigator, but also how that compares to their “recruitment target”.

The entire article is thought-provoking and worth a read. The authors’ main arguments in favor of mandatory recruitment reporting can be boiled down to:

  • Recruitment in many trials is poor, and public disclosure of recruitment performance will improve it
  • Sponsors, patient groups, and other stakeholders will be interested in the information
  • The data “could prompt queries” from other investigators

The first point is certainly the most compelling – improving enrollment in trials is at or near the top of everyone’s priority list – but the least supported by evidence. It is not clear to me that public scrutiny will lead to faster enrollment, and in fact in many cases it could quite conceivably lead to good investigators opting to not conduct a trial if they felt they risked being listed as “underperforming”. After all, there are many factors that will influence the total number of randomized patients at each site, and many of these are not under the PI’s control.

The other two points are true, in their way, but mandating that currently-proprietary information be given away to all competitors will certainly be resisted by industry. There are oceans of data that would be of interest to competitors, patient groups, and other investigators – that simply cannot be enough to justify mandating full public release.


Image: Philip Johnson's Glass House from Staib via Wikimedia Commons.