
Wednesday, December 4, 2013

Half of All Trials Unpublished*

(*For certain possibly nonstandard uses of the word "unpublished")

This is an odd little study. Instead of looking at registered trials and following them through to publication, this study starts with a random sample of phase 3 and 4 drug trials that already had results posted on ClinicalTrials.gov - so in one, very obvious sense, none of the trials in this study went unpublished.

Timing and Completeness of Trial Results Posted at ClinicalTrials.gov and Published in Journals
Carolina Riveros, Agnes Dechartres, Elodie Perrodeau, Romana Haneef, Isabelle Boutron, Philippe Ravaud



But here the authors are concerned with publication in medical journals, and they were only able to locate journal articles covering about half (297/594) of trials with registered results. 

It's hard to know what to make of these results, exactly. Some of the "missing" trials may be published in the future (a possibility the authors acknowledge), some may have been rejected by one or more journals (FDAAA requires posting the results to ClinicalTrials.gov, but it certainly doesn't require journals to accept trial reports), and some may be pre-FDAAA trials that sponsors have retroactively added to ClinicalTrials.gov even though development on the drug has ceased.

It would have been helpful had the authors reported journal publication rates stratified by the year the trials completed - this would have at least given us some hints regarding the above. More than anything I still find it absolutely bizarre that in a study this small, the entire dataset is not published for review.

One potential concern is the search methodology used by the authors to match posted and published trials. If the easy routes (link to article already provided in ClinicalTrials.gov, or NCT number found in a PubMed search) failed, a manual search was performed:
The articles identified through the search had to match the corresponding trial in terms of the information registered at ClinicalTrials.gov (i.e., same objective, same sample size, same primary outcome, same location, same responsible party, same trial phase, and same sponsor) and had to present results for the primary outcome. 
So it appears that a reviewer had to score the journal article as an exact match on 8 criteria in order for the trial to be considered the same. That could easily lead to exclusion of journal articles on the basis of very insubstantial differences. The authors provide no detail on this; and again, it would be easy to verify if the study dataset were published.
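Just to make the concern concrete, here is a minimal sketch of what an all-or-nothing matching rule like that looks like in code. This is my own reconstruction, not the authors' actual procedure (which, again, was not published); the field names and the example records are invented.

```python
# Hypothetical illustration of the all-or-nothing matching rule quoted above.
# Field names and example values are my assumptions, not the authors' code.

MATCH_FIELDS = [
    "objective", "sample_size", "primary_outcome", "location",
    "responsible_party", "phase", "sponsor",
]

def is_same_trial(registry_record: dict, article_record: dict) -> bool:
    """Return True only if all 7 registry fields match exactly AND the
    article reports the primary outcome -- the 8-criteria rule above."""
    exact_match = all(
        registry_record.get(f) == article_record.get(f) for f in MATCH_FIELDS
    )
    return exact_match and article_record.get("reports_primary_outcome", False)

# A trivial discrepancy (here, planned vs. achieved enrollment) is enough
# to reject an article that is clearly reporting the same trial.
registry = {"objective": "PFS in NSCLC", "sample_size": 300,
            "primary_outcome": "PFS", "location": "US",
            "responsible_party": "Sponsor X", "phase": "3",
            "sponsor": "Sponsor X"}
article = dict(registry, sample_size=297, reports_primary_outcome=True)
print(is_same_trial(registry, article))  # False: 300 vs. 297 patients
```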

The reason I harp on this, and worry about the matching methodology, is that two of the authors of this study were also involved in a methodologically opaque and flawed study of clinical trial results reporting published in JCO. In that study, as well, the authors appeared to use an incorrect methodology to identify published clinical trials. When I pointed the issues out, the corresponding author merely reiterated what was already (insufficiently) in the paper's Methods section.

I find it strange beyond belief, and more than a little hypocritical, that researchers would use a public, taxpayer-funded database as the basis of their studies, and yet refuse to provide their data for public review. There are no technological or logistical issues preventing this kind of sharing, and there is an obvious ethical point in favor of transparency.

But even supposing the authors are reasonably close to correct in their results, I'm not sure what to make of this study.

The Nature article covering this study contends that
[T]he [ClinicalTrials.gov] database was never meant to replace journal publications, which often contain longer descriptions of methods and results and are the basis for big reviews of research on a given drug.
I suppose that some journal articles have better methodology sections, although this is far from universally true (and, like this study here, those methods are often quite opaquely described and don't support replication). As for results, I don't believe that's the case. In this study, the opposite was true: ClinicalTrials.gov results were generally more complete than journal results. And I have no idea why the registry wouldn't surpass journals as a more reliable and complete source of information for "big reviews".

Perhaps it is a function of my love of getting my hands dirty digging into the data, but if we are witnessing a turning point where journal articles take a distant back seat to the ClinicalTrials.gov registry, I'm enthused. ClinicalTrials.gov is public, free, and contains structured data; journal articles are expensive, unparsable, and generally written in painfully unclear language. To me, there's really no contest. 

ResearchBlogging.org Carolina Riveros, Agnes Dechartres, Elodie Perrodeau, Romana Haneef, Isabelle Boutron, & Philippe Ravaud (2013). Timing and Completeness of Trial Results Posted at ClinicalTrials.gov and Published in Journals PLoS Medicine DOI: 10.1371/journal.pmed.1001566

Wednesday, July 31, 2013

Brazen Scofflaws? Are Pharma Companies Really Completely Ignoring FDAAA?

Results reporting requirements are pretty clear. Maybe critics should re-check their methods?

Ben Goldacre has rather famously described the clinical trial reporting requirements in the Food and Drug Administration Amendments Act of 2007 as a “fake fix” that was being thoroughly “ignored” by the pharmaceutical industry.

Pharma: breaking the law in broad daylight?
He makes this sweeping, unconditional proclamation about the industry and its regulators on the basis of a single study in the BMJ, blithely ignoring the fact that a) the authors of that study admitted they could not adequately determine the number of studies that were meeting FDAAA requirements, and b) a subsequent FDA review identified only 15 trials potentially out of compliance, out of a pool of thousands.


Despite the fact that the FDA, which has access to more data, says that only a tiny fraction of studies are potentially noncompliant, Goldacre's frequently repeated claim that the law is being ignored seems to have caught on in the general run of journalistic and academic discussions of FDAAA.

And now there appears to be additional support for the idea that a large percentage of studies are noncompliant with FDAAA results reporting requirements, in the form of a new study in the Journal of Clinical Oncology: "Public Availability of Results of Trials Assessing Cancer Drugs in the United States" by Thi-Anh-Hoa Nguyen, et al. In it, the authors report even lower levels of FDAAA compliance – a mere 20% of randomized clinical trials met the requirement of posting results on ClinicalTrials.gov within one year.

Unsurprisingly, the JCO results were immediately picked up and circulated uncritically by the usual suspects.

I have to admit not knowing much about pure academic and cooperative group trial operations, but I do know a lot about industry-run trials – simply put, I find the data as presented in the JCO study impossible to believe. Everyone I work with in pharma trials is painfully aware of the regulatory environment they work in. FDAAA compliance is a given, a no-brainer: large internal legal and compliance teams are everywhere, ensuring that the letter of the law is followed in clinical trial conduct. If anything, pharma sponsors are twitchily over-compliant with these kinds of regulations (for example, most still adhere to 100% verification of source documentation – sending monitors to physically examine every single record of every single enrolled patient - even after the FDA explicitly told them they didn't have to).

I realize that’s anecdotal evidence, but when such behavior is so pervasive, it’s difficult to buy into data that says it’s not happening at all. The idea that all pharmaceutical companies are ignoring a highly visible law that’s been on the books for 6 years is extraordinary. Are they really so brazenly breaking the rules? And is FDA abetting them by disseminating incorrect information?

Those are extraordinary claims, and would seem to require extraordinary evidence. The BMJ study had clear limitations that make its implications entirely unclear. Is the JCO article any better?

Some Issues


In fact, there appear to be at least two major issues that may have seriously compromised the JCO findings:

1. Studies that were certified as being eligible for delayed reporting requirements, but do not have their certification date listed.

The study authors make what I believe to be a completely unwarranted assumption:

In trials for approval of new drugs or approval for a new indication, a certification [permitting delayed results reporting] should be posted within 1 year and should be publicly available.

It’s unclear to me why the authors think the certifications “should be” publicly available. In re-reading FDAAA section 801, I don’t see any reference to that being a requirement. I suppose I could have missed it, but the authors provide a citation to a page that clearly does not list any such requirement.

But their methodology assumes that all trials that have a certification will have it posted:

If no results were posted at ClinicalTrials.gov, we determined whether the responsible party submitted a certification. In this case, we recorded the date of submission of the certification to ClinicalTrials.gov.

If a sponsor gets approval from FDA to delay reporting (as is routine for all drugs that are either not approved for any indication, or being studied for a new indication – i.e., the overwhelming majority of pharma drug trials), but doesn't post that approval on the registry, the JCO authors deem that trial “noncompliant”. This is not warranted: the company may have simply chosen not to post the certification despite being entirely FDAAA compliant.

2. Studies that were previously certified for delayed reporting and subsequently reported results

It is hard to tell how the authors treated this rather substantial category of trials. If a trial was certified for delayed results reporting, but then subsequently posted results, the certification date becomes difficult to find. Indeed, it appears that, in cases where results were posted, the authors simply looked at the time from study completion to results posting. In effect, this would re-classify almost every single one of these trials from compliant to non-compliant. Consider this example trial:


  • Phase 3 trial completes January 2010
  • Certification of delayed results obtained December 2010 (compliant)
  • FDA approval June 2013
  • Results posted July 2013 (compliant)


In looking at the JCO paper's methods section, it really appears that this trial would be classified as reporting results 3.5 years after completion, and therefore be considered noncompliant with FDAAA. In fact, this trial is entirely kosher, and would be extremely typical for many phase 2 and 3 trials in industry.
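To make the difference concrete, here is a rough sketch of the two classification approaches applied to the example timeline above. The specific compliance windows (12 months from completion for results or a certification; results posted within a reasonable window after approval) are my simplified reading of FDAAA, not the JCO authors' code, and the dates are invented.

```python
from datetime import date

# Example trial from the bullet list above (dates are illustrative).
completion   = date(2010, 1, 31)
certified    = date(2010, 12, 15)   # certification of delayed reporting
approved     = date(2013, 6, 15)    # FDA approval of the drug
results_post = date(2013, 7, 20)    # results posted on ClinicalTrials.gov

def days(a, b):
    return (b - a).days

# Apparent JCO-paper classification: time from completion to posted results only.
jco_view = "compliant" if days(completion, results_post) <= 365 else "noncompliant"

# Classification that credits the certification: results within a year of
# completion, OR a timely certification followed by results after approval.
certified_in_time = days(completion, certified) <= 365
results_after_approval = days(approved, results_post) <= 365
fdaaa_view = ("compliant"
              if days(completion, results_post) <= 365
              or (certified_in_time and results_after_approval)
              else "noncompliant")

print(jco_view)    # noncompliant (~3.5 years completion-to-results)
print(fdaaa_view)  # compliant
```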

Time for Some Data Transparency


The above two concerns may, in fact, be non-issues. Both interpretations certainly appear to be implied by the JCO paper, but the description of the methods isn't terribly detailed and could easily be giving me the wrong impression.

However, if either or both of these issues are real, they may affect the vast majority of "noncompliant" trials in this study. Given that most clinical trials are either looking at new drugs or looking at new indications for existing drugs, these two issues may entirely explain the gap between the JCO study and the unequivocal FDA statements that contradict it.

I hope that, given the importance of transparency in research, the authors will be willing to post their data set publicly so that others can review their assumptions and independently verify their conclusions. It would be more than a bit ironic otherwise.

[Image credit: Shameless lawlessness via Flickr user willytronics.]


ResearchBlogging.org Thi-Anh-Hoa Nguyen, Agnes Dechartres, Soraya Belgherbi, and Philippe Ravaud (2013). Public Availability of Results of Trials Assessing Cancer Drugs in the United States JOURNAL OF CLINICAL ONCOLOGY DOI: 10.1200/JCO.2012.46.9577

Wednesday, February 6, 2013

Our New Glass House: GSK's Commitment to AllTrials

No stones, please.

Yesterday, Alec Gaffney was kind enough to ask my opinion on GSK's signing on to the AllTrials initiative calling for full publication of clinical trial data. Some of my comments made it into his thorough and excellent article on the topic. Today, it seems worthwhile to expand on those comments.

1. It was going to happen: if not now, then soon

As mentioned in the article, I – and I suspect a fair number of other people in the industry – already thought that full CSR publication was inevitable. In the last half of 2012, the EMA began moving very decisively in the direction of clinical trial results publication, but that's just the culmination of a long series of steps towards greater transparency in the drug development process. Starting with the 1997 law that mandated the creation of the ClinicalTrials.gov registry, we have witnessed a near-continuous increase in requirements for public registration and reporting around clinical trials.

It's important to see the AllTrials campaign in this context. If AllTrials didn't exist, something very much like it would have come along. We had been moving in this direction already (the Declaration of Helsinki called for full publication 4 years before AllTrials even existed), and the time was ripe. In fact, the only thing that I personally found surprising about AllTrials is that it started in the UK, since over the past 15 years most of the advances in trial transparency had come from the US.

2. It's a good thing, but it's not earth-shattering

Practically speaking, releasing the full CSR probably won't have a substantial impact on everyday clinical practice by doctors. The real meat of the CSR that doctors care about has already been mandated on ClinicalTrials.gov – full results posting was required by FDAAA in 2008.

There seems to be pretty clear evidence that many (perhaps most) practicing physicians do not read the complete articles on clinical trials already, but rather gravitate to abstracts and summary tables. It is highly doubtful, therefore, that a high percentage of physicians will actually read through a series of multi-hundred-page documents to try to glean fresh nuances about the drugs they prescribe.

Presumably, we'll see synopsizing services arise to provide executive summaries of the CSR data, and these may turn out to be popular and well-used. However, again, most of the really important and interesting bits are going to be on ClinicalTrials.gov in convenient table form (well, sort-of convenient – I admit I sometimes have a fair bit of difficulty sifting through the data that’s already posted there).

3. The real question: Where will we go with patient-level data?

In terms of actual positive impact on clinical research, GSK's prior announcement last October – making full patient-level data available to researchers – was a much bigger deal. That opens up the data to all sorts of potential re-analyses, including more thorough looks at patient subpopulations.

Tellingly, no one else in pharma has followed suit yet. I expect we’ll see a few more major AllTrials signatories in fairly short order (and I certainly intend to vigorously encourage all of my clients to be among the first wave of signatories!), but I don’t know that we’ll see anyone offer up the complete data sets.  To me, that will be the trend to watch over the next 2-3 years.

[Image: Transparent abode courtesy of Flickr user seier+seier.]

Tuesday, February 5, 2013

The World's Worst Coin Trick?


Ben Goldacre – whose Bad Pharma went on sale today – is fond of using a coin-toss-cheating analogy to describe the problem of "hidden" trials in pharmaceutical clinical research. He uses it in this TED talk:
If it's a coin-toss conspiracy, it's the worst one in the history of conspiracies.
If I flipped a coin a hundred times, but then withheld the results from you from half of those tosses, I could make it look as if I had a coin that always came up heads. But that wouldn't mean that I had a two-headed coin; that would mean that I was a chancer, and you were an idiot for letting me get away with it. But this is exactly what we blindly tolerate in the whole of evidence-based medicine. 
and in this recent op-ed column in the New York Times:
If I toss a coin, but hide the result every time it comes up tails, it looks as if I always throw heads. You wouldn't tolerate that if we were choosing who should go first in a game of pocket billiards, but in medicine, it’s accepted as the norm. 
I can understand why he likes using this metaphor. It's a striking and concrete illustration of his claim that pharmaceutical companies are suppressing data from clinical trials in an effort to make ineffective drugs appear effective. It also dovetails elegantly, from a rhetorical standpoint, with his frequently-repeated claim that "half of all trials go unpublished" (the reader is left to make the connection, but presumably it's all the tail-flip trials, with negative results, that aren't published).

Like many great metaphors, however, this coin-scam metaphor has the distinct weakness of being completely disconnected from reality.

If we can cheat and hide bad results, why do we have so many public failures? Pharmaceutical headlines in the past year were mostly dominated by a series of high-profile clinical trial failures. Even drugs that showed great promise in phase 2 failed in phase 3 and were discontinued. Less than 20% of drugs that enter human testing ever make it to market ... and by some accounts it may be less than 10%. Pfizer had a great run of approvals to end 2012, with 4 new drugs approved by the FDA (including Xalkori, the exciting targeted therapy for lung cancer). And yet during that same period, the company discontinued 8 compounds.

Now, this wasn't always the case. Mandatory public registration of all pharma trials didn't begin in the US until 2005, and mandatory public results reporting came later than that. Before then, companies certainly had more leeway to keep results to themselves, with one important exception: the FDA still had the data. If you ran 4 phase 3 trials on a drug, and only 2 of them were positive, you might be able to only publish those 2, but when it came time to bring the drug to market, the regulators who reviewed your NDA report would be looking at the totality of evidence – all 4 trials. And in all likelihood you were going to be rejected.

That was definitely not an ideal situation, but even then it wasn't half as dire as Goldacre's Coin Toss would lead you to believe. The cases of ineffective drugs reaching the US market are extremely rare: if anything, FDA has historically been criticized for being too risk-averse and preventing drugs with only modest efficacy from being approved.

Things are even better now. There are no hidden trials, the degree of rigor (in terms of randomization, blinding, and analysis) has ratcheted up consistently over the last two decades, lots more safety data gets collected along the way, and phase 4 trials are actually being executed and reported in a timely manner. In fact, it is safe to say that medical research has never been as thorough and rigorous as it is today.

That doesn't mean we can’t get better. We can. But the main reason we can is that we got on the path to getting better 20 years ago, and continue to make improvements.

Buying into Goldacre's analogy requires you to completely ignore a massive flood of public evidence to the contrary. That may work for the average TED audience, but it shouldn't be acceptable at the level of rational public discussion.

Of course, Goldacre knows that negative trials are publicized all the time. His point is about publication bias. However, when he makes his point so broadly as to mislead those who are not directly involved in the R&D process, he has clearly stepped out of the realm of thoughtful and valid criticism.

I got my pre-ordered copy of Bad Pharma this morning, and look forward to reading it. I will post some additional thoughts on the book as I get through it. In the meantime, those looking for more can find a good skeptical review of some of Goldacre's data on the Dianthus Medical blog here and here.

[Image: Bad Pharma's Bad Coin courtesy of Flickr user timparkinson.]

Friday, September 21, 2012

Trials in Alzheimer's Disease: The Long Road Ahead

Placebo Control is going purple today in support of Alzheimer’s Action Day.

A couple of clinical-trial-related thoughts on the ongoing struggle to find even one effective therapy (currently approved drugs show some ability to slow the progression of AD, but not to stop it, much less reverse it):
  • The headlines so far this year have been dominated by the high-profile and incredibly expensive failures of bapineuzumab and solanezumab. However, these two are just the most recent of a long series of failures: a recent industry report tallies 101 investigational drugs that have failed clinical trials or been suspended in development since 1998, against only 3 successes, an astonishing and painful 34:1 failure rate.

  • While we are big fans of the Alzheimer’s Association (just down the street from Placebo HQ here in Chicago) and the Alzheimer’s Foundation of America, it’s important to stress that the single most important contribution that patients and caregivers can make is to get involved in a clinical trial. That same report lists 93 new treatments currently being evaluated. As of today, the US clinical trials registry lists 124 open trials for AD. Many of these studies only require a few hundred participants, so each individual decision to enroll is important and immediately visible.

  • While all research is important, I want to single out the phenomenal work being done by ADNI, the Alzheimer’s Disease Neuroimaging Initiative. This is a public/private partnership that is collecting a vast amount of data – blood, cerebrospinal fluid, MRIs, and PET scans – on hundreds of AD patients and matched controls. Best of all, all of the data collected is published in a free, public database hosted by UCLA. Additional funding has recently led to the development of the ADNI-2 study, which will enroll 550 more participants.
Without a doubt, finding and testing effective medications for Alzheimer's Disease is going to take many more years of hard, frustrating work. It will be a path littered with many more failures and therapeutic dead-ends. Today's a good day to stop and recognize that fact, and strengthen our resolve to work together to end this disease.

Wednesday, August 22, 2012

The Case against Randomized Trials is, Fittingly, Anecdotal


I have a lot of respect for Eric Topol, and am a huge fan of his ongoing work to bring new mobile technology to benefit patients.

The Trial of the Future
However, I am simply baffled by this short video he recently posted on his Medscape blog. In it, he argues against the continued use of randomized controlled trials (RCTs) to provide evidence for or against new drugs.

His argument consists of two anecdotes: one negative, one positive. The negative anecdote is about the recently approved drug for melanoma, Zelboraf:
Well, that's great if one can do [RCTs], but often we're talking about needing thousands, if not tens of thousands, of patients for these types of clinical trials. And things are changing so fast with respect to medicine and, for example, genomically guided interventions that it's going to become increasingly difficult to justify these very large clinical trials. 
For example, there was a drug trial for melanoma and the mutation of BRAF, which is the gene that is found in about 60% of people with malignant melanoma. When that trial was done, there was a placebo control, and there was a big ethical charge asking whether it is justifiable to have a body count. This was a matched drug for the biology underpinning metastatic melanoma, which is essentially a fatal condition within 1 year, and researchers were giving some individuals a placebo.
First and foremost, this is simply factually incorrect on a couple of extremely important points.

  1. Zelboraf was not approved based on any placebo-controlled trials. The phase 1 and phase 2 trials were both single-arm, open label studies. The only phase 3 trial run before FDA approval used dacarbazine in the comparator arm. In fact, of the 34 trials currently listed for Zelboraf on ClinicalTrials.gov, only one has a placebo control: it’s an adjuvant trial for patients whose melanoma has been completely resected, where no treatment may very well be the best option.
  2. The Zelboraf trials are not an example of “needing thousands, if not tens of thousands, of patients” for approval. The phase 3 trial enrolled 675 patients. Even adding the phase 1 and 2 trials doesn’t get us to 1000 patients.

Correcting these details takes a lot away from the power of this single drug to be a good example of why we should stop using “the sanctimonious [sic] randomized, placebo-controlled clinical trial”.

The second anecdote is about a novel Alzheimer’s Disease candidate:
A remarkable example of a trial of the future was announced in May. For this trial, the National Institutes of Health is working with [Banner Alzheimer's Institute] in Arizona, the University of Antioquia in Colombia, and Genentech to have a specific mutation studied in a large extended family living in the country of Colombia in South America. There is a family of 8000 individuals who have the so-called Paisa mutation, a presenilin gene mutation, which results in every member of this family developing dementia in their 40s. 
Researchers will be testing a drug that binds amyloid, a monoclonal antibody, in just 300 family members. They're not following these patients out to the point of where they get dementia. Instead, they are using surrogate markers to see whether or not the process of developing Alzheimer's can be blocked using this drug. This is an exciting way in which we can study treatments that can potentially prevent Alzheimer's in a very well-demarcated, very restricted population with a genetic defect, and then branch out to a much broader population of people who are at risk for Alzheimer's. These are the types of trials of the future. 
There are some additional disturbing factual errors here – the extended family numbers about 5,000, not 8,000. And estimates of the prevalence of the mutation within that family appear to vary from about one-third to one-half, so it’s simply wrong to state that “every member of this family” will develop dementia.

However, those errors are relatively minor, and are completely overshadowed by the massive irony that this is a randomized, placebo-controlled trial. Only 100 of the 300 trial participants will receive the active study drug, crenezumab. The other 200 will be on placebo.

And so, the “trial of the future” held up as a way to get us out of using randomized, placebo-controlled trials is actually a randomized, placebo-controlled trial itself. I hope you can understand why I’m completely baffled that Topol thinks this is evidence of anything.

Finally, I have to ask: how is this the trial of the future, anyway? It is a short-term study on a highly-selected patient population with a specific genetic profile, measuring surrogate markers to provide proof of concept for later, larger studies. Is it just me, or does that sound exactly like the early lovastatin trials of the mid-1980s, which tested cholesterol reduction in a small population of patients with severe heterozygous familial hypercholesterolemia? Back to the Future, indeed.


[Image: time-travelling supercar courtesy of Flickr user JoshBerglund19.]

Monday, August 13, 2012

Most* Clinical Trials Are Too** Small

* for some value of "most"
** for some value of "too"


[Note: this is a companion to a previous post, Clouding the Debate on Clinical Trials: Pediatric Edition.]

Are many current clinical trials underpowered? That is, will they not enroll enough patients to adequately answer the research question they were designed to answer? Are we wasting time and money – and even worse, the time and effort of researchers and patient-volunteers – by conducting research that is essentially doomed to produce clinically useless results?

That is the alarming upshot of the coverage on a recent study published in the Journal of the American Medical Association. This Duke Medicine News article was the most damning in its denunciation of the current state of clinical research:
Duke: Mega-Trial experts concerned that not enough trials are mega-trials
Large-Scale Analysis Finds Majority of Clinical Trials Don't Provide Meaningful Evidence

The largest comprehensive analysis of ClinicalTrials.gov finds that clinical trials are falling short of producing high-quality evidence needed to guide medical decision-making.
The study was also covered in many industry publications, as well as the mainstream news. Those stories were less sweeping in their indictment of the "clinical trial enterprise", but carried the same main theme: that an "analysis" had determined that most current clinical trials were "too small".

I have only one quibble with this coverage: the study in question didn’t demonstrate any of these points. At all.

The study is a simple listing of gross characteristics of interventional trials registered over a 6-year period. It is purely descriptive, and limits itself to data entered by the trial sponsor as part of the registration on ClinicalTrials.gov. It contains no information on the quality of the trials themselves.

That last part can’t be emphasized enough: the study contains no quality benchmarks. No analysis of trial design. No benchmarking of the completeness or accuracy of the data collected. No assessment of the clinical utility of the evidence produced. Nothing like that at all.

So, the question that nags at me is: how did we get from A to B? How did this mildly-interesting-and-entirely-descriptive data listing transform into a wholesale (and entirely inaccurate) denunciation of clinical research?

For starters, the JAMA authors divide registered trials into 3 enrollment groups: 1-100, 101-1000, and >1000. I suppose this is fine, although it should be noted that it is entirely arbitrary – there is no particular reason to divide things up this way, except perhaps a fondness for neat round numbers.

Trials within the first group are then labeled "small". No effort is made to explain why 100 patients represents a clinically important break point, but the authors feel confident in concluding that clinical research is "dominated by small clinical trials", because 62% of registered trials fit into this newly-invented category. From there, all you need is a completely vague yet ominous quote from the lead author. As US News put it:
The new report says 62 percent of the trials from 2007-2010 were small, with 100 or fewer participants. Only 4 percent had more than 1,000 participants.

"There are 330 new clinical trials being registered every week, and a number of them are very small and probably not as high quality as they could be," [lead author Dr Robert] Califf said.
"Probably not as high quality as they could be", while just vague enough to be unfalsifiable, is also not at all a consequence of the data as reported. So, through a chain of arbitrary decisions and innuendo, "less than 100" becomes "small" becomes "too small" becomes "of low quality".

Califf’s institution, Duke, appears to be particularly guilty of driving this evidence-free overinterpretation of the data, as seen in the sensationalistic headline and lede quoted above. However, it’s clear that Califf himself is blurring the distinction between what his study showed and what it didn’t:
"Analysis of the entire portfolio will enable the many entities in the clinical trials enterprise to examine their practices in comparison with others," says Califf. "For example, 96 percent of clinical trials have ≤1000 participants, and 62 percent have ≤ 100. While there are many excellent small clinical trials, these studies will not be able to inform patients, doctors, and consumers about the choices they must make to prevent and treat disease."
Maybe he’s right that these small studies will not be able to inform patients and doctors, but his study has provided absolutely no support for that statement.

When we build a protocol, there are actually only 3 major factors that go into determining how many patients we want to enroll:
  1. How big a difference we estimate the intervention will have compared to a control (the effect size)
  2. How much risk we’ll accept that we’ll get a false-positive (alpha) or false-negative (beta) result
  3. Occasionally, whether we need to add participants to better characterize safety and tolerability (as is frequently, and quite reasonably, requested by FDA and other regulators)
Quantity is not quality: enrolling too many participants in an investigational trial is unethical and a waste of resources. If the numbers determine that we should randomize 80 patients, it would make absolutely no sense to randomize 21 more so that the trial is no longer "too small". Those 21 participants could be enrolled in another trial, to answer another worthwhile question.
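For what it's worth, here is the standard back-of-the-envelope version of that calculation (factors 1 and 2 above) as a short sketch, using the usual normal approximation for a two-arm comparison of means. The effect sizes plugged in at the end are made up, purely to show how a perfectly well-powered trial can land well under the 100-patient line.

```python
import math
from statistics import NormalDist

def n_per_arm(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate patients per arm for a two-arm comparison of means,
    using the normal approximation. effect_size is the expected difference
    in standardized units (Cohen's d)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # factor 2: tolerated false-positive risk
    z_beta = z.inv_cdf(power)            # factor 2: tolerated false-negative risk
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)  # factor 1: effect size

# A trial chasing a large effect is legitimately "small":
print(n_per_arm(0.8))   # 25 per arm, 50 patients total
# A trial chasing a modest effect needs many more:
print(n_per_arm(0.2))   # 393 per arm, ~786 patients total
```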

So the answer to "how big should a trial be?" is "exactly as big as it needs to be." Taking descriptive statistics and applying normative categories to them is unhelpful, and does not make for better research policy.


ResearchBlogging.org Califf RM, Zarin DA, Kramer JM, Sherman RE, Aberle LH, & Tasneem A (2012). Characteristics of clinical trials registered in ClinicalTrials.gov, 2007-2010. JAMA : the journal of the American Medical Association, 307 (17), 1838-47 PMID: 22550198

Wednesday, August 8, 2012

Testing Transparency with the TEST Act

A quick update on my last post regarding the enormously controversial – but completely unmentioned – requirement to publicly report all versions of clinical trial protocols on ClinicalTrials.gov: The New England Journal of Medicine has weighed in with an editorial strongly in support of the TEST Act.

NEJM Editor-in-Chief Jeffrey Drazen at least mentions the supporting documents requirement, but only in part of one sentence, where he confusingly refers to the act "extending results reporting to include the deposition of consent and protocol documents approved by institutional review boards." The word "deposition" does not suggest actual publication, which the act clearly requires. 

I don't think this does much to improve transparency about the impact the TEST Act, as written, would have. I'm not surprised when a trade publication like Center Watch recycles a press release into a news item. However, it wouldn't seem like too much to ask that NEJM editorials aspire to a moderately higher standard of critical inquiry.

Monday, August 6, 2012

Public Protocols? Burying the lede on the TEST Act

Not to be confused with the Test Act.
(via Luminarium)
4 Democratic members of Congress recently co-sponsored the TEST (Trial and Experimental Studies Transparency) Act, which is intended to expand the scope of mandatory registration of clinical trials. Coverage so far has been light, and mainly consists of uncritical recycling of the press release put out by Congressman Markey’s office.

Which is unfortunate, because nowhere in that release is there a single mention of the bill’s most controversial feature: publication of clinical trial "supporting documents", including the patient’s Informed Consent Form (ICF) and, incredibly, the entire protocol (including any and all subsequent amendments to the protocol).

How Rep. Markey and colleagues managed to put out a 1,000-word press release without mentioning this detail is nothing short of remarkable. Is the intent to try to sneak this through?

Full public posting of every clinical trial protocol would represent an enormous shift in how R&D is conducted in this country (and, therefore, in the entire world). It would radically alter the dynamics of how pharmaceutical companies operate by ripping out a giant chunk of every company’s proprietary investment – essentially, confiscating and nationalizing their intellectual property. 

Maybe, ultimately, that would be a good thing.  But that’s by no means clear ... and quite likely not true. Either way, however, this is not the kind of thing you bury in legislation and hope no one notices.

[Full text of the bill is here (PDF).]

[UPDATE May 17, 2013: Apparently, the irony of not being transparent with the contents of your transparency law was just too delicious to pass up, as Markey and his co-sponsors reintroduced the bill yesterday. Once again, the updated press release makes no mention of the protocol requirement.]

Wednesday, January 4, 2012

Public Reporting of Patient Recruitment?

A few years back, I was working with a small biotech company as it was ramping up to begin its first-ever pivotal trial. One of the team leads had just produced a timeline for enrollment in the trial, which was being circulated for feedback. Seeing as they had never conducted a trial of this size before, I was curious about how he had arrived at his estimate. My bigger clients had data from prior trials (both their own and their CROs’) to use, but as far as I could tell, this client had absolutely nothing.

He proudly shared with me the secret of his methodology: he had looked up some comparable studies on ClinicalTrials.gov, counted the number of listed sites, and then compared that to the sample size and start/end dates to arrive at an enrollment rate for each study. He’d then used the average of all those rates to determine how long his study would take to complete.

If you’ve ever used ClinicalTrials.gov in your work, you can immediately see the multiple, fatal flaws in that line of reasoning. The data simply doesn’t work like that. And to be fair, it wasn’t designed to work like that: the registry is intended to provide public access to what research is being done, not provide competitive intelligence on patient recruitment.
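For illustration, here is roughly what that back-of-the-envelope calculation looks like in code, along with the reasons it falls apart. The numbers are invented, and the "comparable trials" list is just a stand-in for what you can actually pull out of the registry.

```python
from datetime import date

# The back-of-the-envelope projection described above, reconstructed as code.
# All values are hypothetical; the point is what the registry fields do NOT tell you.

comparable_trials = [
    # (listed sites, enrollment, study start, primary completion)
    (40, 300, date(2008, 3, 1), date(2010, 9, 1)),
    (25, 180, date(2009, 1, 1), date(2011, 1, 1)),
]

rates = []
for sites, patients, start, end in comparable_trials:
    months = (end - start).days / 30.4
    rates.append(patients / (sites * months))  # patients per site per month

avg_rate = sum(rates) / len(rates)

# Flaws hidden in those fields:
#  - "listed sites" includes sites added late, never activated, or closed early
#  - start/completion dates span startup, enrollment, AND follow-up, not just accrual
#  - the enrollment figure may be a planned target rather than actual accrual
#  - nothing guarantees the registered trials are actually comparable to yours
planned_sites, planned_patients = 30, 250
naive_months = planned_patients / (planned_sites * avg_rate)
print(f"naive projection: {naive_months:.1f} months to full enrollment")
```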

I’m therefore sympathetic to, but skeptical of, a recent article in PLoS Medicine, Disclosure of Investigators' Recruitment Performance in Multicenter Clinical Trials: A Further Step for Research Transparency, that proposes to make reporting of enrollment a mandatory part of the trial registry. The authors would like to see not only actual randomized patients for each principal investigator, but also how that compares to their “recruitment target”.

The entire article is thought-provoking and worth a read. The authors’ main arguments in favor of mandatory recruitment reporting can be boiled down to:

  • Recruitment in many trials is poor, and public disclosure of recruitment performance will improve it
  • Sponsors, patient groups, and other stakeholders will be interested in the information
  • The data “could prompt queries” from other investigators

The first point is certainly the most compelling – improving enrollment in trials is at or near the top of everyone’s priority list – but the least supported by evidence. It is not clear to me that public scrutiny will lead to faster enrollment, and in fact in many cases it could quite conceivably lead to good investigators opting to not conduct a trial if they felt they risked being listed as “underperforming”. After all, there are many factors that will influence the total number of randomized patients at each site, and many of these are not under the PI’s control.

The other two points are true, in their way, but mandating that currently-proprietary information be given away to all competitors will certainly be resisted by industry. There are oceans of data that would be of interest to competitors, patient groups, and other investigators – that simply cannot be enough to justify mandating full public release.


Image: Philip Johnson's Glass House from Staib via Wikimedia Commons.