
Tuesday, June 4, 2013

Can FDA's New Transparency Survive Avandia?

PDUFA V commitments signal a new tolerance for open debate in the face of uncertainty.

I can admit to a rather powerful lack of enthusiasm when reading about interpersonal squabbles. It’s even worse in the scientific world: when I read about debates getting mired in personal attacks I tend to simply stop reading and move on to something else.

However, the really interesting part of this week’s meeting of an FDA joint Advisory Committee to discuss the controversial diabetes drug Avandia – at least in the sense of likely long-term impact – is not the scientific question under discussion, but the surfacing and handling of the raging interpersonal battle going on right now inside the Division of Cardiovascular and Renal Products. So I'll have to swallow my distaste and follow along with the drama.
Two words that make us mistrust Duke: Anil Potti. (Or: Christian Laettner.)

Not that the scientific question at hand – does Avandia pose significant heart risks? – isn't interesting. It is. But if there's one thing everyone seems to agree on, it's that we don't have good data on the topic. Despite the re-adjudication of RECORD, no one trusts its design (and, ironically, the one trial designed to rigorously answer the question was halted after intense pressure, despite an AdComm recommendation that it continue). And no one seems particularly enthused about changing the current status of Avandia: in all likelihood it will remain on the market under heavy restrictions. Rather than changing the future of diabetes, I suspect the committee will be content to let us slog along the same mucky trail.

The really interesting question – one that could shape CDER for years to come – is how it can function with frothing, open dissent among its staffers. As has been widely reported, FDA reviewer Tom Marciniak has written a rather wild and vitriolic assessment of the RECORD trial, excoriating nearly everyone involved. In a particularly stunning passage, Marciniak appears to claim that the entire output of anyone working at Duke University cannot be trusted because of the fraud committed by Duke cancer researcher Anil Potti:
I would have thought that the two words “Anil Potti” are sufficient for convincing anyone that Duke University is a poor choice for a contractor whose task it is to confirm the integrity of scientific research. 
(One wonders how far Marciniak is willing to take his guilt-by-association theme. Are the words “Cheng Yi Liang” sufficient to convince us that all FDA employees, including Marciniak, are poor choices for deciding matters relating to publicly traded companies? Should I not comment on government activities because I’m a resident of Illinois (my two words: “Rod Blagojevich”)?)

Rather than censoring or reprimanding Marciniak, his supervisors have taken the extraordinary step of letting him publicly air his criticisms, and have, in turn, publicly criticized his methods and approach.

I have been unable to think of a similar situation at any regulatory agency. The tolerance for dissent being displayed by FDA is, I believe, completely unprecedented.

And that’s the cliffhanger for me: can the FDA’s commitment to transparency extend so far as to accommodate public disagreements about its own approval decisions? Can it do so even when the disagreements take an extremely nasty and inappropriate tone?

  • Rather than treating open debate as a good thing, will journalists jump on the drama and portray agency leadership as weak and indecisive?
  • Will the usual suspects in Congress be able to exploit this disagreement for their own political gain? How many House subcommittees will be summoning Janet Woodcock in the coming weeks?

I think what Bob Temple and Norman Stockbridge are doing is a tremendous experiment in open government. If they can pull it off, it could force other agencies to radically rethink how they go about crafting and implementing regulations. However, I also worry that it is politically simply not a viable approach, and that the agency will ultimately be seriously hurt by attacks from the media and legislators.

Where is this coming from?

As part of its recent PDUFA V commitment, the FDA put out a fascinating draft document, Structured Approach to Benefit-Risk Assessment in Drug Regulatory Decision-Making. It didn't get a lot of attention when first published back in February (few FDA documents do). However, it lays out a rather bold vision for how the FDA can acknowledge the existence of uncertainty in its evaluation of new drugs. Its proposed structure even envisions an open and honest accounting of divergent interpretations of data:
When they're frothing at the mouth, even Atticus doesn't let them publish a review
A framework for benefit-risk decision-making that summarizes the relevant facts, uncertainties, and key areas of judgment, and clearly explains how these factors influence a regulatory decision, can greatly inform and clarify the regulatory discussion. Such a framework can provide transparency regarding the basis of conflicting recommendations made by different parties using the same information.
(Emphasis mine.)

Of course, the structured framework here is designed to reflect rational disagreement. Marciniak’s scattershot insults are in many ways a terrible first case for trying out a new level of transparency.

The draft framework notes that safety issues, like Avandia, are some of the major areas of uncertainty in the regulatory process. Contrast this vision of coolly and systematically addressing uncertainties with the sad reality of Marciniak’s attack:
In contrast to the prospective and highly planned studies of effectiveness, safety findings emerge from a wide range of sources, including spontaneous adverse event reports, epidemiology studies, meta-analyses of controlled trials, or in some cases from randomized, controlled trials. However, even controlled trials, where the evidence of an effect is generally most persuasive, can sometimes provide contradictory and inconsistent findings on safety as the analyses are in many cases not planned and often reflect multiple testing. A systematic approach that specifies the sources of evidence, the strength of each piece of evidence, and draws conclusions that explain how the uncertainty weighed on the decision, can lead to more explicit communication of regulatory decisions. We anticipate that this work will continue beyond FY 2013.
I hope that work will continue beyond 2013. Thoughtful, open discussions of real uncertainties are one of the most worthwhile goals FDA can aspire to, even if it means having to learn how to do so without letting the Marciniaks of the world scuttle the whole endeavor.

[Update June 6: Further bolstering the idea that the AdCom is just as much about FDA's ability to transparently manage differences of expert opinion in the face of uncertain data, CDER Director Janet Woodcock posted this note on the FDA's blog. She's pretty explicit about the bigger picture:
There have been, and continue to be, differences of opinion and scientific disputes, which is not uncommon within the agency, stemming from varied conclusions about the existing data, not only with Avandia, but with other FDA-regulated products. 
At FDA, we actively encourage and welcome robust scientific debate on the complex matters we deal with — as such a transparent approach ensures the scientific input we need, enriches the discussions, and enhances our decision-making.
I agree, and hope she can pull it off.]

Monday, August 13, 2012

Most* Clinical Trials Are Too** Small

* for some value of "most"
** for some value of "too"


[Note: this is a companion to a previous post, Clouding the Debate on Clinical Trials: Pediatric Edition.]

Are many current clinical trials underpowered? That is, will they not enroll enough patients to adequately answer the research question they were designed to answer? Are we wasting time and money – and even worse, the time and effort of researchers and patient-volunteers – by conducting research that is essentially doomed to produce clinically useless results?

That is the alarming upshot of the coverage on a recent study published in the Journal of the American Medical Association. This Duke Medicine News article was the most damning in its denunciation of the current state of clinical research:
Duke: Mega-Trial experts concerned that not enough trials are mega-trials
Large-Scale Analysis Finds Majority of Clinical Trials Don't Provide Meaningful Evidence

The largest comprehensive analysis of ClinicalTrials.gov finds that clinical trials are falling short of producing high-quality evidence needed to guide medical decision-making.
The study was also covered in many industry publications, as well as the mainstream news. Those stories were less sweeping in their indictment of the "clinical trial enterprise", but carried the same main theme: that an "analysis" had determined that most current clinical trials were "too small".

I have only one quibble with this coverage: the study in question didn’t demonstrate any of these points. At all.

The study is a simple listing of gross characteristics of interventional trials registered over a 6-year period. It is purely descriptive, limiting itself to data entered by the trial sponsor as part of registration on ClinicalTrials.gov. It contains no information on the quality of the trials themselves.

That last part can’t be emphasized enough: the study contains no quality benchmarks. No analysis of trial design. No benchmarking of the completeness or accuracy of the data collected. No assessment of the clinical utility of the evidence produced. Nothing like that at all.

So, the question that nags at me is: how did we get from A to B? How did this mildly-interesting-and-entirely-descriptive data listing transform into a wholesale (and entirely inaccurate) denunciation of clinical research?

For starters, the JAMA authors divide registered trials into 3 enrollment groups: 1-100, 101-1000, and >1000. I suppose this is fine, although it should be noted that it is entirely arbitrary – there is no particular reason to divide things up this way, except perhaps a fondness for neat round numbers.

Trials within the first group are then labeled "small". No effort is made to explain why 100 patients represents a clinically important break point, but the authors feel confident enough to conclude that clinical research is "dominated by small clinical trials", because 62% of registered trials fit into this newly-invented category. From there, all you need is a completely vague yet ominous quote from the lead author. As US News put it:
The new report says 62 percent of the trials from 2007-2010 were small, with 100 or fewer participants. Only 4 percent had more than 1,000 participants.

"There are 330 new clinical trials being registered every week, and a number of them are very small and probably not as high quality as they could be," [lead author Dr Robert] Califf said.
"Probably not as high quality as they could be", while just vague enough to be unfalsifiable, is also not at all a consequence of the data as reported. So, through a chain of arbitrary decisions and innuendo, "less than 100" becomes "small" becomes "too small" becomes "of low quality".

Califf’s institution, Duke, appears to be particularly guilty of driving this evidence-free overinterpretation of the data, as seen in the sensationalistic headline and lede quoted above. However, it’s clear that Califf himself is blurring the distinction between what his study showed and what it didn’t:
"Analysis of the entire portfolio will enable the many entities in the clinical trials enterprise to examine their practices in comparison with others," says Califf. "For example, 96 percent of clinical trials have ≤1000 participants, and 62 percent have ≤ 100. While there are many excellent small clinical trials, these studies will not be able to inform patients, doctors, and consumers about the choices they must make to prevent and treat disease."
Maybe he’s right that these small studies will not be able to inform patients and doctors, but his study has provided absolutely no support for that statement.

When we build a protocol, there are actually only 3 major factors that go into determining how many patients we want to enroll:
  1. How big a difference we estimate the intervention will have compared to a control (the effect size)
  2. How much risk we’ll accept that we’ll get a false-positive (alpha) or false-negative (beta) result
  3. Occasionally, whether we need to add participants to better characterize safety and tolerability (as is frequently, and quite reasonably, requested by FDA and other regulators)
Quantity is not quality: enrolling too many participants in an investigational trial is unethical and a waste of resources. If the numbers determine that we should randomize 80 patients, it would make absolutely no sense to randomize 21 more so that the trial is no longer "too small". Those 21 participants could be enrolled in another trial, to answer another worthwhile question.
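To make the arithmetic concrete, here is a minimal sketch of how those three inputs turn into an enrollment target. It assumes a simple two-arm comparison of means under a normal approximation; the function name and the effect-size, alpha, and power values are my own illustrations, not figures taken from the JAMA paper or from any particular trial.

```python
from math import ceil
from statistics import NormalDist

def n_per_arm(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate patients per arm for a two-arm comparison of means.

    effect_size is the standardized difference (delta / sigma). Uses the
    standard normal-approximation formula:
        n = 2 * ((z_{1 - alpha/2} + z_{power}) / effect_size) ** 2
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # tolerance for false positives (two-sided)
    z_beta = z.inv_cdf(power)           # tolerance for false negatives (power = 1 - beta)
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A large assumed effect (d = 0.63) needs about 40 patients per arm (~80 total),
# while a modest effect (d = 0.20) needs about 393 per arm (nearly 800 total).
print(n_per_arm(0.63))  # -> 40
print(n_per_arm(0.20))  # -> 393
```

The point of the sketch is that the enrollment target falls out of the assumed effect size and the error tolerances; whether the resulting number lands above or below a round cutoff like 100 says nothing about the trial's quality.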

So the answer to "how big should a trial be?" is "exactly as big as it needs to be." Taking descriptive statistics and applying normative categories to them is unhelpful, and does not make for better research policy.


Califf RM, Zarin DA, Kramer JM, Sherman RE, Aberle LH, & Tasneem A (2012). Characteristics of clinical trials registered in ClinicalTrials.gov, 2007-2010. JAMA, 307(17), 1838-1847. PMID: 22550198