
Saturday, March 18, 2017

The Streetlight Effect and 505(b)(2) approvals

It is a surprisingly common peril among analysts: we don’t have the data to answer the question we’re interested in, so we answer a related question where we do have data. Unfortunately, the new answer turns out to shed no light on the original interesting question.

This is sometimes referred to as the Streetlight Effect – a phenomenon aptly illustrated by Mutt and Jeff over half a century ago:


This is the situation that the Tufts Center for the Study of Drug Development seems to have gotten itself into in its latest "Impact Report". It's worth walking through how an interesting question can end up with an uninteresting answer.

So, here’s an interesting question:
My company owns a drug that may be approvable through FDA’s 505(b)(2) pathway. What is the estimated time and cost difference between pursuing 505(b)(2) approval and conventional approval?
That’s "interesting", I suppose I should add, for a certain subset of folks working in drug development and commercialization. It’s only interesting to that peculiar niche, but for those people I suspect it’s extremely interesting - because it is a real situation that a drug company may find itself in, and there are concrete consequences to the decision.

Unfortunately, this is also a really difficult question to answer. As phrased, you'd almost need a randomized trial to answer it. Let’s create a version which is less interesting but easier to answer:
What are the overall development time and cost differences between drugs seeking approval via 505(b)(2) and conventional pathways?
This is much easier to answer, as pharmaceutical companies could look back on the development times and costs of all their compounds and directly compare the different types. It is, however, a much less useful question. Many new drugs are simply not eligible for 505(b)(2) approval. If those drugs are substantially different in any way (riskier, more novel, etc.), then they will change the comparison in highly non-useful ways. In fact, in 2014, only 1 drug classified as a New Molecular Entity (NME) went through 505(b)(2) approval, versus 32 that went through conventional approval. And indeed, there are many qualities that set 505(b)(2) drugs apart.

[Chart: Extreme qualitative differences of 505(b)(2) drugs. Source: Thomson Reuters analysis via RAPS]

So we’re likely to get a lot of confounding factors in our comparison, and it’s unclear how the answer would (or should) guide us if we were truly trying to decide which route to take for a particular new drug. It might help us if we were trying to evaluate a large-scale shift to prioritizing 505(b)(2) eligible drugs, however.

Unfortunately, even this question is apparently too difficult to answer. Instead, the Tufts CSDD chose to ask and answer yet another variant:
What is the difference in time that it takes the FDA for its internal review process between 505(b)(2) and conventionally-approved drugs?
This question has the supreme virtue of being answerable. In fact, I believe that all of the data you'd need is contained within the approval letter that FDA publishes for each new approved drug.

But at the same time, it isn’t a particularly interesting question anymore. The promise of the 505(b)(2) pathway is that it should reduce total development time and cost, but on both those dimensions, the report appears to fall flat.
  • Cost: This analysis says nothing about reduced costs – those savings would mostly come in the form of fewer clinical trials, and this focuses entirely on the FDA review process.
  • Time: FDA review and approval is only a fraction of a drug’s journey from patent to market. In fact, it often takes up less than 10% of the time from initial IND to approval, so any differences in approval times will easily be overshadowed by differences in time spent in development.
But even more fundamentally, the problem here is that this study gives the appearance of providing an answer to our original question, but in fact is entirely uninformative in this regard. The accompanying press release states:
The 505(b)(2) approval pathway for new drug applications in the United States, aimed at avoiding unnecessary duplication of studies performed on a previously approved drug, has not led to shorter approval times.
This is more than a bit misleading. The 505(b)(2) statute does not in any way address approval timelines – that’s not its intent. So showing that it hasn’t led to shorter approval times is less of an insight than it is a natural consequence of the law as written.

Most importantly, showing that 505(b)(2) drugs had a longer average approval time than conventionally-approved drugs in no way should be interpreted as adding any evidence to the idea that those drugs were slowed down by the 505(b)(2) process itself. Because 505(b)(2) drugs are qualitatively different from other new molecules, this study can’t claim that they would have been developed faster had their owners initially chosen to go the route of conventional approval. In fact, such a decision might have resulted in both increased time in trials and increased approval time.

This study simply is not designed to provide an answer to the truly interesting underlying question.

[Disclosure: the above review is based entirely on a CSDD press release and summary page. The actual report costs $125, which is well in excess of this blog’s expense limit. It is entirely possible that the report itself contains more informative insights, and I’ll happily update this post if that should come to my attention.]

Monday, January 6, 2014

Can a Form Letter from FDA "Blow Your Mind"?

Adam Feuerstein appears to be a generally astute observer of the biotech scene. As a finance writer, he's accosted daily with egregiously hyped claims from small drug companies and their investors, and I think he tends to do an excellent job of spotting cases where breathless excitement is unaccompanied by substantive information.


However, Feuerstein's healthy skepticism seems to have abandoned him last year in the case of a biotech called Sarepta Therapeutics, which released some highly promising - but also incredibly limited - data on its treatment for Duchenne muscular dystrophy. After a disappointing interaction with the FDA, Sarepta's stock dropped, and Feuerstein appeared to realize that he'd lost some objectivity on the topic.


However, with the new year comes new optimism, and Feuerstein seems to be back to squinting hard at tea leaves - this time in the case of a form letter from the FDA.


He claims that the contents of the letter will "blow your mind". To him, the key passage is:


We understand that you feel that eteplirsen is highly effective, and may be confused by what you have read or heard about FDA's actions on eteplirsen. Unfortunately, the information reported in the press or discussed in blogs does not necessarily reflect FDA's position. FDA has reached no conclusions about the possibility of using accelerated approval for any new drug for the treatment of Duchenne muscular dystrophy, and for eteplirsen in particular.


Feuerstein appears to think that the fact that FDA "has reached no conclusions" means that it may be "changing its mind". To which he adds: "Wow!"
[Image: Adam Feuerstein. This time, too much froth, not enough coffee?]


I'm not sure why he thinks that. As far as I can tell, the FDA will never reach a conclusion like this before it's gone through the actual review process. After all, if FDA already knows the answer before the full review, what would be the point of the review? It would seem a tremendous waste of agency resources. Not to mention how non-level the playing field would be if some companies were given early yes/no decisions while others had to go through a full review.


It seems fair to ask: is this a substantive change by FDA review teams, or would it be their standard response to any speculation about whether and how they would approve or reject a new drug submission? Can Feuerstein point to other cases where FDA has given a definitive yes or no on an application before the application was ever filed? I suspect not, but am open to seeing examples.


A more plausible theory for this letter is that the FDA is attempting a bit of damage control. It is not permitted to share anything specific that it said or wrote to Sarepta about the drug, and it has come under some serious criticism for “rejecting” Sarepta’s Accelerated Approval submission. The agency has been sensitive to the DMD community, even going so far as to have Janet Woodcock and Bob Temple meet with DMD parents and advocates last February. Sarepta has effectively positioned FDA as the reason for its delay in approval, but no letters have actually been published, so the conversation has been a bit one-sided. This letter appears to be an attempt at balancing perspectives a bit, although the FDA is still hamstrung by its restriction on relating any specific communications.

Ultimately, this is a form letter that contains no new information: FDA has reached no conclusions because FDA is not permitted to reach conclusions until it has completed a fair and thorough review, which won't happen until the drug is actually submitted for approval.

We talk about "transparency" in terms of releasing clinical trial data, but to me there is a great case to be made for increased regulatory transparency. Routine publication of most FDA correspondence and meeting results (including such things as Complete Response letters, which explain FDA's thinking when it rejects new applications) would go a long way toward improving public understanding of the drug review and approval process.

Tuesday, September 3, 2013

Every Unhappy PREA Study is Unhappy in its Own Way

“Children are not small adults.” We invoke this saying, in a vague and hand-wavy manner, whenever we talk about the need to study drugs in pediatric populations. It’s an interesting idea, but it really cries out for further elaboration. If they’re not small adults, what are they? Are pediatric efficacy and safety totally uncorrelated with adult efficacy and safety? Or are children actually kind of like small adults in certain important ways?

Pediatric post-marketing studies have been completed for over 200 compounds in the years since BPCA (2002, offering a reward of 6 months of extra market exclusivity/patent life to the sponsor of any drug conducting requested pediatric studies) and PREA (2007, giving FDA the power to require pediatric studies) were enacted. I think it is fair to say that at this point, it would be nice to have some sort of comprehensive idea of how FDA views the risks associated with treating children with medications tested only on adults. Are they in general less efficacious? More? Is PK in children predictable from adult studies a reasonable percentage of the time, or does it need to be recharacterized with every drug?

Essentially, my point is that BPCA/PREA is a pretty crude tool: it is both too broad in setting what is basically a single standard for all new adult medications, and too vague as to what exactly that standard is.

In fact, a 2008 published review from FDA staffers and a 2012 Institute of Medicine report both show one clear trend: in a significant majority of cases, pediatric studies resulted in validating the adult medication in children, mostly with predictable dose and formulation adjustments (77 of 108 compounds (71%) in the FDA review, and 27 of 45 (60%) in the IOM review, had label changes that simply reflected that use of the drug was acceptable in younger patients).

So, it seems, most of the time, children are in fact not terribly unlike small adults.

But it’s also true that the percentage of studies that show a lack of efficacy, or bring to light a new safety issue with the drug’s use in children, is well above zero. There is some extremely important information here.

To paraphrase John Wanamaker: we know that half our PREA studies are a waste of time; we just don’t know which half.

This would seem to me to be the highest regulatory priority – to be able to predict which new drugs will work as expected in children, and which may truly require further study. After a couple hundred compounds have gone through this process, we really ought to be better positioned to understand how certain pharmacological properties might increase or decrease the risks of drugs behaving differently than expected in children. Unfortunately, neither the FDA nor the IOM papers venture any hypotheses about this – both end up providing long lists of examples of certain points, but not providing any explanatory mechanisms that might enable us to engage in some predictive risk assessment.

While FDASIA did not advance PREA in terms of more rigorously defining the scope of pediatric requirements (or, better yet, requiring FDA to do so), it did address one lingering concern by requiring that FDA publish non-compliance letters for sponsors that do not meet their commitments. (PREA, like FDAAA, is a bit plagued by lingering suspicions that it’s widely ignored by industry.)

The first batch of letters and responses has been published, and it offers some early insights into the problems engendered by the nebulous nature of PREA and its implementation.

These examples, unfortunately, are still a bit opaque – we will need to wait on the FDA responses to the sponsors to see if some of the counter-claims are deemed credible. In addition, there are a few references to prior deferral requests, but the details of the request (and rationales for the subsequent FDA denials) do not appear to be publicly available. You can read FDA’s take on the new postings on their blog, or in the predictably excellent coverage from Alec Gaffney at RAPS.

Looking through the first 4 drugs publicly identified for noncompliance, the clear trend is that there is no trend. All these PREA requirements have been missed for dramatically different reasons.

Here’s a quick rundown of the drugs at issue – and, more interestingly, the sponsor responses:

1. Renvela - Genzyme (full response)

Genzyme appears to be laying responsibility for the delay firmly at FDA’s feet here, basically claiming that FDA continued to pile on new requirements over time:
Genzyme’s correspondence with the FDA regarding pediatric plans and design of this study began in 2006 and included a face to face meeting with FDA in May 2009. Genzyme submitted 8 revisions of the pediatric study design based on feedback from FDA including that received in 4 General Advice Letters. The Advice Letter dated February 17, 2011 contained further recommendations on the study design, yet still required the final clinical study report by December 31, 2011.
This highlights one of PREA’s real problems: the requirements as specified in most drug approval letters are not specific enough to fully dictate the study protocol. Instead, there is a lot of back and forth between the sponsor and FDA, and it seems that FDA does not always fully account for their own contribution to delays in getting studies started.

2. Hectorol - Genzyme (full response)

In this one, Genzyme blames the FDA not for too much feedback, but for none at all:
On December 22, 2010, Genzyme submitted a revised pediatric development plan (Serial No. 212) which was intended to address FDA feedback and concerns that had been received to date. This submission included proposed protocol HECT05310. [...] At this time, Genzyme has not received feedback from the FDA on the protocol included in the December 22, 2010 submission.
If this is true, it is extremely embarrassing for FDA. Have they really provided no feedback in over 2.5 years, yet are still sending noncompliance letters to the sponsor? It will be very interesting to see an FDA response to this.

3. Cleviprex – The Medicines Company (full response)

This is the only case where the pharma company appears to be clearly trying to game the system a bit. According to their response:
Recognizing that, due to circumstances beyond the company’s control, the pediatric assessment could not be completed by the due date, The Medicines Company notified FDA in September 2010, and sought an extension. At that time, it was FDA’s view that no extensions were available. Following the passage of FDASIA, which specifically authorizes deferral extensions, the company again sought a deferral extension in December 2012. 
So, after hearing that they had to move forward in 2010, the company promptly waited 2 years to ask for another extension. During that time, the letter seems to imply that they did not try to move the study forward at all, preferring to roll the dice and wait for changing laws to help them get out from under the obligation.

4. Twinject/Adrenaclick – Amedra (full response)

The details of this one are heavily redacted, but it may also be a bit of gamesmanship from the sponsor. After purchasing the injectors, Amedra asked for a deferral. When the deferral was denied, they simply asked for the requirements to be waived altogether. That seems backwards, but perhaps there's a good reason for that.

---

Clearly, 4 drugs is not a sufficient sample to say anything definitive, especially when we don't have FDA's take on the sponsor responses. However, it is interesting that these 4 cases seem to reflect an overall pattern with BPCA and PREA - results are scattershot and anecdotal. We could all clearly benefit from a more systematic assessment of why some of these trials work and some don't, with a goal of someday soon abandoning one-size-fits-all regulation and focusing resources where they will do the most good.

Wednesday, July 31, 2013

Brazen Scofflaws? Are Pharma Companies Really Completely Ignoring FDAAA?

Results reporting requirements are pretty clear. Maybe critics should re-check their methods?

Ben Goldacre has rather famously described the clinical trial reporting requirements in the Food and Drug Administration Amendments Act of 2007 as a “fake fix” that was being thoroughly “ignored” by the pharmaceutical industry.

Pharma: breaking the law in broad daylight?
He makes this sweeping, unconditional proclamation about the industry and its regulators on the basis of a single study in the BMJ, blithely ignoring the facts that (a) the authors of the study admitted they could not adequately determine the number of studies meeting FDAAA requirements, and (b) a subsequent FDA review identified only 15 trials potentially out of compliance, out of a pool of thousands.


Despite the fact that the FDA, which has access to more data, says that only a tiny fraction of studies are potentially noncompliant, Goldacre's frequently repeated claim that the law is being ignored seems to have caught on in the general run of journalistic and academic discussions of FDAAA.

And now there appears to be additional support for the idea that a large percentage of studies are noncompliant with FDAAA results reporting requirements, in the form of a new study in the Journal of Clinical Oncology: "Public Availability of Results of Trials Assessing Cancer Drugs in the United States" by Thi-Anh-Hoa Nguyen et al. In it, the authors report even lower levels of FDAAA compliance – a mere 20% of randomized clinical trials met the requirement of posting results on clinicaltrials.gov within one year.

Unsurprisingly, the JCO results were immediately picked up and circulated uncritically by the usual suspects.

I have to admit not knowing much about pure academic and cooperative group trial operations, but I do know a lot about industry-run trials – simply put, I find the data as presented in the JCO study impossible to believe. Everyone I work with in pharma trials is painfully aware of the regulatory environment they work in. FDAAA compliance is a given, a no-brainer: large internal legal and compliance teams are everywhere, ensuring that the letter of the law is followed in clinical trial conduct. If anything, pharma sponsors are twitchily over-compliant with these kinds of regulations (for example, most still adhere to 100% verification of source documentation – sending monitors to physically examine every single record of every single enrolled patient - even after the FDA explicitly told them they didn't have to).

I realize that’s anecdotal evidence, but when such behavior is so pervasive, it’s difficult to buy into data that says it’s not happening at all. The idea that all pharmaceutical companies are ignoring a highly visible law that’s been on the books for 6 years is extraordinary. Are they really so brazenly breaking the rules? And is FDA abetting them by disseminating incorrect information?

Those are extraordinary claims, and would seem to require extraordinary evidence. The BMJ study had clear limitations that make its implications entirely unclear. Is the JCO article any better?

Some Issues


In fact, there appear to be at least two major issues that may have seriously compromised the JCO findings:

1. Studies that were certified as being eligible for delayed reporting requirements, but do not have their certification date listed.

The study authors make what I believe to be a completely unwarranted assumption:

In trials for approval of new drugs or approval for a new indication, a certification [permitting delayed results reporting] should be posted within 1 year and should be publicly available.

It’s unclear to me why the authors think the certifications “should be” publicly available. In re-reading FDAAA section 801, I don’t see any reference to that being a requirement. I suppose I could have missed it, but the authors provide a citation to a page that clearly does not list any such requirement.

But their methodology assumes that all trials that have a certification will have it posted:

If no results were posted at ClinicalTrials.gov, we determined whether the responsible party submitted a certification. In this case, we recorded the date of submission of the certification to ClinicalTrials.gov.

If a sponsor gets approval from FDA to delay reporting (as is routine for all drugs that are either not approved for any indication, or being studied for a new indication – i.e., the overwhelming majority of pharma drug trials), but doesn't post that approval on the registry, the JCO authors deem that trial “noncompliant”. This is not warranted: the company may have simply chosen not to post the certification despite being entirely FDAAA compliant.

2. Studies that were previously certified for delayed reporting and subsequently reported results

It is hard to tell how the authors treated this rather substantial category of trials. If a trial was certified for delayed results reporting but then subsequently posted results, the certification date becomes difficult to find. Indeed, it appears that where results were posted, the authors simply looked at the time from study completion to results posting. In effect, this would re-classify almost every one of these trials from compliant to non-compliant. Consider this example trial:


  • Phase 3 trial completes January 2010
  • Certification of delayed results obtained December 2010 (compliant)
  • FDA approval June 2013
  • Results posted July 2013 (compliant)


In looking at the JCO paper's methods section, it really appears that this trial would be classified as reporting results 3.5 years after completion, and therefore be considered noncompliant with FDAAA. In fact, this trial is entirely kosher, and would be extremely typical for many phase 2 and 3 trials in industry.
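To make the two readings of the methods concrete, here is a minimal Python sketch of the classification logic. This is my own illustration, not code from the paper: the rule names, exact dates, and 365-day cutoff are all assumptions for the sake of the example.

```python
from datetime import date

ONE_YEAR_DAYS = 365  # assumed reporting deadline after study completion

def naive_rule(completion, results_posted):
    """Classify compliance solely on time from study completion to
    results posting -- the reading that appears to match the JCO methods."""
    return (results_posted - completion).days <= ONE_YEAR_DAYS

def certification_aware_rule(completion, certification, results_posted):
    """Credit a delayed-reporting certification filed within a year of
    completion, which lawfully defers the results-reporting clock."""
    if certification is not None and (certification - completion).days <= ONE_YEAR_DAYS:
        return True  # reporting was deferred; results may follow approval
    return naive_rule(completion, results_posted)

# The example trial above, with approximate dates:
completion = date(2010, 1, 31)      # phase 3 trial completes
certification = date(2010, 12, 15)  # delayed-reporting certification (compliant)
results_posted = date(2013, 7, 15)  # results posted after June 2013 approval

print(naive_rule(completion, results_posted))                                # False
print(certification_aware_rule(completion, certification, results_posted))  # True
```

Under the naive rule, the trial reports roughly 3.5 years after completion and is scored noncompliant; under the certification-aware rule, the timely certification makes it compliant. The gap between the two classifications is exactly the gap at issue here.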

Time for Some Data Transparency


The above two concerns may, in fact, be non-issues. They certainly appear to be implied in the JCO paper, but the wording isn't terribly detailed and could easily be giving me the wrong impression.

However, if either or both of these issues are real, they may affect the vast majority of "noncompliant" trials in this study. Given that most clinical trials are looking either at new drugs or at new indications for already-approved drugs, these two issues may entirely explain the gap between the JCO study and the unequivocal FDA statements that contradict it.

I hope that, given the importance of transparency in research, the authors will be willing to post their data set publicly so that others can review their assumptions and independently verify their conclusions. It would be more than a bit ironic otherwise.

[Image credit: Shameless lawlessness via Flickr user willytronics.]


Thi-Anh-Hoa Nguyen, Agnes Dechartres, Soraya Belgherbi, and Philippe Ravaud (2013). Public Availability of Results of Trials Assessing Cancer Drugs in the United States. Journal of Clinical Oncology. DOI: 10.1200/JCO.2012.46.9577

Tuesday, June 4, 2013

Can FDA's New Transparency Survive Avandia?

PDUFA V commitments signal a strong tolerance of open debate in the face of uncertainty.

I can admit to a rather powerful lack of enthusiasm when reading about interpersonal squabbles. It’s even worse in the scientific world: when I read about debates getting mired in personal attacks I tend to simply stop reading and move on to something else.

However, the really interesting part of this week’s meeting of an FDA joint Advisory Committee to discuss the controversial diabetes drug Avandia – at least in the sense of likely long-term impact – is not the scientific question under discussion, but the surfacing and handling of the raging interpersonal battle going on right now inside the Division of Cardiovascular and Renal Products. So I'll have to swallow my distaste and follow along with the drama.
[Image: Two words that make us mistrust Duke: Anil Potti. Christian Laettner.]

Not that the scientific question at hand – does Avandia pose significant heart risks? – isn't interesting. It is. But if there’s one thing that everyone seems to agree on, it’s that we don’t have good data on the topic. Despite the re-adjudication of RECORD, no one trusts its design (and, ironically, the one trial with a design to rigorously answer the question was halted after intense pressure, despite an AdComm recommendation that it continue).  And no one seems particularly enthused about changing the current status of Avandia: in all likelihood it will continue to be permitted to be marketed under heavy restrictions. Rather than changing the future of diabetes, I suspect the committee will be content to let us slog along the same mucky trail.

The really interesting question, that will potentially impact CDER for years to come, is how it can function with frothing, open dissent among its staffers. As has been widely reported, FDA reviewer Tom Marciniak has written a rather wild and vitriolic assessment of the RECORD trial, excoriating most everyone involved. In a particularly stunning passage, Marciniak appears to claim that the entire output of anyone working at Duke University cannot be trusted because of the fraud committed by Duke cancer researcher Anil Potti:
I would have thought that the two words “Anil Potti” are sufficient for convincing anyone that Duke University is a poor choice for a contractor whose task it is to confirm the integrity of scientific research. 
(One wonders how far Marciniak is willing to take his guilt-by-association theme. Are the words “Cheng Yi Liang” sufficient to convince us that all FDA employees, including Marciniak, are poor choices for deciding matters relating to publicly-traded companies? Should I not comment on government activities because I’m a resident of Illinois (my two words: “Rod Blagojevich”)?)

Rather than censoring or reprimanding Marciniak, his supervisors have taken the extraordinary step of letting him publicly air his criticisms, and then they have in turn publicly criticized his methods and approach.

I have been unable to think of a similar situation at any regulatory agency. The tolerance for dissent being displayed by FDA is, I believe, completely unprecedented.

And that’s the cliffhanger for me: can the FDA’s commitment to transparency extend so far as to accommodate public disagreements about its own approval decisions? Can it do so even when the disagreements take an extremely nasty and inappropriate tone?

  • Rather than considering that open debate is a good thing, will journalists jump on the drama and portray agency leadership as weak and indecisive?
  • Will the usual suspects in Congress be able to exploit this disagreement for their own political gain? How many House subcommittees will be summoning Janet Woodcock in the coming weeks?

I think what Bob Temple and Norman Stockbridge are doing is a tremendous experiment in open government. If they can pull it off, it could force other agencies to radically rethink how they go about crafting and implementing regulations. However, I also worry that it is politically simply not a viable approach, and that the agency will ultimately be seriously hurt by attacks from the media and legislators.

Where is this coming from?

As part of its recent PDUFA V commitment, the FDA put out a fascinating draft document, Structured Approach to Benefit-Risk Assessment in Drug Regulatory Decision-Making. It didn't get a lot of attention when first published back in February (few FDA documents do). However, it lays out a rather bold vision for how the FDA can acknowledge the existence of uncertainty in its evaluation of new drugs. Its proposed structure even envisions an open and honest accounting of divergent interpretations of data:
[Image: When they're frothing at the mouth, even Atticus doesn't let them publish a review.]
A framework for benefit-risk decision-making that summarizes the relevant facts, uncertainties, and key areas of judgment, and clearly explains how these factors influence a regulatory decision, can greatly inform and clarify the regulatory discussion. Such a framework can provide transparency regarding the basis of conflicting recommendations made by different parties using the same information.
(Emphasis mine.)

Of course, the structured framework here is designed to reflect rational disagreement. Marciniak’s scattershot insults are in many ways a terrible first case for trying out a new level of transparency.

The draft framework notes that safety issues, like Avandia, are some of the major areas of uncertainty in the regulatory process. Contrast this vision of coolly and systematically addressing uncertainties with the sad reality of Marciniak’s attack:
In contrast to the prospective and highly planned studies of effectiveness, safety findings emerge from a wide range of sources, including spontaneous adverse event reports, epidemiology studies, meta-analyses of controlled trials, or in some cases from randomized, controlled trials. However, even controlled trials, where the evidence of an effect is generally most persuasive, can sometimes provide contradictory and inconsistent findings on safety as the analyses are in many cases not planned and often reflect multiple testing. A systematic approach that specifies the sources of evidence, the strength of each piece of evidence, and draws conclusions that explain how the uncertainty weighed on the decision, can lead to more explicit communication of regulatory decisions. We anticipate that this work will continue beyond FY 2013.
I hope that work will continue beyond 2013. Thoughtful, open discussions of real uncertainties are one of the most worthwhile goals FDA can aspire to, even if it means having to learn how to do so without letting the Marciniaks of the world scuttle the whole endeavor.

[Update June 6: Further bolstering the idea that the AdCom is just as much about FDA's ability to transparently manage differences of expert opinion in the face of uncertain data, CDER Director Janet Woodcock posted this note on the FDA's blog. She's pretty explicit about the bigger picture:
There have been, and continue to be, differences of opinion and scientific disputes, which is not uncommon within the agency, stemming from varied conclusions about the existing data, not only with Avandia, but with other FDA-regulated products. 
At FDA, we actively encourage and welcome robust scientific debate on the complex matters we deal with — as such a transparent approach ensures the scientific input we need, enriches the discussions, and enhances our decision-making.
I agree, and hope she can pull it off.]

Wednesday, April 17, 2013

But WHY is There an App for That?


FDA should get out of the data entry business.

There’s an app for that!

We've all heard that more than enough times. It started as a line in an ad and has exploded into one of the top meme-mantras of our time: if your organization doesn't have an app, it would seem, you'd better get busy developing one.

Submitting your coffee shop review? Yes!
Submitting a serious med device problem? Less so!
So the fact that the FDA is promising to release a mobile app for physicians to report adverse events with devices is hardly shocking. But it is disappointing.

The current process for physicians and consumers to voluntarily submit adverse event information about drugs or medical devices is a bit cumbersome. The FDA's form 3500 requests quite a lot of contextual data: patient demographics, specifics of the problem, any lab tests or diagnostics that were run, and the eventual outcome. That makes sense, because it helps them to better understand the nature of the issue, and more data should make it easier to spot trends over time.

The drawback, of course, is that this makes data entry slower and more involved, which probably reduces the total number of adverse events reported – and, by most estimates, the number of reports is already far lower than the number of actual events.

And that’s the problem: converting a data-entry-intensive paper or online activity into a data-entry-intensive mobile app activity just modernizes the hassle. In fact, it probably makes it worse, as entering large amounts of free-form text is not, shall we say, a strong point of mobile apps.

The solution here is for FDA to get itself out of the data entry business. Adverse event information – and the critical contextual data to go with it – already exist in a variety of data streams. Rather than asking physicians and patients to re-enter this data, FDA should be working on interfaces for them to transfer the data that’s already there. That means developing a robust set of Application Programming Interfaces (APIs) that can be used by the teams who are developing medical data apps – everything from hospital EMR systems, to physician reference apps, to patient medication and symptom tracking apps. Those applications are likely to have far more data inside them than FDA currently receives, so enabling more seamless transmission of that data should be a top priority.

(A simple analogy might be helpful here: when an application on your computer or phone crashes, the operating system generally bundles any diagnostic information together, then asks if you want to submit the error data to the manufacturer. FDA should be working with external developers on this type of “1-click” system rather than providing user-unfriendly forms to fill out.)
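To make that contrast concrete, here is a purely hypothetical sketch of what such a hand-off might look like from an app developer's side. Every field name, structure, and function below is invented for illustration; none of it reflects an actual FDA interface:

```python
import json

def build_ae_report(patient: dict, event: dict, labs: list) -> str:
    """Bundle contextual data an EMR app already holds into a single
    structured payload, so nothing has to be re-typed by hand.
    (Hypothetical schema, for illustration only.)"""
    report = {
        "report_type": "device_adverse_event",
        # The app holds more than it sends: only the demographics
        # relevant to the report are included, not identifiers.
        "patient": {k: patient[k] for k in ("age", "sex")},
        "event": event,
        "lab_results": labs,
    }
    return json.dumps(report)

payload = build_ae_report(
    patient={"age": 54, "sex": "F", "name": "withheld"},
    event={"device": "infusion pump", "problem": "unexpected shutdown"},
    labs=[{"test": "serum glucose", "value": 1.4, "units": "mmol/L"}],
)
print(payload)
```

The point of the sketch is the direction of effort: the app assembles data it already has, and the human's only job is to approve sending it, which is exactly the "1-click" crash-report pattern described above.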

A couple other programs would seem to support this approach:

  • The congressionally-mandated Sentinel Initiative, which requires FDA to set up programs to tap into active data streams, such as insurance claims databases, to detect potential safety signals
  • A 2012 White House directive for all Federal agencies to pursue the development of APIs as part of a broader "digital government" program

(Thanks to RF's Alec Gaffney for pointing out the White House directive.)

Perhaps FDA is already working on APIs for seamless adverse event reporting, but I could not find any evidence of their plans in this area. And even if they are, building a mobile app is still a waste of time and resources.

Sometimes being tech savvy means not jumping on the current tech trend: this is clearly one of those times. Let’s not have an app for that.

(Smartphone image via Flickr user DigiEnable.)

Friday, February 8, 2013

The FDA’s Magic Meeting


Can you shed three years of pipeline flab with this one simple trick?

"There’s no trick to it ... it’s just a simple trick!" -Brad Goodman

Getting a drug to market is hard. It is hard in every way a thing can be hard: it takes a long time, it's expensive, it involves a process that is opaque and frustrating, and failure is a much more likely outcome than success. Boston pioneers pointing their wagons west in 1820 had far better prospects for seeing the Pacific Ocean than a new drug, freshly launched into human trials, will ever have for earning a single dollar in sales.

Exact numbers are hard to come by, but the semi-official industry estimates are: about 6-8 years, a couple billion dollars, and more than 80% chance of ultimate failure.

Is there a secret handshake? Should we bring doughnuts?
(We should probably bring doughnuts.)
Finding ways to reduce any of those numbers is one of the premier obsessions of the pharma R&D world. We explore new technologies and standards, consider moving our trials to sites in other countries, consider skipping the sites altogether and going straight to the patient, and hire patient recruitment firms* to speed up trial enrollment. We even invent words to describe our latest and awesomest attempts at making development faster, better, and cheaper.

But perhaps all we needed was another meeting.

A recent blog post from Anne Pariser, an Associate Director at FDA's Center for Drug Evaluation and Research, suggests that attending a pre-IND meeting can shave a whopping 3 years off your clinical development timeline:
For instance, for all new drugs approved between 2010 and 2012, the average clinical development time was more than 3 years faster when a pre-IND meeting was held than it was for drugs approved without a pre-IND meeting. 
For orphan drugs used to treat rare diseases, the development time for products with a pre-IND meeting was 6 years shorter on average or about half of what it was for those orphan drugs that did not have such a meeting.
That's it? A meeting? Cancel the massive CTMS integration – all we need are a couple tickets to DC?

Pariser's post appears to be an extension of an FDA presentation made at a joint NORD/DIA meeting last October. As far as I can tell, that presentation's not public, but it was covered by the Pink Sheet's Derrick Gingery on November 1.  That presentation covered just 2010 and 2011, and actually showed a 5 year benefit for drugs with pre-IND meetings (Pariser references 2010-2012).

Consider the fact that one VC-funded vendor** was recently spotted aggressively hyping the fact that its software reduced one trial’s timeline by 6 weeks. And here the FDA is telling us that a single sit-down saves an additional 150 weeks.

In addition, a second meeting – the End of Phase II meeting – saves another year, according to the NORD presentation.  Pariser does not include EOP2 data in her blog post.

So, time to charter a bus, load up the clinical and regulatory teams, and hit the road to Silver Spring?

Well, maybe. It probably couldn't hurt, and I'm sure it would be a great bonding experience, but there are some reasons to not take the numbers at face value.
  • We’re dealing with really small numbers here. The NORD presentation covers 54 drugs, and Pariser's appears to add 39 to that total. The fact that the time-savings data shifted so dramatically – from 5 years to 3 – tips us off to the fact that we probably have a lot of variance in the data. We also have no idea how many pre-IND meetings there were, so we don't know the relative sizes of the comparison groups.
  • It's a survivor-only data set. It doesn't include drugs that were terminated or rejected. FDA would never approve a clinical trial that only looked at patients who responded, then retroactively determined differences between them.  That approach is clearly susceptible to survivorship bias.
  • It reports means. This is especially a problem given the small numbers being studied. It's entirely plausible that just one or two drugs that took a really long time are badly skewing the results. Medians with quartile ranges would have been a lot more enlightening here.
All of the above make me question how big an impact this one meeting can really have. I'm sure it's a good thing, but it can't be quite this amazing, can it?

However, it would be great to see more of these metrics, produced in more detail, by the FDA. The agency does a pretty good job of reporting on its own performance – the PDUFA performance reports are a worthwhile read – but it doesn't publish much in the way of sponsor metrics. Given the constant clamor for new pathways and concessions from the FDA, it would be truly enlightening to see how well the industry is actually taking advantage of the tools it currently has.

As Gingery wrote in his article, "Data showing that the existing FDA processes, if used, can reduce development time is interesting given the strong effort by industry to create new methods to streamline the approval process." Gingery also notes that two new official sponsor-FDA meeting points have been added in the recently-passed FDASIA, so it would seem extremely worthwhile to have some ongoing, rigorous measurement of the usage of, and benefit from, these meetings.

Of course, even if these meetings are strongly associated with faster pipeline times, don’t be so sure that simply adding the meeting will cut your development so dramatically. Goodhart's Law tells us that performance metrics, when turned into targets, have a tendency to fail: in this case, whatever it was about the drug, or the drug company leadership, that prevented the meeting from happening in the first place may still prove to be the real factor in the delay.

I suppose the ultimate lesson here might be: If your drug doesn't have a pre-IND meeting because your executive management has the hubris to believe it doesn't need FDA input, then you probably need new executives more than you need a meeting.

[Image: Meeting pictured may not contain actual magic. Photo from FDA's Flickr stream.]

*  Disclosure: the author works for one of those.
** Under the theory that there is no such thing as bad publicity, no link will be provided.



Wednesday, February 6, 2013

Our New Glass House: GSK's Commitment to AllTrials

No stones, please.

Yesterday, Alec Gaffney was kind enough to ask my opinion on GSK's signing on to the AllTrials initiative to bring full publication of clinical trial data. Some of my comments made it into his thorough and excellent article on the topic. Today, it seems worthwhile to expand on those comments.

1. It was going to happen: if not now, then soon

As mentioned in the article, I – and I suspect a fair number of other people in the industry -- already thought that full CSR publication was inevitable.  In the last half of 2012, the EMA began moving very decisively in the direction of clinical trial results publication, but that's just the culmination of a long series of steps towards greater transparency in the drug development process. Starting with the establishment of the ClinicalTrials.gov registry in 1997, we have witnessed a near-continuous increase in requirements for public registration and reporting around clinical trials.

It's important to see the AllTrials campaign in this context. If AllTrials didn't exist, something very much like it would have come along. We had been moving in this direction already (the Declaration of Helsinki called for full publication 4 years before AllTrials even existed), and the time was ripe. In fact, the only thing that I personally found surprising about AllTrials is that it started in the UK, since over the past 15 years most of the advances in trial transparency had come from the US.

2. It's a good thing, but it's not earth-shattering

Practically speaking, releasing the full CSR probably won't have a substantial impact on everyday clinical practice by doctors. The real meat of the CSR that doctors care about has already been mandated on ClinicalTrials.gov – full results posting was required by FDAAA in 2008.

There seems to be pretty clear evidence that many (perhaps most) practicing physicians do not read the complete articles on clinical trials already, but rather gravitate to abstracts and summary tables. It is highly doubtful, therefore, that a high percentage of physicians will actually read through a series of multi-hundred-page documents to try to glean fresh nuances about the drugs they prescribe.

Presumably, we'll see synopsizing services arise to provide executive summaries of the CSR data, and these may turn out to be popular and well-used. However, again, most of the really important and interesting bits are going to be on ClinicalTrials.gov in convenient table form (well, sort-of convenient – I admit I sometimes have a fair bit of difficulty sifting through the data that’s already posted there).

3. The real question: Where will we go with patient-level data?

In terms of actual positive impact on clinical research, GSK's prior announcement last October – making full patient-level data available to researchers – was a much bigger deal. That opens up the data to all sorts of potential re-analyses, including more thorough looks at patient subpopulations.

Tellingly, no one else in pharma has followed suit yet. I expect we’ll see a few more major AllTrials signatories in fairly short order (and I certainly intend to vigorously encourage all of my clients to be among the first wave of signatories!), but I don’t know that we’ll see anyone offer up the complete data sets.  To me, that will be the trend to watch over the next 2-3 years.

[Image: Transparent abode courtesy of Flickr user seier+seier.]

Tuesday, February 5, 2013

The World's Worst Coin Trick?


Ben Goldacre – whose Bad Pharma went on sale today – is fond of using a coin-toss-cheating analogy to describe the problem of "hidden" trials in pharmaceutical clinical research. He uses it in this TED talk:
If it's a coin-toss conspiracy, it's the worst
one in the history of conspiracies.
If I flipped a coin a hundred times, but then withheld the results from you from half of those tosses, I could make it look as if I had a coin that always came up heads. But that wouldn't mean that I had a two-headed coin; that would mean that I was a chancer, and you were an idiot for letting me get away with it. But this is exactly what we blindly tolerate in the whole of evidence-based medicine. 
and in this recent op-ed column in the New York Times:
If I toss a coin, but hide the result every time it comes up tails, it looks as if I always throw heads. You wouldn't tolerate that if we were choosing who should go first in a game of pocket billiards, but in medicine, it’s accepted as the norm. 
I can understand why he likes using this metaphor. It's a striking and concrete illustration of his claim that pharmaceutical companies are suppressing data from clinical trials in an effort to make ineffective drugs appear effective. It also dovetails elegantly, from a rhetorical standpoint, with his frequently-repeated claim that "half of all trials go unpublished" (the reader is left to make the connection, but presumably it's all the tail-flip trials, with negative results, that aren't published).
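The mechanics of the metaphor are trivial to reproduce. A few illustrative lines of Python show why it sounds so damning:

```python
import random

# Goldacre's coin trick: flip a fair coin 100 times, then "publish"
# only the heads. The reported record looks like a two-headed coin
# even though the coin is perfectly fair.
random.seed(42)  # fixed seed so the simulation is repeatable
flips = [random.choice(["H", "T"]) for _ in range(100)]
published = [f for f in flips if f == "H"]  # hide every tail

print(f"All flips: {flips.count('H')} heads, {flips.count('T')} tails")
print(f"Published: {len(published)} heads, 0 tails")
```

As rhetoric, it is airtight; the question, taken up below, is whether it describes how drug trials actually get reported.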

Like many great metaphors, however, this coin-scam metaphor has the distinct weakness of being completely disconnected from reality.

If we can cheat and hide bad results, why do we have so many public failures? Pharmaceutical headlines in the past year were dominated by a series of high-profile clinical trial failures. Even drugs that showed great promise in phase 2 failed in phase 3 and were discontinued. Fewer than 20% of drugs that enter human testing ever make it to market ... and by some accounts it may be fewer than 10%. Pfizer had a great run of approvals to end 2012, with 4 new drugs approved by the FDA (including Xalkori, the exciting targeted therapy for lung cancer). And yet during that same period, the company discontinued 8 compounds.

Now, this wasn't always the case. Mandatory public registration of all pharma trials didn't begin in the US until 2005, and mandatory public results reporting came later than that. Before then, companies certainly had more leeway to keep results to themselves, with one important exception: the FDA still had the data. If you ran 4 phase 3 trials on a drug, and only 2 of them were positive, you might be able to only publish those 2, but when it came time to bring the drug to market, the regulators who reviewed your NDA report would be looking at the totality of evidence – all 4 trials. And in all likelihood you were going to be rejected.

That was definitely not an ideal situation, but even then it wasn't half as dire as Goldacre's Coin Toss would lead you to believe. The cases of ineffective drugs reaching the US market are extremely rare: if anything, FDA has historically been criticized for being too risk-averse and preventing drugs with only modest efficacy from being approved.

Things are even better now. There are no hidden trials, the degree of rigor (in terms of randomization, blinding, and analysis) has ratcheted up consistently over the last two decades, lots more safety data gets collected along the way, and phase 4 trials are actually being executed and reported in a timely manner. In fact, it is safe to say that medical research has never been as thorough and rigorous as it is today.

That doesn't mean we can’t get better. We can. But the main reason we can is that we got on the path to getting better 20 years ago, and continue to make improvements.

Buying into Goldacre's analogy requires you to completely ignore a massive flood of public evidence to the contrary. That may work for the average TED audience, but it shouldn't be acceptable at the level of rational public discussion.

Of course, Goldacre knows that negative trials are publicized all the time. His point is about publication bias. However, when he makes his point so broadly as to mislead those who are not directly involved in the R&D process, he has clearly stepped out of the realm of thoughtful and valid criticism.

I got my pre-ordered copy of Bad Pharma this morning, and look forward to reading it. I will post some additional thoughts on the book as I get through it. In the meantime, those looking for more can find a good skeptical review of some of Goldacre's data on the Dianthus Medical blog here and here.

[Image: Bad Pharma's Bad Coin courtesy of Flickr user timparkinson.]

Friday, October 12, 2012

The "Scandal" of "Untested" Generics


I am in the process of writing up a review of this rather terrible Forbes piece on the FDA recall of one manufacturer's version of generic 300 mg bupropion XL. However, that's going to take a while, so I thought I'd quickly cover just one of the points brought up there, since it seems to be causing a lot of confusion.

Forbes is shocked, SHOCKED to learn that things
 are happening the same way they always have:
call Congress at once!
The FDA’s review of the recall notes that when the generic was approved, only the 150 mg version was tested for bioequivalence in humans. The 300 mg version was approved based upon the 150 mg data as well as detailed information about the manufacturing and composition of both versions.

A number of people expressed surprise about this – they seemed to genuinely not be aware that a drug approval could happen in this way. The Forbes article stated that this was entirely inappropriate and worthy of Congressional investigation.

In fact, many strengths of generic drugs do not undergo in vivo bioequivalence and bioavailability testing as part of their review and approval. This is true in both the US and Europe. Here is a brief rundown of when and why such testing is waived, and why such waivers are neither new, nor shocking, nor unethical.

Title 21, Part 320 of the US Code of Federal Regulations is the regulatory foundation regarding bioequivalence testing in drugs.  Section 22 deals specifically with conditions where human testing should be waived. It is important to note that these regulations aren't new, and the laws that they're based on aren't new either (in fact, the federal law is 20 years old, and was last updated 10 years ago).

By far the most common waiver is for lower dosage strengths. When a drug exists in many approved dosages, generally the highest dose is subject to human bioequivalence testing and the lower doses are approved based on the high-dose results supplemented by in vitro testing.

However, when higher doses carry risks of toxicity, the situation can be reversed, out of ethical concerns for the welfare of test subjects. So, for example, current FDA guidance for amiodarone – a powerful antiarrhythmic drug with lots of side effects – is that the maximum “safe” dose of 200 mg should be tested in humans, and that 100 mg, 300 mg, and 400 mg dosage formulations will be approved if the manufacturer also establishes “acceptable in-vitro dissolution testing of all strengths, and … proportional similarity of the formulations across all strengths”.

That last part is critically important: the generic manufacturer must submit additional evidence about how the doses work in vitro, as well as keep the proportions of inactive ingredients constant. It is this combination of in vivo bioequivalence, in vitro testing, and manufacturing controls that supports a sound scientific decision to approve the generic at various doses.

In fact, certain drugs are so toxic – most chemotherapies, for example – that performing a bioequivalence test in healthy humans is patently unethical. In many of those cases, generic approval is granted on the basis of formulation chemistry alone. For example, generic paclitaxel is waived from human testing (here is a waiver from 2001 – again demonstrating that there’s nothing terribly shocking or new about this process).

In the case of bupropion, FDA had significant concerns about the risk of seizures at the 300 mg dose level. Similar to the amiodarone example above, they issued guidance providing for a waiver of the higher dosage, but only based upon the combination of in vivo data from the 150 mg dose, in vitro testing, and manufacturing controls.

You may not agree with the current system, and there may be room for improvement, but you cannot claim that it is new, unusual, or requiring congressional inquiry. It’s based on federal law, with significant scientific and ethical underpinnings.

Further reading: FDA Guidance for Industry: Bioavailability and Bioequivalence Studies for Orally Administered Drug Products — General Considerations

Monday, August 27, 2012

"Guinea Pigs" on CBS is Going to be Super Great, I Can Just Tell


An open letter to Mad Men producer/writer Dahvi Waller

Dear Dahvi,

I just wanted to drop you a quick note of congratulations when I heard through the grapevine that CBS has signed you on to do a pilot episode of your new medical drama, Guinea Pigs (well actually, I heard it from the Hollywood Reporter; the grapevine doesn’t tell me squat). According to the news item,
The drama centers on group of trailblazing doctors who run clinical trials at a hospital in Philadelphia. The twist: The trials are risky, and the guinea pigs are human.
Probably just like this, but
with a bigger body count.
(Sidenote: that’s quite the twist there! For a minute, I thought this was going to be the first ever rodent-based prime time series!)

I don’t want to take up too much of your time. I’m sure you’re extremely busy with lots of critical casting decisions, like: will the Evil Big Pharma character be a blonde, beautiful-but-treacherous Ice Queen type in her early 30’s, or an expensively-suited, handsome-but-treacherous Gordon Gekko type in his early 60’s? (My advice: Don’t settle!  Use both! Viewers of all ages can love to hate the pharmaceutical industry!)

About that name, by the way: great choice! I’m really glad you didn’t overthink that one. A good writer should go with her gut and pick the first easy stereotype that pops into her head. (Because the head is never closer to the gut than when it’s jammed firmly up … but I don’t have to explain anatomy to you! You write a medical drama for television!)

I’m sure the couple-three million Americans who enroll in clinical trials each year will totally relate to your calling them guinea pigs. In our industry, we call them heroes, but that’s just corny, right? Real heroes on TV are people with magic powers, not people who contribute to the advancement of medicine.

Anyway, I’m just really excited because our industry is just so, well … boring! We’re so fixated on data collection regulations and safety monitoring and ethics committee reviews and yada yada yada – ugh! Did you know we waste 5 to 10 years on this stuff, painstakingly bringing drugs through multiple graduated phases of testing in order to produce a mountain of data (sometimes running over 100,000 pages long) for the FDA to review?

Dahvi Waller: bringing CSI
to clinical research
I’m sure you’ll be giving us the full CSI-meets-Constant-Gardener treatment, though, and it will all seem so incredibly easy that your viewers will wonder what the hell is taking us so long to make these great new medicines. (Good mid-season plot point: we have the cure for most diseases already, but they’ve been suppressed by a massive conspiracy of sleazy corporations, corrupt politicians, and inept bureaucrats!)

Anyway, best of luck to you! I can't wait to see how accurately and respectfully you treat the work of the research biologists and chemists, physician investigators, nurses, study coordinators, monitors, reviewers, auditors, and patient volunteers (sorry, guinea pigs) who are working hard to ensure the next generation of medicines are safe and effective.  What can go wrong? It's television!




Monday, August 13, 2012

Most* Clinical Trials Are Too** Small

* for some value of "most"
** for some value of "too"


[Note: this is a companion to a previous post, Clouding the Debate on Clinical Trials: Pediatric Edition.]

Are many current clinical trials underpowered? That is, will they not enroll enough patients to adequately answer the research question they were designed to answer? Are we wasting time and money – and even worse, the time and effort of researchers and patient-volunteers – by conducting research that is essentially doomed to produce clinically useless results?

That is the alarming upshot of the coverage on a recent study published in the Journal of the American Medical Association. This Duke Medicine News article was the most damning in its denunciation of the current state of clinical research:
Duke: Mega-Trial experts concerned
that not enough trials are mega-trials
Large-Scale Analysis Finds Majority of Clinical Trials Don't Provide Meaningful Evidence

The largest comprehensive analysis of ClinicalTrials.gov finds that clinical trials are falling short of producing high-quality evidence needed to guide medical decision-making.
The study was also covered in many industry publications, as well as the mainstream news. Those stories were less sweeping in their indictment of the "clinical trial enterprise", but carried the same main theme: that an "analysis" had determined that most current clinical trials were "too small".

I have only one quibble with this coverage: the study in question didn’t demonstrate any of these points. At all.

The study is a simple listing of gross characteristics of interventional trials registered over a 6 year period. It is entirely descriptive, limited to the data entered by the trial sponsor as part of the registration on ClinicalTrials.gov. It contains no information on the quality of the trials themselves.

That last part can’t be emphasized enough: the study contains no quality benchmarks. No analysis of trial design. No benchmarking of the completeness or accuracy of the data collected. No assessment of the clinical utility of the evidence produced. Nothing like that at all.

So, the question that nags at me is: how did we get from A to B? How did this mildly-interesting-and-entirely-descriptive data listing transform into a wholesale (and entirely inaccurate) denunciation of clinical research?

For starters, the JAMA authors divide registered trials into 3 enrollment groups: 1-100, 101-1000, and >1000. I suppose this is fine, although it should be noted that it is entirely arbitrary – there is no particular reason to divide things up this way, except perhaps a fondness for neat round numbers.

Trials within the first group are then labeled "small". No effort is made to explain why 100 patients represents a clinically important break point, but the authors feel confident to conclude that clinical research is "dominated by small clinical trials", because 62% of registered trials fit into this newly-invented category. From there, all you need is a completely vague yet ominous quote from the lead author. As US News put it:
The new report says 62 percent of the trials from 2007-2010 were small, with 100 or fewer participants. Only 4 percent had more than 1,000 participants.

"There are 330 new clinical trials being registered every week, and a number of them are very small and probably not as high quality as they could be," [lead author Dr Robert] Califf said.
"Probably not as high quality as they could be", while just vague enough to be unfalsifiable, is also not at all a consequence of the data as reported. So, through a chain of arbitrary decisions and innuendo, "less than 100" becomes "small" becomes "too small" becomes "of low quality".

Califf’s institution, Duke, appears to be particularly guilty of driving this evidence-free overinterpretation of the data, as seen in the sensationalistic headline and lede quoted above. However, it’s clear that Califf himself is blurring the distinction between what his study showed and what it didn’t:
"Analysis of the entire portfolio will enable the many entities in the clinical trials enterprise to examine their practices in comparison with others," says Califf. "For example, 96 percent of clinical trials have ≤1000 participants, and 62 percent have ≤ 100. While there are many excellent small clinical trials, these studies will not be able to inform patients, doctors, and consumers about the choices they must make to prevent and treat disease."
Maybe he’s right that these small studies will not be able to inform patients and doctors, but his study has provided absolutely no support for that statement.

When we build a protocol, there are actually only 3 major factors that go into determining how many patients we want to enroll:
  1. How big a difference we estimate the intervention will have compared to a control (the effect size)
  2. How much risk we’ll accept that we’ll get a false-positive (alpha) or false-negative (beta) result
  3. Occasionally, whether we need to add participants to better characterize safety and tolerability (as is frequently, and quite reasonably, requested by FDA and other regulators)
Quantity is not quality: enrolling too many participants in an investigational trial is unethical and a waste of resources. If the numbers determine that we should randomize 80 patients, it would make absolutely no sense to randomize 21 more so that the trial is no longer "too small". Those 21 participants could be enrolled in another trial, to answer another worthwhile question.
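The first two factors reduce to a standard power calculation. Here is a minimal sketch using the usual normal-approximation formula for a two-arm comparison of means; the function name and defaults are mine, chosen for illustration:

```python
from math import ceil
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05,
                power: float = 0.80) -> int:
    """Per-group sample size for a two-arm trial comparing means,
    via the standard normal-approximation formula. effect_size is
    the standardized difference between arms (Cohen's d)."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided false-positive risk
    z_beta = z.inv_cdf(power)           # false-negative risk (1 - beta)
    return ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

print(n_per_group(0.5))  # a "medium" effect: 63 patients per arm
print(n_per_group(0.2))  # a small effect: 393 patients per arm
```

Note that the required n is driven entirely by the expected effect size and the error rates we will tolerate; nothing in the formula rewards clearing an arbitrary threshold like 100 participants.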

So the answer to "how big should a trial be?" is "exactly as big as it needs to be." Taking descriptive statistics and applying normative categories to them is unhelpful, and does not make for better research policy.


ResearchBlogging.org Califf RM, Zarin DA, Kramer JM, Sherman RE, Aberle LH, & Tasneem A (2012). Characteristics of clinical trials registered in ClinicalTrials.gov, 2007-2010. JAMA : the journal of the American Medical Association, 307 (17), 1838-47 PMID: 22550198