Tuesday, September 3, 2013

Every Unhappy PREA Study is Unhappy in its Own Way

“Children are not small adults.” We invoke this saying, in a vague and hand-wavy manner, whenever we talk about the need to study drugs in pediatric populations. It’s an interesting idea, but it really cries out for further elaboration. If they’re not small adults, what are they? Are pediatric efficacy and safety totally uncorrelated with adult efficacy and safety? Or are children actually kind of like small adults in certain important ways?

Pediatric post-marketing studies have been completed for over 200 compounds in the years since BPCA (2002, which offers a reward of 6 months of extra market exclusivity/patent life to sponsors that conduct requested pediatric studies) and PREA (2007, which gives FDA the power to require pediatric studies) were enacted. I think it is fair to say that at this point, it would be nice to have some sort of comprehensive idea of how FDA views the risks associated with treating children with medications tested only on adults. Are they in general less efficacious? More? Is PK in children predictable from adult studies a reasonable percentage of the time, or does it need to be recharacterized with every drug?

Essentially, my point is that BPCA/PREA is a pretty crude tool: it is both too broad in setting what is basically a single standard for all new adult medications, and too vague as to what exactly that standard is.

In fact, a 2008 published review from FDA staffers and a 2012 Institute of Medicine report both show one clear trend: in a significant majority of cases, pediatric studies resulted in validating the adult medication in children, mostly with predictable dose and formulation adjustments (77 of 108 compounds (71%) in the FDA review, and 27 of 45 (60%) in the IOM review, had label changes that simply reflected that use of the drug was acceptable in younger patients).

So, it seems, most of the time, children are in fact not terribly unlike small adults.

But it’s also true that the percentage of studies that show a lack of efficacy, or bring to light a new safety issue with a drug’s use in children, is well above zero. There is some extremely important information here.

To paraphrase John Wanamaker: we know that half our PREA studies are a waste of time; we just don’t know which half.

This would seem to me to be the highest regulatory priority – to be able to predict which new drugs will work as expected in children, and which may truly require further study. After a couple hundred compounds have gone through this process, we really ought to be better positioned to understand how certain pharmacological properties might increase or decrease the risks of drugs behaving differently than expected in children. Unfortunately, neither the FDA nor the IOM papers venture any hypotheses about this – both end up providing long lists of examples of certain points, but not providing any explanatory mechanisms that might enable us to engage in some predictive risk assessment.

While FDASIA did not advance PREA in terms of more rigorously defining the scope of pediatric requirements (or, better yet, requiring FDA to do so), it did address one lingering concern by requiring that FDA publish non-compliance letters for sponsors that do not meet their commitments. (PREA, like FDAAA, is a bit plagued by lingering suspicions that it’s widely ignored by industry.)

The first batch of letters and responses has been published, and it offers some early insights into the problems engendered by the nebulous nature of PREA and its implementation.

These examples, unfortunately, are still a bit opaque – we will need to wait on the FDA responses to the sponsors to see if some of the counter-claims are deemed credible. In addition, there are a few references to prior deferral requests, but the details of the request (and rationales for the subsequent FDA denials) do not appear to be publicly available. You can read FDA’s take on the new postings on their blog, or in the predictably excellent coverage from Alec Gaffney at RAPS.

Looking through the first 4 drugs publicly identified for noncompliance, the clear trend is that there is no trend. All these PREA requirements have been missed for dramatically different reasons.

Here’s a quick rundown of the drugs at issue – and, more interestingly, the sponsor responses:

1. Renvela - Genzyme (full response)

Genzyme appears to be laying responsibility for the delay firmly at FDA’s feet here, basically claiming that FDA continued to pile on new requirements over time:
Genzyme’s correspondence with the FDA regarding pediatric plans and design of this study began in 2006 and included a face to face meeting with FDA in May 2009. Genzyme submitted 8 revisions of the pediatric study design based on feedback from FDA including that received in 4 General Advice Letters. The Advice Letter dated February 17, 2011 contained further recommendations on the study design, yet still required the final clinical study report by December 31, 2011.
This highlights one of PREA’s real problems: the requirements as specified in most drug approval letters are not specific enough to fully dictate the study protocol. Instead, there is a lot of back and forth between the sponsor and FDA, and it seems that FDA does not always fully account for their own contribution to delays in getting studies started.

2. Hectorol - Genzyme (full response)

In this one, Genzyme blames the FDA not for too much feedback, but for none at all:
On December 22, 2010, Genzyme submitted a revised pediatric development plan (Serial No. 212) which was intended to address FDA feedback and concerns that had been received to date. This submission included proposed protocol HECT05310. [...] At this time, Genzyme has not received feedback from the FDA on the protocol included in the December 22, 2010 submission.
If this is true, it is extremely embarrassing for FDA. Have they really provided no feedback in over 2.5 years, and yet still sent a noncompliance letter to the sponsor? It will be very interesting to see an FDA response to this.

3. Cleviprex – The Medicines Company (full response)

This is the only case where the pharma company appears to be clearly trying to game the system a bit. According to their response:
Recognizing that, due to circumstances beyond the company’s control, the pediatric assessment could not be completed by the due date, The Medicines Company notified FDA in September 2010, and sought an extension. At that time, it was FDA’s view that no extensions were available. Following the passage of FDASIA, which specifically authorizes deferral extensions, the company again sought a deferral extension in December 2012. 
So, after hearing that they had to move forward in 2010, the company promptly waited 2 years to ask for another extension. During that time, the letter seems to imply that they did not try to move the study forward at all, preferring to roll the dice and wait for changing laws to help them get out from under the obligation.

4. Twinject/Adrenaclick – Amedra (full response)

The details of this one are heavily redacted, but it may also be a bit of gamesmanship from the sponsor. After purchasing the injectors, Amedra asked for a deferral. When the deferral was denied, they simply asked for the requirements to be waived altogether. That seems backwards, but perhaps there's a good reason for that.

---

Clearly, 4 drugs is not a sufficient sample to say anything definitive, especially when we don't have FDA's take on the sponsor responses. However, it is interesting that these 4 cases seem to reflect the overall pattern with BPCA and PREA - results are scattershot and anecdotal. We could all clearly benefit from a more systematic assessment of why some of these trials work and some don't, with a goal of someday soon abandoning one-size-fits-all regulation and focusing resources where they will do the most good.

Wednesday, August 7, 2013

Counterfeit Drugs in Clinical Trials?

Counterfeits flooding the market? Really?

This morning I ran across a bit of a coffee-spitter: in the middle of an otherwise opaquely underinformative press release from TransCelerate Biopharma about the launch of their "Comparator Network" - which will perhaps streamline member companies' ability to obtain drugs from each other for clinical trials using active comparator arms - the CEO of the consortium, Dalvir Gill, drops a rather remarkable quote:

"Locating and accessing these comparators at the right time, in the right quantities and with the accompanying drug stability and regulatory information we need, doesn't always happen efficiently. This is further complicated by infiltration of the commercial drug supply chain by counterfeit drugs.  With the activation of our Comparator Network the participating TransCelerate companies will be able to source these comparator drugs directly from each other, be able to secure supply when they need it in the quantities they need, have access to drug data and totally mitigate the risk of counterfeit drugs in that clinical trial."

[Emphasis added.]

I have to admit to being a little floored by the idea that there is any sort of risk, in industry-run clinical trials, of counterfeit medication "infiltration".

Does Gill know something that the rest of us don't? Or is this just an awkward slap at perceived competition – innuendo against the companies that currently manage clinical trial comparator drug supply? Or an attempt at depicting the trials of non-Transcelerate members as risky and prone to fraud?

In any case, it could use some explaining. Thinking I might have missed something, I did a quick literature search to see if I could find any references to counterfeits in trials. Google Scholar and PubMed produced no useful results, but Wikipedia helpfully noted in its entry on counterfeit medications:

Counterfeit drugs have even been known to have been involved in clinical drug trials.[citation needed]


And on that point, I think we can agree: Citation needed. I hope the folks at Transcelerate will oblige.

Wednesday, July 31, 2013

Brazen Scofflaws? Are Pharma Companies Really Completely Ignoring FDAAA?

Results reporting requirements are pretty clear. Maybe critics should re-check their methods?

Ben Goldacre has rather famously described the clinical trial reporting requirements in the Food and Drug Administration Amendments Act of 2007 as a “fake fix” that was being thoroughly “ignored” by the pharmaceutical industry.

Pharma: breaking the law in broad daylight?
He makes this sweeping, unconditional proclamation about the industry and its regulators on the basis of a single study in the BMJ, blithely ignoring the facts that (a) the authors of the study admitted they could not adequately determine the number of studies that were meeting FDAAA requirements, and (b) a subsequent FDA review identified only 15 trials potentially out of compliance, out of a pool of thousands.


Despite the fact that the FDA, which has access to more data, says that only a tiny fraction of studies are potentially noncompliant, Goldacre's frequently repeated claim that the law is being ignored seems to have caught on in the general run of journalistic and academic discussions of FDAAA.

And now there appears to be additional support for the idea that a large percentage of studies are noncompliant with FDAAA results reporting requirements, in the form of a new study in the Journal of Clinical Oncology: "Public Availability of Results of Trials Assessing Cancer Drugs in the United States" by Thi-Anh-Hoa Nguyen, et al. In it, the authors report even lower levels of FDAAA compliance – a mere 20% of randomized clinical trials met the requirement of posting results on clinicaltrials.gov within one year.

Unsurprisingly, the JCO results were immediately picked up and circulated uncritically by the usual suspects.

I have to admit not knowing much about pure academic and cooperative group trial operations, but I do know a lot about industry-run trials – simply put, I find the data as presented in the JCO study impossible to believe. Everyone I work with in pharma trials is painfully aware of the regulatory environment they work in. FDAAA compliance is a given, a no-brainer: large internal legal and compliance teams are everywhere, ensuring that the letter of the law is followed in clinical trial conduct. If anything, pharma sponsors are twitchily over-compliant with these kinds of regulations (for example, most still adhere to 100% verification of source documentation – sending monitors to physically examine every single record of every single enrolled patient - even after the FDA explicitly told them they didn't have to).

I realize that’s anecdotal evidence, but when such behavior is so pervasive, it’s difficult to buy into data that says it’s not happening at all. The idea that all pharmaceutical companies are ignoring a highly visible law that’s been on the books for 6 years is extraordinary. Are they really so brazenly breaking the rules? And is FDA abetting them by disseminating incorrect information?

Those are extraordinary claims, and they would seem to require extraordinary evidence. The BMJ study had limitations that make its implications unclear at best. Is the JCO article any better?

Some Issues


In fact, there appear to be at least two major issues that may have seriously compromised the JCO findings:

1. Studies that were certified as being eligible for delayed reporting requirements, but do not have their certification date listed.

The study authors make what I believe to be a completely unwarranted assumption:

In trials for approval of new drugs or approval for a new indication, a certification [permitting delayed results reporting] should be posted within 1 year and should be publicly available.

It’s unclear to me why the authors think the certifications “should be” publicly available. In re-reading FDAAA section 801, I don’t see any reference to that being a requirement. I suppose I could have missed it, but the authors provide a citation to a page that clearly does not list any such requirement.

But their methodology assumes that all trials that have a certification will have it posted:

If no results were posted at ClinicalTrials.gov, we determined whether the responsible party submitted a certification. In this case, we recorded the date of submission of the certification to ClinicalTrials.gov.

If a sponsor gets approval from FDA to delay reporting (as is routine for all drugs that are either not approved for any indication, or being studied for a new indication – i.e., the overwhelming majority of pharma drug trials), but doesn't post that approval on the registry, the JCO authors deem that trial “noncompliant”. This is not warranted: the company may have simply chosen not to post the certification despite being entirely FDAAA compliant.

2. Studies that were previously certified for delayed reporting and subsequently reported results

It is hard to tell how the authors treated this rather substantial category of trials. If a trial was certified for delayed results reporting but subsequently published results, the certification date becomes difficult to find. Indeed, it appears that in cases where results were posted, the authors simply looked at the time from study completion to results posting. In effect, this would reclassify almost every one of these trials from compliant to noncompliant. Consider this example trial:


  • Phase 3 trial completes January 2010
  • Certification of delayed results obtained December 2010 (compliant)
  • FDA approval June 2013
  • Results posted July 2013 (compliant)


In looking at the JCO paper's methods section, it really appears that this trial would be classified as reporting results 3.5 years after completion, and therefore be considered noncompliant with FDAAA. In fact, this trial is entirely kosher, and would be extremely typical for many phase 2 and 3 trials in industry.
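To make the difference between the two readings concrete, here is a minimal sketch of how the example trial above would be classified under each interpretation. The exact dates, one-year windows, and variable names are my assumptions for illustration; this is not the JCO authors' actual methodology, just my reading of it.

```python
from datetime import date

ONE_YEAR = 365  # simplified one-year reporting window

# Dates from the example trial above (specific days are assumed)
completion = date(2010, 1, 31)       # Phase 3 trial completes January 2010
certification = date(2010, 12, 15)   # delayed-reporting certification, December 2010
approval = date(2013, 6, 15)         # FDA approval, June 2013
results_posted = date(2013, 7, 15)   # results posted, July 2013

# Naive reading (what the paper's methods appear to do):
# measure results posting against study completion only.
naive_compliant = (results_posted - completion).days <= ONE_YEAR

# Certification-aware reading: a certification filed within a year of
# completion defers the results deadline until after approval.
certified_on_time = (certification - completion).days <= ONE_YEAR
posted_promptly_after_approval = (results_posted - approval).days <= ONE_YEAR
aware_compliant = certified_on_time and posted_promptly_after_approval

print(naive_compliant)   # False - results came ~3.5 years after completion
print(aware_compliant)   # True - certified on time, posted promptly after approval
```

The same trial flips from noncompliant to compliant depending solely on whether the certification is taken into account, which is exactly why this methodological detail matters so much.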

Time for Some Data Transparency


The above two concerns may, in fact, be non-issues. The JCO paper's wording appears to imply both, but the methods description isn't terribly detailed and could easily be giving me the wrong impression.

However, if either or both of these issues are real, they may affect the vast majority of "noncompliant" trials in this study. Given that most clinical trials are looking either at new drugs or at new indications for existing drugs, these two issues may entirely explain the gap between the JCO study and the unequivocal FDA statements that contradict it.

I hope that, given the importance of transparency in research, the authors will be willing to post their data set publicly so that others can review their assumptions and independently verify their conclusions. It would be more than a bit ironic otherwise.

[Image credit: Shameless lawlessness via Flickr user willytronics.]


Thi-Anh-Hoa Nguyen, Agnes Dechartres, Soraya Belgherbi, and Philippe Ravaud (2013). Public Availability of Results of Trials Assessing Cancer Drugs in the United States. Journal of Clinical Oncology. DOI: 10.1200/JCO.2012.46.9577