Tuesday, September 3, 2013

Every Unhappy PREA Study is Unhappy in its Own Way

“Children are not small adults.” We invoke this saying, in a vague and hand-wavy manner, whenever we talk about the need to study drugs in pediatric populations. It’s an interesting idea, but it really cries out for further elaboration. If they’re not small adults, what are they? Are pediatric efficacy and safety totally uncorrelated with adult efficacy and safety? Or are children actually kind of like small adults in certain important ways?

Pediatric post-marketing studies have been completed for over 200 compounds in the years since BPCA (2002, offering a reward of 6 months of extra market exclusivity/patent life to sponsors that conduct requested pediatric studies) and PREA (2003, giving FDA the power to require pediatric studies) were enacted. At this point, I think it is fair to say, we ought to have some sort of comprehensive idea of how FDA views the risks associated with treating children with medications tested only on adults. Are adult-tested medications generally less efficacious in children? More? Is PK in children predictable from adult studies a reasonable percentage of the time, or does it need to be recharacterized with every drug?

Essentially, my point is that BPCA/PREA is a pretty crude tool: it is both too broad in setting what is basically a single standard for all new adult medications, and too vague as to what exactly that standard is.

In fact, a 2008 review published by FDA staffers and a 2012 Institute of Medicine report both point to one clear trend: in a significant majority of cases, the pediatric studies validated use of the adult medication in children, mostly with predictable dose and formulation adjustments. In the FDA review, 77 of 108 compounds (71%) had label changes that simply confirmed that use of the drug was acceptable in younger patients; in the IOM review, 27 of 45 (60%) did.
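To make that arithmetic concrete, and to show how much statistical uncertainty sits behind counts of this size, here is a minimal Python sketch (my own illustration, not part of either review) that recomputes the two proportions with rough normal-approximation 95% intervals:

from math import sqrt

def proportion_with_ci(successes: int, total: int, z: float = 1.96):
    # Binomial point estimate with a rough normal-approximation 95% CI.
    p = successes / total
    half_width = z * sqrt(p * (1 - p) / total)
    return p, max(0.0, p - half_width), min(1.0, p + half_width)

# Counts quoted above: the 2008 FDA review and the 2012 IOM report.
for source, k, n in [("FDA 2008 review", 77, 108), ("IOM 2012 report", 27, 45)]:
    p, low, high = proportion_with_ci(k, n)
    print(f"{source}: {k}/{n} = {p:.0%} (approx. 95% CI {low:.0%} to {high:.0%})")

The two intervals (roughly 63-80% and 46-74%) overlap substantially, so the two reviews can reasonably be read as telling the same story.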

So, it seems, most of the time, children are in fact not terribly unlike small adults.

But it’s also true that the percentage of studies showing a lack of efficacy, or bringing to light a new safety issue with the drug’s use in children, is well above zero. There is some extremely important information here.

To paraphrase John Wanamaker: we know that half our PREA studies are a waste of time; we just don’t know which half.

This would seem to me to be the highest regulatory priority – to be able to predict which new drugs will work as expected in children, and which may truly require further study. After a couple hundred compounds have gone through this process, we really ought to be better positioned to understand how certain pharmacological properties might increase or decrease the risk of a drug behaving differently than expected in children. Unfortunately, neither the FDA nor the IOM paper ventures any hypotheses about this – both end up providing long lists of illustrative examples, but no explanatory mechanisms that might enable us to engage in some predictive risk assessment.

While FDASIA did not advance PREA in terms of more rigorously defining the scope of pediatric requirements (or, better yet, requiring FDA to do so), it did address one lingering concern by requiring that FDA publish non-compliance letters for sponsors that do not meet their commitments. (PREA, like FDAAA, is a bit plagued by lingering suspicions that it’s widely ignored by industry.)

The first batch of letters and responses has been published, and it offers some early insights into the problems engendered by the nebulous nature of PREA and its implementation.

These examples, unfortunately, are still a bit opaque – we will need to wait on the FDA responses to the sponsors to see if some of the counter-claims are deemed credible. In addition, there are a few references to prior deferral requests, but the details of those requests (and the rationales for the subsequent FDA denials) do not appear to be publicly available. You can read FDA’s take on the new postings on their blog, or in the predictably excellent coverage from Alec Gaffney at RAPS.

Looking through the first 4 drugs publicly identified for noncompliance, the clear trend is that there is no trend. All these PREA requirements have been missed for dramatically different reasons.

Here’s a quick rundown of the drugs at issue – and, more interestingly, the sponsor responses:

1. Renvela - Genzyme (full response)

Genzyme appears to be laying responsibility for the delay firmly at FDA’s feet here, basically claiming that FDA continued to pile on new requirements over time:
Genzyme’s correspondence with the FDA regarding pediatric plans and design of this study began in 2006 and included a face to face meeting with FDA in May 2009. Genzyme submitted 8 revisions of the pediatric study design based on feedback from FDA including that received in 4 General Advice Letters. The Advice Letter dated February 17, 2011 contained further recommendations on the study design, yet still required the final clinical study report by December 31, 2011.
This highlights one of PREA’s real problems: the requirements laid out in most drug approval letters are not specific enough to fully dictate the study protocol. Instead, there is a lot of back and forth between the sponsor and FDA, and FDA does not always seem to fully account for its own contribution to delays in getting studies started.

2. Hectorol - Genzyme (full response)

In this one, Genzyme blames the FDA not for too much feedback, but for none at all:
On December 22, 2010, Genzyme submitted a revised pediatric development plan (Serial No. 212) which was intended to address FDA feedback and concerns that had been received to date. This submission included proposed protocol HECT05310. [...] At this time, Genzyme has not received feedback from the FDA on the protocol included in the December 22, 2010 submission.
If this is true, it is extremely embarrassing for FDA. Has the agency really provided no feedback in over 2.5 years, while still sending noncompliance letters to the sponsor? It will be very interesting to see an FDA response to this.

3. Cleviprex – The Medicines Company (full response)

This is the only case where the pharma company appears to be clearly trying to game the system a bit. According to their response:
Recognizing that, due to circumstances beyond the company’s control, the pediatric assessment could not be completed by the due date, The Medicines Company notified FDA in September 2010, and sought an extension. At that time, it was FDA’s view that no extensions were available. Following the passage of FDASIA, which specifically authorizes deferral extensions, the company again sought a deferral extension in December 2012. 
So, after hearing in 2010 that they had to move forward, the company promptly waited 2 years to ask for another extension. The letter seems to imply that, during that time, they did not try to move the study forward at all, preferring to roll the dice and wait for a change in the law to help them get out from under the obligation.

4. Twinject/Adrenaclick – Amedra (full response)

The details of this one are heavily redacted, but it may also be a bit of gamesmanship from the sponsor. After purchasing the injectors, Amedra asked for a deferral. When the deferral was denied, they simply asked for the requirements to be waived altogether. That sequence seems backwards, but perhaps there’s a good reason for it.

---

Clearly, 4 drugs is not a sufficient sample to say anything definitive, especially when we don’t have FDA’s take on the sponsor responses. However, it is interesting that these 4 cases seem to reflect the overall pattern with BPCA and PREA: results are scattershot and anecdotal. We could all clearly benefit from a more systematic assessment of why some of these trials work and others don’t, with a goal of someday soon abandoning one-size-fits-all regulation and focusing resources where they will do the most good.

Wednesday, June 19, 2013

Pediatric Trial Enrollment (Shameless DIA Self-Promotion, Part 1)


[Fair Warning: I have generally tried to keep this blog separate from my corporate existence, but am making an exception for two quick posts about the upcoming DIA 2013 Annual Meeting.]

Improving Enrollment in Pediatric Clinical Trials


Logistically, ethically, and emotionally, involving children in medical research is greatly different from conducting the same research in adults. Some of the toughest clinical trials I've worked on, across a number of therapeutic areas, have been pediatric ones. They challenge you to come up with different approaches to introducing and explaining clinical research – approaches that have to work for doctors, kids, and parents simultaneously.

On Thursday June 27, Don Sickler, one of my team members, will be chairing a session titled “Parents as Partners: Engaging Caregivers for Pediatric Trials”. It should be a good session.

Joining Don are 2 people I've had the pleasure of working with in the past. Both of them combine strong knowledge of clinical research with a massive amount of positive energy and enthusiasm (no doubt a big part of what makes them successful).

However, they also differ in one key aspect: what they work on. One of them – Tristen Moors from Hyperion Therapeutics – works on an ultra-rare condition, Urea Cycle Disorder, a disease affecting only a few hundred children every year. On the other hand, Dr. Ann Edmunds is an ENT working in a thriving private practice. I met her because she was consistently the top enroller in a number of trials relating to tympanostomy tube insertion. Surgery to place “t-tubes” is one of the most common and routine outpatient surgeries there is, with an estimated half million kids getting tubes each year.

Each presents a special challenge: for rare conditions, how do you even find enough patients? For routine procedures, how do you convince parents to complicate their (and their children’s) lives by signing up for a multi-visit, multi-procedure trial?

Ann and Tristen have spent a lot of time tackling these issues, and should have some great advice to give.

For more information on the session, here’s Don’s posting on our news blog.

Tuesday, July 31, 2012

Clouding the Debate on Clinical Trials: Pediatric Edition

I would like to propose a rule for clinical trial benchmarks. This rule may appear so blindingly obvious that I run the risk of seeming simple-minded and naïve for even bringing it up.

The rule is this: if you’re going to introduce a benchmark for clinical trial design or conduct, explain its value.

That’s it. Just a paragraph explaining why you’ve chosen to measure what you’re measuring. Extra credit if you compare it to other benchmarks you could have used, or consider the limitations of your new metric.

I would feel bad for bringing this up, were it not for two recent articles in major publications that completely fail to live up to this standard. I’ll cover one today and one tomorrow.

The first is a recent article in Pediatrics, Pediatric Versus Adult Drug Trials for Conditions With High Pediatric Disease Burden, which has received a fair bit of attention in the industry – mostly due to Reuters uncritically recycling the authors’ press release.

It’s worth noting that the claim made in the release title, "Drug safety and efficacy in children is rarely addressed in drug trials for major diseases", is not at all supported by any data in the study itself. However, I suppose I can live with misleading PR. What is frustrating is the inadequacy of the measures the authors use in the actual study, and the complete lack of discussion about them.

To benchmark where pediatric drug research should be, they use the proportion of total "burden of disease" borne by children. Using WHO estimates, they look at the ratio of burden (measured, essentially, in years of total disability) between children and adults. This burden is further divided into high-income countries and low/middle-income countries.
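In other words, the benchmark boils down to a simple ratio. Here is a minimal sketch of that computation (my own illustration, with entirely hypothetical DALY figures; the study itself uses WHO estimates, split by country income level):

def pediatric_burden_share(child_dalys: float, adult_dalys: float) -> float:
    # Fraction of a condition's total burden (in DALYs) borne by children.
    return child_dalys / (child_dalys + adult_dalys)

# Hypothetical DALY figures (millions), for illustration only.
conditions = {
    "condition A": (45.0, 5.0),   # burden falls mostly on children
    "condition B": (2.0, 48.0),   # burden falls mostly on adults
}
for name, (child, adult) in conditions.items():
    print(f"{name}: {pediatric_burden_share(child, adult):.0%} of burden is pediatric")

The authors then compare this share against the share of trials that enroll children; the problem, as discussed below, is what that comparison leaves out.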

This has some surface plausibility, but presents a host of issues. Simply looking at the relative prevalence of a condition does not really give us any insights into what we need to study about treatment. For example: number 2 on the list for middle/low-income diseases is diarrheal illness, where WHO lists the burden of disease as 90% pediatric. There is no question that diarrheal diseases take a terrible toll on children in developing countries. We absolutely need to focus resources on improving prevention and treatment: what we do not particularly need is more clinical trials. As the very first bullet on the WHO fact sheet points out, diarrheal diseases are preventable and treatable. Prevention is mostly about improving the quality of water and food supplies – this is vitally important stuff, but it has nothing to do with pharmaceutical R&D.

In the US, the NIH’s National Institute of Child Health and Human Development (NICHD) has a rigorous process for identifying and prioritizing needs for pediatric drug development, as mandated by the BPCA. It is worth noting that only 2 of the top 5 diseases in the Pediatrics article make the cut among the 41 highest-priority areas on the NICHD’s list for 2011.

(I don’t think the numbers as calculated by the authors are even convincing on their own terms: 3 of the 5 "high burden" diseases in wealthy countries – bipolar disorder, depression, and schizophrenia – are extremely rare in very young children, and only make this list because of their increasing incidence in adolescence. If our objective is to focus on how these drugs may work differently in developing children, then why wouldn’t we put greater emphasis on the youngest cohorts?)

Of course, just because a new benchmark is at odds with other benchmarks doesn’t necessarily mean that it’s wrong. But it does mean that the benchmark requires some rigorous vetting before it’s used. The authors make no attempt at explaining why we should use their metric, except to say it’s "apt". The only support provided is a pair of footnotes – one of which, ironically, is to this article from 1999 that contains a direct warning against their approach:
Our data demonstrate how policy makers could be misled by using a single measure of the burden of disease, because the ranking of diseases according to their burden varies with the different measures used.
If we’re going to make any progress in solving the problems in drug development – and I think we have a number of problems that need solving – we have got to start raising our standards for our own metrics.

Are we not putting enough resources into pediatric research, or have we over-incentivized risky experimentation on a vulnerable population? This is a critically important question in desperate need of more data and thoughtful analysis. Unfortunately, this study adds more noise than insight to the debate.

In a couple weeks, I’ll cover the allegations about too many trials being too small. [Update: "tomorrow" took a little longer than expected. The follow-up post is here.]

[Note: the Pediatrics article also uses a second metric, "Percentage of Trials that Are Pediatric", as a proxy for the amount of research effort being done. For space reasons, I’m not going to go into that one, but it’s every bit as unhelpful as the pediatric burden metric.]

Bourgeois FT, Murthy S, Pinto C, Olson KL, Ioannidis JP, & Mandl KD (2012). Pediatric Versus Adult Drug Trials for Conditions With High Pediatric Disease Burden. Pediatrics. PMID: 22826574