Wednesday, July 31, 2013

Brazen Scofflaws? Are Pharma Companies Really Completely Ignoring FDAAA?

Results reporting requirements are pretty clear. Maybe critics should re-check their methods?

Ben Goldacre has rather famously described the clinical trial reporting requirements in the Food and Drug Administration Amendments Act of 2007 as a “fake fix” that was being thoroughly “ignored” by the pharmaceutical industry.

Pharma: breaking the law in broad daylight?
He makes this sweeping, unconditional proclamation about the industry and its regulators on the basis of a single study in the BMJ, blithely ignoring the facts that a) the authors of the study admitted they could not adequately determine the number of studies that were meeting FDAAA requirements, and b) a subsequent FDA review identified only 15 trials potentially out of compliance, out of a pool of thousands.


Despite the fact that the FDA, which has access to more data, says that only a tiny fraction of studies are potentially noncompliant, Goldacre's frequently repeated claim that the law is being ignored seems to have caught on in the general run of journalistic and academic discussions about FDAAA.

And now there appears to be additional support for the idea that a large percentage of studies are noncompliant with FDAAA results reporting requirements, in the form of a new study in the Journal of Clinical Oncology: "Public Availability of Results of Trials Assessing Cancer Drugs in the United States" by Thi-Anh-Hoa Nguyen et al. In it, the authors report even lower levels of FDAAA compliance – a mere 20% of randomized clinical trials met the requirement of posting results on ClinicalTrials.gov within one year.

Unsurprisingly, the JCO results were immediately picked up and circulated uncritically by the usual suspects.

I have to admit not knowing much about pure academic and cooperative group trial operations, but I do know a lot about industry-run trials – simply put, I find the data as presented in the JCO study impossible to believe. Everyone I work with in pharma trials is painfully aware of the regulatory environment they work in. FDAAA compliance is a given, a no-brainer: large internal legal and compliance teams are everywhere, ensuring that the letter of the law is followed in clinical trial conduct. If anything, pharma sponsors are twitchily over-compliant with these kinds of regulations (for example, most still adhere to 100% verification of source documentation – sending monitors to physically examine every single record of every single enrolled patient – even after the FDA explicitly told them they didn't have to).

I realize that’s anecdotal evidence, but when such behavior is so pervasive, it’s difficult to buy into data that says it’s not happening at all. The idea that all pharmaceutical companies are ignoring a highly visible law that’s been on the books for 6 years is extraordinary. Are they really so brazenly breaking the rules? And is FDA abetting them by disseminating incorrect information?

Those are extraordinary claims, and would seem to require extraordinary evidence. The BMJ study had clear limitations that make its implications entirely unclear. Is the JCO article any better?

Some Issues


In fact, there appear to be at least two major issues that may have seriously compromised the JCO findings:

1. Studies that were certified as being eligible for delayed reporting requirements, but do not have their certification date listed.

The study authors make what I believe to be a completely unwarranted assumption:

In trials for approval of new drugs or approval for a new indication, a certification [permitting delayed results reporting] should be posted within 1 year and should be publicly available.

It’s unclear to me why the authors think the certifications “should be” publicly available. In re-reading FDAAA section 801, I don’t see any reference to that being a requirement. I suppose I could have missed it, but the authors provide a citation to a page that clearly does not list any such requirement.

But their methodology assumes that all trials that have a certification will have it posted:

If no results were posted at ClinicalTrials.gov, we determined whether the responsible party submitted a certification. In this case, we recorded the date of submission of the certification to ClinicalTrials.gov.

If a sponsor gets approval from FDA to delay reporting (as is routine for all drugs that are either not yet approved for any indication or are being studied for a new indication – i.e., the overwhelming majority of pharma drug trials) but doesn't post that approval on the registry, the JCO authors deem the trial "noncompliant". That conclusion is not warranted: the company may simply have chosen not to post the certification despite being entirely FDAAA compliant.
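To make the distinction concrete, here is a minimal sketch in Python of the two classification rules as I read them. Every field and function name here is a hypothetical illustration – none of it comes from the paper or from ClinicalTrials.gov:

```python
from dataclasses import dataclass
from datetime import date, timedelta
from typing import Optional

ONE_YEAR = timedelta(days=365)

@dataclass
class Trial:
    completion_date: date
    results_posted: Optional[date] = None        # results visible on the registry
    certification_posted: Optional[date] = None  # certification visible on the registry
    certification_granted: bool = False          # FDA actually granted delayed reporting

def compliant_per_paper(t: Trial) -> bool:
    """The rule the JCO paper appears to apply: compliant only if results
    OR a *posted* certification appear within one year of completion."""
    if t.results_posted and t.results_posted - t.completion_date <= ONE_YEAR:
        return True
    return bool(t.certification_posted and
                t.certification_posted - t.completion_date <= ONE_YEAR)

def compliant_in_practice(t: Trial) -> bool:
    """Same rule, but also counting a granted-but-unposted certification,
    since posting it does not appear to be required by FDAAA section 801."""
    return compliant_per_paper(t) or t.certification_granted

# A sponsor holding a valid certification it never posted to the registry:
trial = Trial(completion_date=date(2010, 1, 31), certification_granted=True)
print(compliant_per_paper(trial))    # False -> counted as "noncompliant"
print(compliant_in_practice(trial))  # True  -> actually FDAAA compliant
```

If the paper's rule works anything like `compliant_per_paper`, every trial in that last category gets flagged, and the reported noncompliance rate inflates accordingly.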

2. Studies that were previously certified for delayed reporting and subsequently reported results

It is hard to tell how the authors treated this rather substantial category of trials. If a trial was certified for delayed results reporting but then subsequently published results, the certification date becomes difficult to find. Indeed, it appears that in cases where results were posted, the authors simply looked at the time from study completion to results posting. In effect, this would re-classify almost every single one of these trials from compliant to noncompliant. Consider this example trial:


  • Phase 3 trial completes January 2010
  • Certification of delayed results obtained December 2010 (compliant)
  • FDA approval June 2013
  • Results posted July 2013 (compliant)


In looking at the JCO paper's methods section, it really appears that this trial would be classified as reporting results 3.5 years after completion, and therefore be considered noncompliant with FDAAA. In fact, this trial is entirely kosher, and would be extremely typical for many phase 2 and 3 trials in industry.
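Running the dates from this timeline through the same kind of check shows the misclassification directly (the exact days below are illustrative, not from any real trial):

```python
from datetime import date

completion = date(2010, 1, 31)   # phase 3 trial completes
certified = date(2010, 12, 15)   # delayed-reporting certification obtained
results = date(2013, 7, 15)      # results posted after June 2013 FDA approval

# The measure the paper appears to use: completion to results posting.
print((results - completion).days / 365.25)   # ~3.45 -> "3.5 years, noncompliant"

# Honoring the certification: was it obtained within a year of completion?
print((certified - completion).days <= 365)   # True -> compliant at every step
```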

Time for Some Data Transparency


The above two concerns may, in fact, be non-issues. Both certainly appear to be implied by the wording of the JCO paper, but that wording isn't terribly detailed and could easily be giving me the wrong impression.

However, if either or both of these issues are real, they may affect the vast majority of the "noncompliant" trials in this study. Given that most clinical trials look either at new drugs or at new indications for existing drugs, these two issues may entirely explain the gap between the JCO study and the unequivocal FDA statements that contradict it.

I hope that, given the importance of transparency in research, the authors will be willing to post their data set publicly so that others can review their assumptions and independently verify their conclusions. It would be more than a bit ironic otherwise.

[Image credit: Shameless lawlessness via Flickr user willytronics.]


Thi-Anh-Hoa Nguyen, Agnes Dechartres, Soraya Belgherbi, and Philippe Ravaud (2013). Public Availability of Results of Trials Assessing Cancer Drugs in the United States. Journal of Clinical Oncology. DOI: 10.1200/JCO.2012.46.9577

Friday, June 21, 2013

Preview of Enrollment Analytics: Moving Beyond the Funnel (Shameless DIA Self-Promotion, Part 2)


Are we looking at our enrollment data in the right way?


I will be chairing a session on Tuesday on this topic, joined by a couple of great presenters (Diana Chung from Gilead and Gretchen Goller from PRA).

Here's a short preview of the session:

[Embedded session preview]
Hope to see you there. It should be a great discussion.

Session Details:

June 25, 1:45 PM – 3:15 PM

  • Session Number: 241
  • Room Number: 205B


1. Enrollment Analytics: Moving Beyond the Funnel
Paul Ivsin
VP, Consulting Director
CAHG Clinical Trials

2. Use of Analytics for Operational Planning
Diana Chung, MSc
Associate Director, Clinical Operations
Gilead

3. Using Enrollment Data to Communicate Effectively with Sites
Gretchen Goller, MA
Senior Director, Patient Access and Retention Services
PRA


Wednesday, June 19, 2013

Pediatric Trial Enrollment (Shameless DIA Self-Promotion, Part 1)


[Fair Warning: I have generally tried to keep this blog separate from my corporate existence, but am making an exception for two quick posts about the upcoming DIA 2013 Annual Meeting.]

Improving Enrollment in Pediatric Clinical Trials


Logistically, ethically, and emotionally, involving children in medical research is very different from conducting the same research in adults. Some of the toughest clinical trials I've worked on, across a number of therapeutic areas, have been pediatric ones. They challenge you to come up with different approaches to introducing and explaining clinical research – approaches that have to work for doctors, kids, and parents simultaneously.

On Thursday, June 27, Don Sickler, one of my team members, will be chairing a session titled “Parents as Partners: Engaging Caregivers for Pediatric Trials”. It should be a good session.

Joining Don are two people I've had the pleasure of working with in the past. Both of them combine strong knowledge of clinical research with a massive amount of positive energy and enthusiasm (no doubt a big part of what makes them successful).

However, they also differ in one key aspect: what they work on. One of them – Tristen Moors from Hyperion Therapeutics – works on an ultra-rare condition, Urea Cycle Disorder, a disease affecting only a few hundred children every year. On the other hand, Dr. Ann Edmunds is an ENT working in a thriving private practice. I met her because she was consistently the top enroller in a number of trials relating to tympanostomy tube insertion. Surgery to place “t-tubes” is one of the most common and routine outpatient surgeries there is, with an estimated half million kids getting tubes each year.

Each presents a special challenge: for rare conditions, how do you even find enough patients? For routine procedures, how do you convince parents to complicate their (and their children’s) lives by signing up for a multi-visit, multi-procedure trial?

Ann and Tristen have spent a lot of time tackling these issues, and should have some great advice to give.

For more information on the session, here’s Don’s posting on our news blog.