Monday, September 16, 2013

Questionable Enrollment Math at the UK's NIHR

There has been considerable noise coming out of the UK lately about successes in clinical trial enrollment.

First, a couple of months ago came the rather dramatic announcement that clinical trial participation in the UK had "tripled over the last 6 years". That announcement, by the chief executive of the National Institute for Health Research's Clinical Research Network, was quickly and uncritically picked up by the media.

[Image caption: Sweet creature of bombast: is Sir John writing press releases for the NIHR?]

That immediately caught my attention. In large, global trials, most pharmaceutical companies I've worked with can do a reasonable job of predicting accrual levels in a given country. I like to think that if participation rates in any given country had jumped that dramatically, I'd have heard something.

(To give an example from a quite-typical study I worked on a few years ago: UK sites enrolled slightly below the global average, while the highest-enrolling countries were about 2.5 times as fast. A 3-fold increase in accruals would therefore have catapulted the UK from below average to the fastest-enrolling country in the world.)

Further inquiry, however, failed to turn up any evidence that the reported tripling actually corresponded to more human beings enrolled in clinical trials. Instead, there is some reason to believe that all we witnessed was increased reporting of trial participation numbers.

Now we have a new source of wonder, and a new giant multiplier coming out of the UK. As the Director of the NIHR's Mental Health Research Network, Til Wykes, put it in her blog coverage of her own paper:
Our research on the largest database of UK mental health studies shows that involving just one or two patients in the study team means studies are 4 times more likely to recruit successfully.
Again, amazing! And not just a tripling – a quadrupling!

Understand: I spend a lot of my time trying to convince study teams to take a more patient-focused approach to clinical trial design and execution. I desperately want to believe this study, and I would love to have hard evidence to bring to my clients.

At first glance, the data set seems robust. From the King's College press release:
Published in the British Journal of Psychiatry, the researchers analysed 374 studies registered with the Mental Health Research Network (MHRN).
Studies which included collaboration with service users in designing or running the trial were 1.63 times more likely to recruit to target than studies which only consulted service users.  Studies which involved more partnerships - a higher level of Patient and Public Involvement (PPI) - were 4.12 times more likely to recruit to target.
But here the first crack appears. It's clear from the paper that the analysis of recruitment success was not based on 374 studies, but rather a much smaller subset of 124 studies. That's not mentioned in either of the above-linked articles.

And at this point, we have to stop, set aside our enthusiasm, and read the full paper. Once we do, critical doubts begin to spring up pretty much everywhere.

First and foremost: I don’t know any nice way to say this, but the "4 times more likely" line is, quite clearly, a fiction. What is reported in the paper is a 4.12 odds ratio between "low involvement" studies and "high involvement" studies (more on those terms in just a bit).  Odds ratios are often used in reporting differences between groups, but they are unequivocally not the same as "times more likely than".

This is not a technical statistical quibble. The authors unfortunately don’t provide the actual success rates for different kinds of studies, but here is a quick example that, given other data they present, is probably reasonably close:

  • A Studies: 16 successful out of 20 
    • Probability of success: 80% 
    • Odds of success: 4 to 1
  • B Studies: 40 successful out of 80
    • Probability of success: 50%
    • Odds of success: 1 to 1

From the above, it’s reasonable to conclude that A studies are 60% more likely to be successful than B studies (the A studies are 1.6 times as likely to succeed). However, the odds ratio is 4.0, similar to the difference in the paper. It makes no sense to say that A studies are 4 times more likely to succeed than B studies.
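
To make the distinction concrete, here is the same arithmetic in a few lines of Python (using the hypothetical counts from the example above, not the paper's actual data):

    # Hypothetical counts from the example above -- not the paper's actual data
    a_success, a_total = 16, 20    # "A" studies: 16 of 20 recruit to target
    b_success, b_total = 40, 80    # "B" studies: 40 of 80 recruit to target

    p_a = a_success / a_total      # probability of success: 0.80
    p_b = b_success / b_total      # probability of success: 0.50

    odds_a = p_a / (1 - p_a)       # odds of success: 4.0 (4 to 1)
    odds_b = p_b / (1 - p_b)       # odds of success: 1.0 (1 to 1)

    odds_ratio = odds_a / odds_b   # 4.0 -- the kind of figure the paper reports
    relative_risk = p_a / p_b      # 1.6 -- what "times as likely" actually measures

    print(f"odds ratio = {odds_ratio:.1f}, relative risk = {relative_risk:.1f}")

The two measures only converge when success rates are very low; at rates like these, reading a 4.1 odds ratio as "4 times more likely" badly overstates the effect.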

This is elementary stuff. I’m confident that everyone involved in the conduct and analysis of the MHRN paper knows this already. So why would Dr Wykes write this? I don’t know; it's baffling. Maybe someone with more knowledge of the politics of British medicine can enlighten me.

If a pharmaceutical company had promoted a drug with this math, the warning letters and fines would be flying in the door fast. And rightly so. But if a government leader says it, it just gets recycled verbatim.

The other part of Dr Wykes's statement is almost equally confusing. She claims that the enrollment benefit occurs when "involving just one or two patients in the study team". However, involving one or two patients would seem to correspond to either the lowest ("patient consultation") or the middle level of reported patient involvement (“researcher initiated collaboration”). In fact, the "high involvement" categories that are supposed to be associated with enrollment success are studies that were either fully designed by patients, or were initiated by patients and researchers equally. So, if there is truly a causal relationship at work here, improving enrollment would not be merely a function of adding a patient or two to the conversation.

There are a number of other frustrating aspects of this study as well. It doesn't actually measure patient involvement in any specific research program, but uses just 3 broad categories (which the researchers specified at the beginning of each study). It uses an arbitrary and undocumented 17-point scale to measure "study complexity", which collapses many critical factors into a single number and quite likely underweights several of them. The enrollment analysis excluded 11 studies because they weren't adequate for a factor that was later deemed non-significant. And probably the most frustrating facet of the paper is that the authors share absolutely no descriptive data about the studies involved in the enrollment analysis, so it would be impossible to replicate its methods or verify its analysis. Do the authors believe that "Public Involvement" is only good when it's not focused on their own work?

However, my feelings about the study and paper are an insignificant fraction of the frustration I feel about the public portrayal of the data by people who should clearly know better. After all, limited evidence is still evidence, and every study can add something to our knowledge. But the public misrepresentation of the evidence by leaders in the area can only do us harm: it has the potential to actively distort research priorities and funding.

Why This Matters

We all seem to agree that research is too slow. Low clinical trial enrollment wastes time, money, and the health of patients who need better treatment options.

However, what's also clear is that we lack reliable evidence on what activities enable us to accelerate the pace of enrollment without sacrificing quality. If we are serious about improving clinical trial accrual, we owe it to our patients to demand robust evidence for what works and what doesn't. Relying on weak evidence that we've already solved the problem ("we've tripled enrollment!") or have a method to magically solve it ("PPI quadrupled enrollment!") will cause us to divert significant time, energy, and human health into areas that are politically favored but less than certain to produce benefit. And the overhyping of those results by research leadership compounds that problem substantially. NIHR leadership should reconsider its approach to public discussion of its research, and practice what it preaches: critical assessment of the data.

[Update Sept. 20: The authors of the study have posted a lengthy comment below. My follow-up is here.]
 
[Image via Flickr user Elliot Brown.]


ResearchBlogging.org: Ennis L & Wykes T (2013). Impact of patient involvement in mental health research: longitudinal study. The British Journal of Psychiatry. PMID: 24029538


5 comments:

Simon Denegri said...

It is really interesting and helpful to have this perspective, Paul. I am one of those who has brazenly pushed this evidence as, for all its faults, it does also put some harder numbers on the impact of PPI.
Have you written to Til to get her perspective on the points you have made? How would you suggest we build a more robust evidence base?
Your support would be most welcome by this and others of my colleagues who find research institutions and their staff quite resistant to public involvement.

Paul Ivsin said...

Simon,

Thanks for your comments. I'm all for getting hard numbers, but I think the hard numbers need to go hand-in-hand with hard analysis. This study is limited, but it's still a nice first step - what I object to is the public representation of the study results.

You say you've "brazenly pushed" this evidence. Does that mean that you believe that involving a patient on the study team will make a study 4 times more likely to enroll successfully? What's your take on the issues raised?

I have no doubt that many sites are very resistant to patient involvement in their study designs. However, I worry that their resistance will only harden if they feel that data is being juked to support an opposing position.

Thanks,
Paul

Til Wykes and Liam Ennis said...

There were a number of points made in your blog, and the title of "questionable maths" was what caught our eye, so we reply with facts and provide context.

Firstly, this is a UK study, and the vast majority of UK clinical trials take place in the NHS. It is about patient involvement in mental health studies - an area where recruitment is difficult because of stigma and discrimination.

1. Tripling of studies - You dispute NIHR figures recorded on a national database and support your claim with a lone anecdote - hardly data that provides confidence. The reason we can improve recruitment is that NIHR has a Clinical Research Network which provides extra staff, within the NHS, to support high quality clinical studies and has improved recruitment success.
2. Large database: We have the largest database of detailed study information and patient involvement data - I have trawled the world for a bigger one, and NIMH say there certainly isn't one in the USA. This means there are few places where patient impact can actually be measured.
3. Number of studies: The database has 374 studies which showed among other results that service user involvement increased over time probably following changes by funders e.g. NIHR requests information in the grant proposal on how service users have been and will be involved - one of the few national funders to take this issue seriously.
4. Analysis of patient involvement involves the 124 studies that have completed - you cannot analyse recruitment success until then. The complexity measure was developed following a Delphi exercise with clinicians, clinical academics and study delivery staff to include variables likely to be barriers to recruitment. It predicts delivery difficulty (meeting recruitment & delivery staff time). But of course you know all that, as it was in the paper.
5. All studies funded by NIHR partners were included - we only excluded studies funded without peer review, i.e. not won competitively. For the involvement analysis we excluded industry studies, because we were unable to contact end users, and studies whose inclusion would have compromised the reliability of our analysis due to small group sizes.

I am sure you are aware of the high standing of the journal and its robust peer review. We understand that our results must withstand the scrutiny of other scientists, but many of your comments were unwarranted. This is the first study in the world to investigate the impact of patient involvement. No database apart from the one held by the NIHR Mental Health Research Network is available to test - we only wish there were.

Your comment on the media coverage of odds ratios is an issue that scientists need to overcome (there is even a section in Wikipedia). You point out the base rate issue, but of course in a logistic regression you also take into account all the other variables that may impinge on the outcome prior to assessing the effects of our key variable, patient involvement - as we did - and showed that the odds ratio is 4.12. So no dispute about that. We have followed up our analysis to produce a statement that the public will understand, using the following equations:
Model predicted recruitment, lowest level of involvement:
exp(2.489 - 0.193*8.8 - 1.477) / (1 + exp(2.489 - 0.193*8.8 - 1.477)) = 0.33
Model predicted recruitment, highest level of involvement:
exp(2.489 - 0.193*8.8 - 1.477 + 1.415) / (1 + exp(2.489 - 0.193*8.8 - 1.477 + 1.415)) = 0.67
For a study of typical complexity without a follow-up, increasing involvement from the lowest to the highest level increased recruitment from 33% to 67%, i.e. a doubling. This is important, and it is the first time that an impact of patient involvement on study success has been shown. Luckily in the UK we have a network that now supports clinicians to be involved, and a system for ensuring study feasibility. The addition of patient involvement is the additional bonus that allows recruitment to increase over time, so cutting down the time for treatments to get to patients.
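
For anyone who wants to check the arithmetic, the same calculation in a few lines of Python (a minimal sketch using the coefficients quoted above; the function name is ours):

    import math

    def predicted_recruitment(linear_predictor):
        # Inverse logit: convert the model's linear predictor to a probability
        return math.exp(linear_predictor) / (1 + math.exp(linear_predictor))

    # Coefficients as quoted above: intercept 2.489, complexity -0.193
    # (evaluated at the typical score of 8.8), no follow-up -1.477,
    # highest level of involvement +1.415
    base = 2.489 - 0.193 * 8.8 - 1.477

    low = predicted_recruitment(base)           # ~0.33
    high = predicted_recruitment(base + 1.415)  # ~0.67
    print(round(low, 2), round(high, 2))        # 0.33 0.67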

Paul Ivsin said...

I thank the authors for taking the time to provide a lengthy response. I will consider the points raised, and hope to have a follow-up post very soon.

However, I will highlight one key sentence now. You write:

"Your comment on the media coverage of odds ratios is an issue that scientists need to overcome (there is even a section in Wikipedia)."

It's highly unfair to blame "media coverage" for the use of an odds ratio as if it were a relative risk ratio. In fact, the first instance of "4 times more likely" appears in Dr Wykes's own blog post. It's repeated in the KCL press release, so you yourselves appear to have been the source of the error.

If I have misunderstood this somehow, please let me know.

Thanks,
Paul

Adam Jacobs said...

Interesting post, Paul.

As I see it, there are two quite separate issues here.

1. Has the number of patients participating in clinical trials tripled in the last 6 years?

2. Does involving patients in the design of research make studies 4 times more likely to hit their recruitment targets?

Let's start with No 1. I share your skepticism that a tripling of participation is real. I've had a look at the links you cite, and as far as I can tell what the figures are showing is a tripling of patients recruited into studies within the NIHR Clinical Research Network Portfolio. I haven't looked in enough detail to assess whether the claim that that figure really has tripled is robust, but if we accept for the sake of argument that it is, then that is absolutely not the same thing as saying that the number of patients recruited into clinical research in the UK has tripled. Some studies in the UK are part of the NIHR CRN, and some aren't. Looking at the figures for NIHR CRN studies alone tells us nothing about the total: if the number of NIHR CRN studies is increasing, it could be because more studies are being done overall, or it could be because a greater proportion of the total studies are being done within the NIHR CRN.

There is nothing in the information provided that I can see that allows us to distinguish between those possibilities. So the claim that the number of patients overall has trebled appears to lack evidence.

Turning to the second issue, that's a bit of a no-brainer. Of course an odds ratio of 4 is not the same thing as "4 times more likely", as you have already explained very clearly.

It's also worth noting, of course, that the odds ratio of 4 is not derived from randomised data, so we can't conclude that higher levels of patient involvement cause better recruitment. All we can conclude is that they are associated with better recruitment, but we can't rule out that this may be because of some confounding factor. Note also that the confidence interval around that odds ratio of 4 is quite wide. They don't report it in the paper, but based on the standard error for the beta coefficient that they report, I make the confidence interval 1.1 to 15. A pretty wide range.
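
For anyone who wants to reproduce that back-of-the-envelope calculation, something like this works (the standard error of roughly 0.66 is my approximation, working backwards from the interval; the paper reports the actual value alongside the beta coefficient):

    import math

    beta = 1.415  # log odds ratio for high involvement, per the paper
    se = 0.66     # approximate standard error of beta (my inference)

    or_point = math.exp(beta)             # ~4.12, the reported odds ratio
    ci_low = math.exp(beta - 1.96 * se)   # ~1.1
    ci_high = math.exp(beta + 1.96 * se)  # ~15

    print(f"OR = {or_point:.2f}, 95% CI {ci_low:.1f} to {ci_high:.1f}")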

That said, it seems entirely plausible that better patient involvement really does cause better recruitment, so I don't want to be too skeptical about that result.

It's also worth noting that greater involvement of patients in the design of research is something of a hot topic for NIHR at the moment. Here the causal relationships get really complicated! Is it a hot topic because of mounting research evidence showing its benefits, or is the research showing its benefits being talked up inappropriately because it's a hot topic?

I honestly don't know the answer to that question, but I suspect it's a bit of both.