Thursday, October 11, 2012

TransCelerate and CDISC: The Relationship Explained


Updating my post from last month about the launch announcement for TransCelerate BioPharma, a nonprofit entity funded by 10 large pharmaceutical companies to “bring new medicines to patients faster”: one of the areas I had some concern about was the new company's move into the “development of clinical data standards”.

How about we transcelerate this website a bit?
Some much-needed clarification has come by way of Wayne Kubick, the CTO of CDISC. In an article in Applied Clinical Trials, he lays out the relationship in a bit more detail:
TransCelerate has been working closely with CDISC for several months to see how they can help us move more quickly in the development of therapeutic area data standards.  Specifically, they are working to provide CDISC with knowledgeable staff to help us plan for and develop data standards for more than 55 therapeutic areas over the next five years.
And then again:
But the important thing to realize is that TransCelerate intends to help CDISC achieve its mission to develop therapeutic area data standards more rapidly by giving us greater access to skilled volunteers to contribute to standards development projects.   
So we have clarification on at least one point: TransCelerate will donate some level of additional skilled manpower to CDISC-led initiatives.

That’s a good thing, I assume. Kubick doesn't mention it, but I would venture to guess that “more skilled volunteers” is at or near the top of CDISC's wish list.

But it raises the question: why TransCelerate? Couldn't the 10 member companies have contributed this employee time already? Did we really need a new entity to organize a group of fresh volunteers? And if we did somehow need a coordinating entity to make this happen, why not use an existing group – one with, say, a broader level of support across the industry, such as PhRMA?

The promise of a group like TransCelerate is intriguing. The executional challenges, however, are enormous: I think it will be under constant pressure to move away from meaningful but very difficult work towards supporting more symbolic and easy victories.

Tuesday, October 2, 2012

Decluttering the Dashboard


It’s Gresham’s Law for clinical trial metrics: Bad data drives out good. Here are 4 steps you can take to fix it.

Many years ago, when I was working in the world of technology startups, one “serial entrepreneur” told me about a technique he had used when raising investor capital for his new database firm:  since his company marketed itself as having cutting-edge computing algorithms, he went out and purchased a bunch of small, flashing LED lights and simply glued them onto the company’s servers.  When the venture capital folks came out for due diligence meetings, they were provided a dramatic view into the darkened server room, brilliantly lit up by the servers’ energetic flashing. It was the highlight of the visit, and got everyone’s investment enthusiasm ratcheted up a notch.

The clinical trials dashboard is a candy store: bright, vivid, attractive ... and devoid of nutritional value.
I was reminded of that story at a recent industry conference, when I naively walked into a seminar on “advanced analytics” only to find I was being treated to an extended product demo. In this case, a representative from one of the large CROs was showing off the dashboard for their clinical trials study management system.

And an impressive system it was, chock full of bubble charts and histograms and sliders.  For a moment, I felt like a kid in a candy store.  So much great stuff ... how to choose?

Then the presenter told a story: on a recent trial, a data manager in Italy, reviewing the analytics dashboard, alerted the study team to the fact that there was an enrollment imbalance in Japan, with one site enrolling all of the patients in that country.  This was presented as a success story for the system: it linked up disparate teams across the globe to improve study quality.

But to me, this was a small horror story: the dashboard had gotten so cluttered that key performance issues were being completely missed by the core operations team. The fact that a distant data manager had caught the issue was a lucky break, certainly, but one that should have set off alarm bells about how important signals were being overwhelmed by the noise of charts and dials and “advanced visualizations”.

Swamped with high-precision trivia
I do not need to single out any one system or vendor here: this is a pervasive problem. In our rush to provide “robust analytic solutions”, our industry has massively overengineered its reporting interfaces. Every dashboard I've had a chance to review – and I've seen a lot of them – contains numerous vividly-colored charts crowding one another out, with little to differentiate the significant from the tangential.

It’s Gresham’s Law for clinical trial metrics: Bad data drives out good. Bad data – samples sliced so thin they’ve lost significance, histograms of marginal utility made “interesting” (and nearly unreadable) by 3-D rendering, performance grades that have never been properly validated – is plentiful and much, much easier to obtain than good data.

So what can we do? Here are 4 initial steps to decluttering the dashboard:

1. Abandon “Actionable Analytics”
Everybody today sells their analytics as “actionable” [including, to be fair, even one company’s website that the author himself may be guilty of drafting]. The problem, though, is that any piece of data – no matter how tenuous and insubstantial – can be made actionable: we can always imagine some situation where an action might be influenced by it, so we decide to keep it. As a result, we end up swamped with high-precision trivia (Dr. Smith is enrolling at the 82nd percentile among UK sites!) that does not influence important decisions but competes for our attention. We need to stop reporting data simply because it’s there and we can report it.

2. Identify Key Decisions First
The process described above (which seems pretty standard nowadays) is backwards: we look at the data we have, and ask ourselves whether it’s useful. Instead, we need to follow a more disciplined process of first asking ourselves what decisions we need to make, and when we need to make them. For example:

  • When is the earliest we will consider deactivating a site due to non-enrollment?
  • On what schedule, and for which reasons, will senior management contact individual sites?
  • At what threshold will imbalances in safety data trigger more thorough investigation?

Every trial will have different answers to these questions. Therefore, the data collected and displayed will also need to be different. It is important to invest time and effort to identify critical benchmarks and decision points, specific to the needs of the study at hand, before building out the dashboard.
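To make the idea concrete, here is a minimal Python sketch (with entirely hypothetical metric and decision names) of what a decision-first specification might look like: each metric earns its place on the dashboard only by naming the decision it feeds, and anything that feeds no decision never makes it on.

```python
from dataclasses import dataclass

@dataclass
class Metric:
    """A candidate dashboard metric, tied (or not) to a named decision."""
    name: str
    decision: str        # the decision this metric informs; empty means none
    threshold: float     # the value at which the decision is triggered

def build_dashboard(metrics):
    """Keep only metrics tied to a decision; everything else is clutter."""
    return [m for m in metrics if m.decision]

# Hypothetical examples, loosely following the questions above:
candidates = [
    Metric("days_without_enrollment", "deactivate non-enrolling site", 60),
    Metric("safety_event_imbalance", "trigger thorough investigation", 2.0),
    Metric("site_enrollment_percentile", "", 0),  # no decision -> dropped
]

dashboard = build_dashboard(candidates)
print([m.name for m in dashboard])
# -> ['days_without_enrollment', 'safety_event_imbalance']
```

The point of the sketch is the filter, not the data model: the high-precision-trivia metric is excluded not because it is wrong, but because no one could say what decision it drives.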

3. Recognize and Respect Context
As some of the questions above make clear, many important decisions are time-dependent.  Often, determining when you need to know something is every bit as important as determining what you want to know. Too many dashboards keep data permanently anchored over the course of the entire trial even though it's only useful during a certain window. For example, a chart showing site activation progress compared to benchmarks should no longer be competing for attention on the front of a dashboard after all sites are up and running – it will still be important information for the next trial, but for managing this trial now, it should no longer be something the entire team reviews regularly.

In addition to changing over time, dashboards should be thoughtfully tailored to major audiences.  If the protocol manager, medical monitor, CRAs, data managers, and senior executives are all looking at the same dashboard, then it’s a dead certainty that many users are viewing information that is not critical to their job function. While it isn't always necessary to develop a unique topline view for every user, it is worthwhile to identify the 3 or 4 major user types, and provide them with their own dashboards (so the person responsible for tracking enrollment in Japan is in a position to immediately see an imbalance).
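Both ideas – retirement over time and tailoring by audience – can be sketched together. In this illustrative Python fragment (tile names, roles, and study-day windows are all hypothetical), each tile declares who should see it and when it can still drive a decision, and the front page is simply the intersection:

```python
from dataclasses import dataclass

@dataclass
class Tile:
    name: str
    audiences: set        # which user types see this tile
    active_from: int      # study day the tile becomes relevant
    active_until: int     # study day it retires off the front page

def front_page(tiles, role, study_day):
    """Show a tile only to its audience, and only while it can drive a decision."""
    return [t.name for t in tiles
            if role in t.audiences and t.active_from <= study_day <= t.active_until]

tiles = [
    Tile("site_activation_vs_benchmark", {"protocol_manager"}, 0, 120),
    Tile("enrollment_by_country", {"protocol_manager", "data_manager"}, 30, 720),
    Tile("safety_signal_watch", {"medical_monitor"}, 0, 720),
]

# Once all sites are active (say, study day 200), the activation chart is gone:
print(front_page(tiles, "protocol_manager", 200))
# -> ['enrollment_by_country']
```

With three or four user types and a handful of windows, this stays simple, and the data-manager-in-Japan scenario stops depending on luck: the enrollment tile is on that person's front page by design.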

4. Give your Data Depth
Many people – myself included – are reluctant to part with any data. We want more information about study performance, not less. While this isn't a bad thing to want, it does contribute to the tendency to cram as much as possible into the dashboard.

The solution is not to get rid of useful data, but to bury it. Many reporting systems have the ability to drill down into multiple layers of information: this capability should be thoughtfully (but aggressively!) used to deprioritize all of your useful-but-not-critical data, moving it off the dashboard and into secondary pages.

Bottom Line
The good news is that operational data is becoming easier to aggregate and monitor every day. The bad news is that our current systems are not designed to handle the flood of new information, and instead have become choked with visually-appealing-but-insubstantial chart candy. If we want any hope of getting a decent return on our investment in these systems, we need to take a couple of steps back and determine: what's our operational strategy, and who needs what data, when, in order to successfully execute against it?


[Photo credit: candy store from Flickr user msgolightly.]

Tuesday, September 25, 2012

What We Can Anticipate from TransCelerate


TransCelerate: Pharma's great kumbaya moment?
Last week, 10 of the largest pharmaceutical companies caused quite a hullaballoo in the research world with their announcement that they were anteing up to form a new nonprofit entity “to identify and solve common drug development challenges with the end goals of improving the quality of clinical studies and bringing new medicines to patients faster”. The somewhat-awkwardly-named TransCelerate BioPharma immediately got an enthusiastic reception from industry watchers and participants, mainly due to the perception that it was well poised to attack some of the systemic causes of delays and cost overruns that plague clinical trials today.

I myself was caught up in the breathless excitement of the moment, tweeting enthusiastically right after reading the initial report.

Over the past few days, though, I've had time to re-read and think more about the launch announcement, and to dial down my enthusiasm considerably.  I still think it’s a worthwhile effort, but it’s probably not fair to expect anything that fundamentally changes much in the way of current trial execution.

Mostly, I’m surprised by the specific goals selected, which seem for the most part either tangential to the real issues in modern drug development or stepping into areas where an all-big-pharma committee isn’t the best tool for the job. I’m also very concerned that a consortium like this would launch without a clearly-articulated vision of how it fits in with, and adds to, the ongoing work of other key players – the press release is loaded with positive, but extremely vague, wording about how TransCelerate will work with, but be different from, groups such as the CTTI and CDISC. The new organization also appears to have no formal relationship with any CRO organizations.  Given the crucial and deeply embedded nature of CROs in today’s research, this is not a detail to be worked out later; it is a vital necessity if any worthwhile progress is to be made.

Regarding the group’s goals, here is what their PR had to say:
Five projects have been selected by the group for funding and development, including: development of a shared user interface for investigator site portals, mutual recognition of study site qualification and training, development of risk-based site monitoring approach and standards, development of clinical data standards, and establishment of a comparator drug supply model.
Let’s take these five projects one by one, to try to get a better picture of TransCelerate’s potential impact:

1. Development of a shared user interface for investigator site portals

Depending on how it’s implemented, the impact of this could range from “mildly useful” to “mildly irksome”. Sure, I hear investigators and coordinators complain frequently about all the different accounts they have to keep track of, so having a single front door to multiple sponsor sites would be a relief. However, I don’t think that the problem of too many usernames cracks anyone’s “top 20 things wrong with clinical trial execution” list – it’s a trivial detail. Aggravating, but trivial.

Worse, if you do it wrong and develop a clunky interface, you’ll get a lot more grumbling about making life harder at the research site. And I think there’s a high risk of that, given that this is in effect software development by committee – and the committee is a bunch of companies that do not actually specialize in software development.

In reality, the best answer to this is probably a lot simpler than we imagine: if a neutral, independent body (such as the ACRP) set up a single sign-on (SSO) registry for investigators and coordinators, then all sponsors, CROs, and IVRS/IWRS/CDMS vendors could simply set themselves up as service providers. (This works the same way that many people today log into disparate websites using their existing Google or Facebook accounts.)  TransCelerate might do better sponsoring and promoting an external standard than trying to develop an entirely new platform of its own.
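To illustrate the pattern only – a real registry would use an established standard such as SAML or OpenID Connect, not the bare shared-secret HMAC sketched here – the core idea is one identity provider issuing signed assertions that any portal can verify without maintaining its own accounts:

```python
import base64
import hashlib
import hmac
import json

# Purely illustrative shared secret; real SSO uses asymmetric keys or federation.
REGISTRY_KEY = b"shared-secret-between-registry-and-portals"

def registry_issue_assertion(investigator_id: str) -> str:
    """The neutral registry (the lone identity provider) signs an identity claim."""
    payload = base64.urlsafe_b64encode(json.dumps({"sub": investigator_id}).encode())
    sig = hmac.new(REGISTRY_KEY, payload, hashlib.sha256).hexdigest()
    return payload.decode() + "." + sig

def sponsor_portal_verify(token: str) -> str:
    """Any sponsor/CRO portal (a service provider) verifies the registry's signature."""
    payload, sig = token.rsplit(".", 1)
    expected = hmac.new(REGISTRY_KEY, payload.encode(), hashlib.sha256).hexdigest()
    if not hmac.compare_digest(sig, expected):
        raise ValueError("invalid assertion")
    return json.loads(base64.urlsafe_b64decode(payload))["sub"]

token = registry_issue_assertion("dr_smith_site_042")  # hypothetical investigator ID
print(sponsor_portal_verify(token))
# -> dr_smith_site_042
```

The design point is that the site staffer holds exactly one credential (with the registry), while each sponsor system only needs to trust the registry's signature rather than run its own account database.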

2. Mutual recognition of study site qualification and training

This is an excellent step forward. It’s also squarely in the realm of “ideas so obvious we could have done them 10 years ago”. Forcing site personnel to attend multiple iterations of the same training seminars simply to ensure that you’ve collected enough binders full of completion certificates is a sad CYA exercise with no practical benefit to anyone.

This will hopefully re-establish some goodwill with investigators. However, it’s important to note that it’s pretty much a symbolic act in terms of efficiency and cost savings. Nothing wrong with that – heaven knows we need some relationship wins with our increasingly-disillusioned sites – but let’s not go crazy thinking that this represents a real cause of wasted time or money. In fact, it’s pretty clear that one of the reasons we’ve lived with the current site-unfriendly system for so long is that it didn’t really cost us anything to do so.

(It’s also worth pointing out that more than a few biotechs have already figured out, usually with CRO help, how to ensure that site personnel are properly trained and qualified without subjecting them to additional rounds of training.)

3. Development of risk-based site monitoring approach and standards

The consensus belief and hope is that risk-based monitoring is the future of clinical trials. Ever since FDA’s draft guidance on the topic hit the street last year, it’s been front and center at every industry event. It will, unquestionably, lead to cost savings (although some of those savings will hopefully be reinvested into more extensive centralized monitoring).  It will not necessarily shave a significant amount of time off the trials, since in many trials getting monitors out to sites to do SDV is not a rate-limiting factor, but it should still at the very least result in better data at lower cost, and that’s clearly a good thing.

So, the big question for me is: if we’re all moving in this direction already, do we need a new, pharma-only consortium to develop an “approach” to risk-based monitoring?

First and foremost, this is a senseless conversation to have without the active involvement and leadership of CROs: in many cases, they understand the front-line issues in data verification and management far better than their pharma clients.  The fact that TransCelerate launched without a clear relationship with CROs and database management vendors is a troubling sign that it isn’t poised to make a true contribution to this area.

In a worst-case scenario, TransCelerate may actually delay adoption of risk-based monitoring among its member companies, as they may decide to hold off on implementation until standards have been drafted, circulated, vetted, re-drafted, and (presumably, eventually) approved by all 10 companies. And it will probably turn out that the approaches used will need to vary by patient risk and therapeutic area anyway, making a common, generic approach less than useful.

Finally, the notion that monitoring approaches require some kind of industry-wide “standardization” is extremely debatable. Normally, we work to standardize processes when we run into a lot of practical interoperability issues – that’s why we all have the same electric outlets in our homes, but not necessarily the same AC adaptors for our small devices.  It would be nice if all cell phone manufacturers could agree on a common standard plug, but the total savings from that standard would be small compared to the costs of defining and implementing it.  It’s the same with monitoring: each sponsor and each CRO has a slightly different flavor of monitoring, but the costs of adapting to any one approach for any given trial are really quite small.

Risk-based monitoring is great. If TransCelerate gets some of the credit for its eventual adoption, that’s fine, but I think the adoption is happening anyway, and TransCelerate may not be much help in reality.

4. Development of clinical data standards

This is by far the most baffling inclusion in this list. What happened to CDISC? What is CDISC not doing right that TransCelerate could possibly improve?

In an interview with Matthew Herper at Forbes, TransCelerate’s Interim CEO expands a bit on this point:
“Why do some [companies] record that male is a 0 and female is a 1, and others use 1 and 0, and others use M and F. Where is there any competitive advantage to doing that?” says Neil. “We do 38% of the clinical trials but 70% of the [spending on them]. If we were to come together and try to define some of these standards it would be an enabler for efficiencies for everyone.”
It’s really worth noting that the first part of that quote has nothing to do with the second part. If I could wave a magic wand and instantly standardize all companies’ gender reporting, I would not have reduced clinical trial expenditures by 0.01%. Even if we extend this to lots of other data elements, we’re still not talking about a significant source of costs or time.

Here’s another way of looking at it: those companies that are conducting the other 62% of trials but are only responsible for 30% of the spending – how did they do it, since they certainly haven’t gotten together to agree on a standard format for gender coding?
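Taking Neil's figures at face value, the implied gap is striking, and it is easy to work out:

```python
# Neil's figures: TransCelerate members run 38% of trials but account for 70% of spending.
member_cost_index = 70 / 38   # members' spending share per unit of trial share
others_cost_index = 30 / 62   # everyone else's
ratio = member_cost_index / others_cost_index
print(round(ratio, 1))
# -> 3.8  (members spend roughly 3.8x as much per trial)
```

A nearly fourfold per-trial cost gap is not going to be closed by harmonizing gender codes, which is exactly why the quote's two halves do not connect.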

But the main problem here is that TransCelerate is encroaching on the work of a respected, popular, and useful initiative – CDISC – without clearly explaining how it will complement and assist that initiative. Neil’s quote almost seems to suggest that he plans on supplanting CDISC altogether.  I don’t think that was the intent, but there’s no rational reason to expect TransCelerate to offer substantive improvement in this area, either.

5. Establishment of a comparator drug supply model

This is an area that I don’t have much direct experience in, so it’s difficult to estimate what impact TransCelerate will have. I can say, anecdotally, that over the past 10 years, exactly zero clinical trials I’ve been involved with have had significant issues with comparator drug supply. But, admittedly, that’s quite possibly a very unrepresentative sample of pharmaceutical clinical trials.

I would certainly be curious to hear some opinions about this project. I assume it’s a somewhat larger problem in Europe than in the US, given both their multiple jurisdictions and their stronger aversion to placebo control. I really can’t imagine that inefficiencies in acquiring comparator drugs (most of which are generic, and so not directly produced by TransCelerate’s members) represent a major opportunity to save time and money.

Conclusion

It’s important to note that everything above is based on very limited information at this point. The transcelerate.com website is still “under construction”, so I am only reacting to the press release and accompanying quotes. However, it is difficult to imagine at this point that TransCelerate’s current agenda will have more than an extremely modest impact on current clinical trials.  At best, it appears that it may identify some areas to cut some costs, though this is mostly through the adoption of risk-based monitoring, which should happen whether TransCelerate exists or not.

I’ll remain a fan of TransCelerate, and will follow its progress with great interest in the hopes that it outperforms my expectations. However, it would serve us all well to recognize that TransCelerate probably isn’t going to change things very dramatically – the many systemic problems that add to the time and cost of clinical trials today will still be with us, and we need to continue to work hard to find better paths forward.

[Update 10-Oct-2012: Wayne Kubick, the CTO of CDISC, has posted a response with some additional details around cooperation between TransCelerate and CDISC around point 4 above.]

Mayday! Mayday! Photo credit: "Wheatley Maypole Dance 2008" from Flickr user net_efekt.