(To make this even more execrable, Pharmalot actually calls this "Deaths attributed to clinical trials" in his opening sentence, although the actual data has exactly nothing to do with the attribution of the death.)
In fairness, Pharmalot is really only sharing the honors with a group of sensationalistic journalists in India who have jumped on these numbers. But Pharmalot has a much wider readership within the research community, and could at least have attempted to critically assess the data before repeating it (along with criticism from "experts").
The number of things wrong with this metric is a bit overwhelming. I’m not even sure where to start. Some of the obvious issues here:
1. No separation of trial-related versus non-trial-related. Some effort is made to explain that there may be difficulty in determining whether a particular death was related to the study drug or not. However, that obscures the fact that the PDC lumps together all deaths, whether or not the patient ever received an experimental medication. That means the PDC includes:
- Patients in control arms receiving standard of care and/or placebo, who died during the course of their trial.
- Patients whose deaths were entirely unrelated to their illness (e.g., automobile accident victims).
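To see how much this lumping-together inflates the number, here is a toy sketch (invented records, not real trial data) contrasting a PDC-style raw count of all deaths with a count restricted to deaths that could even plausibly be related to the study drug:

```python
# Hypothetical death records from a single trial (invented for illustration).
deaths = [
    {"arm": "experimental", "cause": "disease progression"},
    {"arm": "control",      "cause": "disease progression"},
    {"arm": "control",      "cause": "disease progression"},
    {"arm": "experimental", "cause": "automobile accident"},
    {"arm": "experimental", "cause": "suspected drug toxicity"},
]

# PDC-style metric: count every death during the trial,
# regardless of arm or cause.
raw_count = len(deaths)

# A minimally meaningful alternative: only deaths in the experimental
# arm that are plausibly attributable to the study drug.
possibly_drug_related = [
    d for d in deaths
    if d["arm"] == "experimental" and d["cause"] == "suspected drug toxicity"
]

print(raw_count)                   # 5
print(len(possibly_drug_related))  # 1
```

Even in this tiny made-up example, the indiscriminate count is five times the plausibly drug-related one; control-arm deaths and accidents do all the inflating.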
2. No sensitivity to trial design. Many late-stage cancer clinical trials use Overall Survival (OS) as their primary endpoint – patients are literally in the trial until they die. This isn’t considered unethical; it’s considered the gold standard of evidence in oncology. If we ran shorter, less thorough trials, we could greatly reduce the PDC – would that be good for anyone?
| Case Study: Zelboraf | |
| --- | --- |
| FDA | "Highly effective, more personalized therapy" |
| PDC | "199 deaths attributed to Zelboraf trial!" |
So, for publicizing a metric that has zero utility, and using it to cast aspersions on the ethics of researchers, we congratulate Pharmalot and the PDC.