Re-Analyzed Clinical Trial Data Often Draw New Conclusions

John Ioannidis (Norbert von der Groeben, Stanford University)

10 September 2014. Researchers at Stanford University in California, with colleagues in Canada, analyzed decades of clinical trials and found that a large proportion of the small number of studies whose data were re-analyzed reached conclusions different from those of the original authors. The team, led by Stanford medical professor John Ioannidis, published its findings in today’s issue of the Journal of the American Medical Association (paid subscription required).

Ioannidis — with colleagues from Stanford, University of Toronto, University of Ottawa, and McMaster University in Hamilton, Ontario — studied the extent to which clinical trial data were re-analyzed, as an indicator of openness in sharing these findings and encouraging trust in the results. Ioannidis cites in a university statement the continuing intense debate over the value of oseltamivir, a flu medication marketed as Tamiflu, as an example of the problems that a lack of trial data openness can cause.

The researchers screened thousands of articles reporting on clinical trials from more than 3 decades in NIH’s Medline database, finding that only a minute fraction of published clinical trials were independently re-analyzed. From the original sample of more than 3,000 reports, only 37 published a further analysis of previous clinical trials. And of that already small number, only 5 studies were conducted by analysts who were all different from the original authors.

A review of the re-analyzed data shows a sizable percentage of the studies reported conclusions that differed from those of the original studies. Of the 37 reports, 13 (35%) suggested that different populations — including larger or smaller numbers of patients — could benefit from the treatments being tested, or recommended a different type of intervention.

Some of the re-analyzed studies used different methods for analyzing data, which resulted in varying conclusions. Other re-analyses, however, identified errors, such as the inclusion of patients who should not have been part of the trial.

Ioannidis says drawing different conclusions does not always mean the original results were biased or falsified. He notes that “making the raw data of trials available for re-analyses is essential not only for re-evaluating whether the original claims were correct, but also for using these data to perform additional analyses of interest and combined analyses.” These additional analyses, he adds, can identify new clinical questions or even reduce the need for further trials.

*     *     *