
Technique Assessed for Comparing Effectiveness Between Drugs



31 July 2018. An academic-industry research team validated a statistical technique to evaluate the effectiveness of different drugs treating the same disease, without conducting separate head-to-head clinical trials. Researchers from Harvard University’s School of Public Health, the economics consulting firm Analysis Group, and two pharmaceutical companies presented their findings today in Vancouver, British Columbia, Canada at this year’s Joint Statistical Meetings, or JSM, sponsored by the American Statistical Association and other national statistical societies.

A team led by Harvard biostatistics postdoctoral researcher David Cheng, who presented the JSM paper, is seeking better tools to evaluate the performance of new drugs when other drugs for the same disease are already on the market. In most cases, experimental therapies are tested in clinical trials against a placebo or the standard of care, a widely used and established form of treatment. But this approach does not help assess the value of a new drug when other drugs treating the same disorder are already available. Clinical trials testing one drug against another are sometimes conducted, but these head-to-head comparisons are time-consuming and expensive.

In this situation, notes Cheng in an American Statistical Association statement, medical decision makers rely on summary data published from clinical trials of different drugs. “They’d look, say, at the rates of survival for a cancer drug by a given time in one study and then compare them to another, even though the two studies would not be directly comparable,” says Cheng. “The patients might have more late-stage disease in one study and more early-stage disease in the other, or some other significant difference in patient characteristics, and this wouldn’t be taken into account in the analysis. You’d end up with massive confounding.”

An emerging statistical technique called matching-adjusted indirect comparison, or MAIC, makes better assessments of different drugs possible, but the researchers say it has not yet been widely studied. MAIC takes data for individual patients in the trials and matches them on key characteristics. The technique then weights the individual participants on their propensity to be included in the group receiving the experimental treatment or the control group receiving the placebo or standard of care. These adjustments, says the team, make it possible to compare data between clinical trials at a more granular level.
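The matching-and-weighting step described above can be sketched in code. The sketch below assumes the common entropy-balancing form of MAIC weights, w_i = exp(x_i·a), with the coefficients fitted so the weighted covariate means of one trial's individual patient data match the summary means reported for the comparator trial; the variable names and toy patient data are illustrative, not taken from the paper.

```python
import numpy as np

def maic_weights(ipd_covariates, target_means, n_iter=25):
    """Fit MAIC-style weights w_i = exp(x_i @ a) so that the weighted
    covariate means of the individual patient data (IPD) match the
    target means reported for the comparator trial.  The coefficient
    vector `a` minimizes the convex objective sum_i exp((x_i - t) @ a),
    solved here with Newton's method."""
    x = np.asarray(ipd_covariates, float) - np.asarray(target_means, float)
    a = np.zeros(x.shape[1])
    for _ in range(n_iter):
        w = np.exp(x @ a)
        grad = x.T @ w                    # gradient of the objective
        hess = (x * w[:, None]).T @ x     # Hessian (positive definite)
        a -= np.linalg.solve(hess, grad)  # Newton step
    return np.exp(x @ a)

# Toy example: re-weight 200 patients (mean age 55, 30% late-stage
# disease) to match a comparator trial reporting mean age 60 and
# 50% late-stage disease.
rng = np.random.default_rng(0)
ipd = np.column_stack([rng.normal(55.0, 8.0, 200),
                       rng.binomial(1, 0.3, 200).astype(float)])
weights = maic_weights(ipd, [60.0, 0.5])
matched_means = (weights[:, None] * ipd).sum(axis=0) / weights.sum()
```

After fitting, the weighted means of the individual patient data line up with the comparator trial's reported averages, so summary outcomes from the two trials can be compared on a more equal footing.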

In their paper, Cheng and colleagues applied MAIC to two antiretroviral treatments for HIV-1, the HIV strain affecting most people with the disease. The results, say the researchers, show MAIC is feasible, particularly when supplemented with methods for calculating standard errors and confidence intervals, which they assessed in simulations. These additional computations help estimate the variation when comparing participants across different trials.
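One common way to obtain such standard errors and confidence intervals is a nonparametric bootstrap over the individual patient data, recomputing the weights on each resample. The sketch below illustrates the idea for a single matched covariate; it is a generic bootstrap, not necessarily the specific method the authors assessed, and the helper `tilt_weights`, variable names, and toy data are all hypothetical.

```python
import numpy as np

def tilt_weights(x, target_mean, n_iter=25):
    """Entropy-tilting weights for one covariate so that the weighted
    mean of `x` equals `target_mean` (a minimal one-dimensional
    stand-in for a full MAIC weight solver)."""
    d = np.asarray(x, float) - target_mean
    a = 0.0
    for _ in range(n_iter):          # 1-D Newton iterations
        w = np.exp(a * d)
        a -= (d @ w) / (d**2 @ w)
    return np.exp(a * d)

def bootstrap_se_ci(outcomes, covariate, target_mean,
                    n_boot=500, alpha=0.05, seed=0):
    """Bootstrap standard error and percentile confidence interval
    for the MAIC-weighted mean outcome."""
    rng = np.random.default_rng(seed)
    n = len(outcomes)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)  # resample patients with replacement
        w = tilt_weights(covariate[idx], target_mean)
        stats.append(np.sum(w * outcomes[idx]) / np.sum(w))
    lo, hi = np.quantile(stats, [alpha / 2, 1 - alpha / 2])
    return float(np.std(stats, ddof=1)), (float(lo), float(hi))

# Toy data: an outcome correlated with age, re-weighted so the mean
# age matches a comparator trial's reported mean of 60.
rng = np.random.default_rng(1)
age = rng.normal(55.0, 8.0, 300)
outcome = 0.02 * age + rng.normal(0.0, 0.1, 300)
se, (lo, hi) = bootstrap_se_ci(outcome, age, 60.0)
```

Because the weights are refitted inside every bootstrap replicate, the resulting standard error reflects both sampling variation in the outcomes and the extra uncertainty introduced by the re-weighting itself.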

The authors say their findings can refine MAIC techniques, already in use in some drug reimbursement cases and part of the guidance offered by the National Institute for Health and Care Excellence in the U.K. “This work can help decision-makers understand when MAIC results are reliable and when there are challenges in the data that would produce unreliable results,” adds Cheng. “This could, in turn, enable better decision-making and ultimately inform smarter allocation of resources to drugs that work best.”
