Maurie Markman, MD
The decades-long argument over whether zinc lozenges can shorten the duration of the common cold is far removed from the arena of cancer medicine, yet the studies conducted to settle the debate help illustrate the shortcomings of the clinical trial model that has dominated the oncology sphere. The impact of zinc lozenges on people with colds has been highly controversial since the 1980s, with results from randomized studies both supporting and refuting the benefits of this rather simple, relatively inexpensive, and essentially nontoxic strategy. A recent meta-analysis examined individual patient data from 3 randomized, placebo-controlled trials that evaluated the effects of 80 to 92 mg/day of elemental zinc and concluded that use of the active agent resulted in a faster recovery.1
Seventy percent of the zinc-treated patients had recovered by day 5 compared with 27% of individuals who received placebo.1
And no serious adverse effects were observed.
But this is not quite the end of the story… The investigators also noted that despite the “strong evidence that properly composed zinc lozenges can increase the rate of recovery from the common cold, the majority of zinc lozenges on the market appear to [either] have too low doses of zinc or they contain substances that bind zinc ions, such as citric acid.”1
So after years of research and numerous clinical trials, one can conclude that although zinc lozenges may help shorten the duration of the common cold, the optimal dose and schedule remain unknown.
How does this happen? Why has it been so difficult to conduct definitive trials of such a simple strategy and define an optimal dose and schedule of zinc that can quickly be translated into clinical use? Is it possible that there is an issue with clinical trials themselves?
In the oncology sphere, the experience with the zinc lozenge studies is hardly unique. Neither is the routine acceptance of study conclusions as objectively meaningful so long as the so-called gold standard of statistical significance has been attained. Consider, for example, the recent report of a phase III randomized trial that examined the clinical utility of adding bevacizumab to carboplatin-paclitaxel chemotherapy in women with recurrent, potentially platinum-sensitive ovarian cancer.2
The study also posed a second question about the role of secondary surgical cytoreduction before the delivery of systemic therapy (a component of the trial that remains ongoing), which only compounded the complexity of interpreting the results.
Remarkably, the primary study endpoint was overall survival (OS), which may have been justified for the surgical question but was a highly questionable choice for a second-line ovarian cancer chemotherapy study in which previously reported trials—including those with the addition of bevacizumab—had failed to reveal an improvement in OS despite a statistically significant improvement in progression-free survival (PFS).3
And in fact, the study in question failed to achieve its prospectively defined intention-to-treat statistical endpoint of an improvement in OS with bevacizumab versus chemotherapy (median OS, 42.2 vs 37.3 months, respectively; HR, 0.829), the P value falling short of statistical significance.2
But wait. That is not the end of the story. By identifying “incorrect treatment interval stratification data” (affecting 7% of patients), a “sensitivity analysis of overall survival based on the audited treatment-free interval stratification gave an adjusted HR of 0.823,” with a P value that now fell below the threshold for statistical significance.2
And with this, the magical threshold revealing improved OS had been achieved! As a result, the peer-reviewed manuscript could now conclude that this strategy “improved the median overall survival in patients with platinum-sensitive recurrent ovarian cancer.”2
This statistical manipulation could convert a “negative trial” into one for which the authors could declare that “our sensitivity analysis based on corrected treatment-free interval stratification indicates that this strategy might be an important addition to the therapeutic armamentarium in these patients.”
What? Would that conclusion not have been appropriate if only the PFS had been improved with the addition of bevacizumab, as clearly demonstrated in this study (HR, 0.628; P <.0001) and, as previously noted, as had been the case in another very similar clinical trial?3
What is the logic behind this positive conclusion, other than the questionable decision to make OS the primary endpoint of this trial?