Maurie Markman, MD
In the ongoing debate regarding the role of randomized trials in defining the standard of care in cancer management, adherents of this so-called gold standard acknowledge the problems associated with conclusions drawn from prospective nonrandomized studies or retrospective analyses of patients managed with different approaches. The fundamental concern is the potential for “bias” or “clinical judgment” associated with therapeutic selection in the absence of randomization to meaningfully impact observed outcomes independent of the specific question being addressed.
The counterargument highlights that the basic requirements of the randomization process itself seek to optimize the homogeneity of the groups being studied for the purpose of isolating the utility of the strategy undergoing examination. As a result, this process likely excludes a substantial percentage of real-world patients (eg, elderly individuals with common comorbid conditions), thus conferring uncertain clinical value on the study results when they are ultimately reported in the peer-reviewed literature. For example, if a new antineoplastic has not been examined in individuals with mild to moderate renal dysfunction, how does a treating oncologist know the relative efficacy versus the toxicity of the approach in such patients?
Unfortunately, this is frequently where the debate over the shortcomings of the randomized trial ends. Supporters reluctantly admit such studies cannot be conducted in certain circumstances, but then focus on the dangers of employing alternative assessments. Critics emphasize the multiple problems with this paradigm, including the time and cost involved, the frequent inability to complete trials, and questions about the ultimate clinical relevance of the results.
However, perhaps there is another lens through which to view the question of randomized studies in oncology. For certain questions, such studies are essential for a clinically meaningful endpoint, despite the time, effort, and cost they take to complete. In such situations, any other research strategy, such as a prospective observational study or a retrospective chart review, would be fundamentally unreliable, and any results would be nothing more than hypothesis generating.

A good example of a question that requires a carefully conducted, placebo-controlled randomized effort with expensive long-term follow-up to reach a scientifically valid endpoint is a recent report examining the utility of providing vitamin D and calcium supplementation in older women to reduce their subsequent risk of cancer.1
This multiyear, double-blind, placebo-controlled randomized trial failed to reveal evidence for a favorable cancer prevention outcome. It is difficult to envision how a meaningful result could have been generated to answer the question posed in the absence of such a study design, recognizing the heterogeneity of the population and the realistic potential that the “control” group would simply elect to take over-the-counter vitamin D and calcium on their own if a “placebo” were not utilized.
However, there were substantial costs associated with this effort, and the study results were reported in the peer-reviewed literature 8 years after the trial was initiated in June 2009.1 An example of a question where randomization may not have been mandatory, but where the ability to successfully conduct such an effort has generated clinically relevant data, is provided by the results of a study examining the impact of laparoscopic hysterectomy versus abdominal hysterectomy on a survival outcome in women presenting with stage I endometrial cancer.2
In this setting, a large population-based study comparing the time to disease recurrence or overall survival of women undergoing these 2 approaches could have provided highly meaningful information; however, the decision-making process of individual clinicians (ie, selection bias) might have influenced the observed outcome independent of the selected procedure. The published data involving 760 patients revealed no difference in either disease-free or overall survival between the 2 surgical approaches.2
Finally, an example of where it is appropriate to question the time, effort, and cost associated with the initiation and completion of a randomized trial before there is acceptance of a standard approach is provided by the results of a multicenter retrospective analysis of the management of brain metastases in patients with EGFR mutation-positive non-small cell lung cancer previously untreated with an EGFR tyrosine kinase inhibitor.3 The important question being addressed was the optimal approach to disease control and, specifically, whether it is safe to delay radiation therapy by first delivering the systemic antineoplastic drug. Again, recognizing the limitations of all retrospective analyses, the investigators found a statistically significant (P = .001) and clinically meaningful decrease in survival associated with a delay in the delivery of local therapy.3
Although a phase III randomized trial involving hundreds of patients at multiple centers and conducted over a period of years could perhaps provide even more definitive results, are such results really necessary, and should clinicians be required to wait that long before using the existing evidence to directly inform their own decision making? Finally, given the publication of these results in a high-impact, peer-reviewed oncology journal and the ethical requirement to fully inform potential research subjects about data that might be relevant to their participation in a clinical trial, recruiting patients into a study testing the utility of a strategy associated with inferior outcomes may be more than a little problematic.