Clinicians Must Evaluate Evidence Needed in Real-World Practice

Maurie Markman, MD
Published: Monday, Apr 17, 2017

The concept of “level of evidence” is a critical component of the initial development and subsequent establishment of a standard of care in cancer management. Traditionally, early studies that focus on the safety and design of an optimal delivery strategy in a clinical setting are followed by trials that attempt to objectively measure the efficacy of an approach, which may finally lead to a direct comparison with the existing acceptable standard of care in a randomized phase III study.

Notably, the paradigm attached to such research relates far more to antineoplastic drug development than to other novel approaches in cancer management, such as a new surgical technique. This state of affairs likely results from the prominent role that regulatory agencies such as the FDA play in the introduction of new pharmaceutical agents or combination drug regimens, in contrast to the less formalized innovation strategies in other areas of cancer care. Of course, third-party payers, such as government programs and insurance companies, are increasingly demanding specific levels of evidence even in the absence of regulatory agency mandates, given the rapid escalation in the costs of antineoplastic drugs.

However, there is another level of evidence to consider in cancer care that has nothing to do with the demands of a drug regulatory agency, the statistical requirements of peer-reviewed journals, or the fundamental role of randomization itself. This level of evidence is one that individual clinicians might require before they would suggest, recommend, or support the use of a particular approach in treating patients outside the realm of the mandates of governmental agencies or payers.

Consider, for example, a recent report suggesting the utility of scalp-cooling techniques to prevent chemotherapy-associated alopecia.1 The considerable negative impact of short-term complete or near-complete hair loss, particularly for women, has been well described, and the development of strategies to successfully manage this adverse event is surely worthy of considerable effort by clinicians and investigators.

In this study, 142 patients with stage I or II breast cancer were randomized to either undergo or not undergo scalp cooling with a proprietary device prior to receiving chemotherapy.1 A variety of cytotoxic regimens were included, with 36% of patients receiving an anthracycline and 64% receiving a taxane as a component of their care.

Approximately 50% of patients using the device experienced “successful hair preservation” compared with the entirely expected “no patients” in the control arm. No serious adverse events believed to result from use of the device were recorded, and the difference in the primary study outcome versus the control group was statistically significant (P = .0061). In fact, the study was discontinued early for “superiority” reasons. The authors appropriately concluded that “further research is needed to assess longer-term efficacy and adverse effects,” including overall survival, time to recurrence, and isolated scalp metastases.
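As a rough arithmetic check on the reported significance, one can run a two-proportion z-test on counts approximated from the article’s figures (assumed here for illustration: roughly 40 of 79 cooled patients with hair preservation versus 0 of 63 controls; the trial itself used a prespecified interim superiority analysis, so this is only a sketch, not the study’s actual statistical method):

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """Two-sided two-proportion z-test using the pooled standard error."""
    p_a = success_a / n_a
    p_b = success_b / n_b
    p_pool = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (p_a - p_b) / se
    p_value = math.erfc(abs(z) / math.sqrt(2))  # two-sided normal tail probability
    return z, p_value

# Hypothetical counts reconstructed from the reported percentages:
# ~50% of an assumed 79 cooled patients preserved hair; 0 of 63 controls did.
z, p = two_proportion_z(40, 79, 0, 63)
print(f"z = {z:.2f}, p = {p:.2e}")
```

With a zero-event control arm the normal approximation is crude (an exact test would be preferable in practice), but even this rough check shows a between-arm difference far beyond conventional significance thresholds, consistent with the trial’s early stopping for superiority.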

What level of evidence should be required to confirm the absence of a serious negative impact on survival associated with use of scalp cooling in this population? First, let us consider these factors: (a) the clearly measurable but less than overwhelming impact of the device on hair loss (50% of patients with “successful hair preservation” through 4 cycles of chemotherapy); (b) the existence of well-established and generally accepted—although far from ideal—strategies to effectively manage the short-term quality-of-life issues associated with this most unpleasant treatment adverse event; and (c) the theoretical concern that this procedure might interfere with an antineoplastic drug’s ability to produce a definitive cytotoxic effect on malignant cells previously seeding the scalp or circulating in the blood in this region during the cooling process.

In light of these questions, what data should be required for an oncologist to consider using this strategy in routine clinical practice, particularly where cure is the goal of therapy rather than palliation? Can this study provide a definitive answer to the question of recurrence risk?

The study comprised 142 patients, with the baseline risk defined by only 63 women in the study’s control arm, and included patients with both stage I (40%) and stage II (60%) disease who received 1 of multiple chemotherapy regimens (8 were listed in the study results). There was also an apparent imbalance in “fully active” ECOG performance status favoring the patients managed with the cooling strategy (94%) versus the noncooling strategy (84%).
