Where Overall Survival Falls Short as a Gauge

Maurie Markman, MD
Published: Tuesday, Jan 09, 2018
From the perspective of a patient and that patient’s family, it is completely understandable that the single most important goal of an antineoplastic strategy is to prolong survival and, if possible, produce a cure. Further, it is reasonable to argue that this basic philosophy of the aim of treatment has been quite nicely translated into the regulatory environment of drug development, where overall survival (OS) has traditionally been considered the definitive as well as gold standard outcome and endpoint in cancer therapeutic clinical trials.

An additional advantage of using this endpoint is its ease and objectivity. The date of beginning a particular regimen and the date of death can be defined without time-wasting and senseless debate. In addition, if an audit of these metrics is necessary, it should be easy to perform.

A final highly relevant historical argument in support of OS as the gold standard in cancer trials is based on the limited clinical utility of most available oncology therapeutics when the current drug regulatory paradigms were established. In fact, in the early days of the modern chemotherapeutic era, antineoplastic drug therapy was, with a few notable exceptions, marginally effective and objectively associated with only modest short-term survival benefits. For patients with advanced, metastatic, or recurrent cancers following the failure of local-regional therapy, survival was most often measured in months, if not days, and very rarely in years.

Further, considering the toxicity (eg, severe emesis, neutropenia) of most single cytotoxic agents and combination regimens examined and employed in this era, it would be rational for those conducting the trials (industry, academic centers) and the governmental regulators considering approval of an agent for commercial sale to require that the therapy demonstrate an ability to improve OS in a particular setting (eg, first-line treatment of epithelial ovarian cancer) before accepting it as a standard-of-care therapeutic option. In fact, the concept of clinically effective second-line therapy that might favorably affect subsequent OS following a patient’s completion of a given clinical trial would have been considered pure fantasy in the vast majority of circumstances.

The modern world of cancer therapeutics bears strikingly limited resemblance to the era described above—despite some quite distressing exceptions, such as treatment options for advanced/metastatic pancreatic cancer.

Biologically and clinically active antineoplastic drug regimens employed as a component of initial therapy (adjuvant or neoadjuvant strategies, metastatic disease) or as second-line or later approaches in disease management have favorably affected both quantity and quality of life. In an increasing number of settings, patients with an advanced or metastatic cancer, while still unable to be cured, are objectively able to lead a high-quality life for a number of years rather than months or even less time. As a result, it is not unreasonable to label such long-term conditions as very serious but chronic. But what does this discussion have to do with endpoints in cancer clinical trials and, specifically, in epithelial ovarian cancer? There are 2 quite powerful arguments that can be provided in response to this highly relevant question.

Higher Inputs for Reliable Data

The first argument concerns the impact of subsequent treatments on posttrial outcomes and survival, and how that impact has affected the way clinical trials are conducted in the current oncology arena and will be conducted in the future. Perhaps the most definitive argument against a disturbingly senseless belief that OS must always remain the gold standard in cancer clinical trials comes from a landmark analysis by a leading group of cancer biostatisticians.

They calculated the sample size required for a particular study to reveal a statistically significant improvement in OS.1 In their hypothetical scenario, in which a patient population of 280 individuals would be required to show a median 3-month improvement in progression-free survival (PFS), the total population needed to demonstrate a difference in OS for a median post-progression survival of 2 versus 24 months would be 350 versus 2440 patients, respectively. Stated differently, if patients were to live for a median of 2 years following the completion of a trial, versus a median of only 2 months following completion, 7 times as many patients would have to be enrolled in the study for it to reliably demonstrate an improvement in overall survival.
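The sevenfold figure follows directly from the sample sizes quoted above. The sketch below simply reproduces those published numbers (280 for the PFS endpoint; 350 vs 2440 for the OS endpoint) to show the arithmetic; it is not a reimplementation of the biostatisticians' underlying power calculation.

```python
# Sample sizes quoted in the cited analysis (reference 1).
pfs_trial_n = 280      # patients needed to detect a 3-month median PFS gain
os_n_short_pps = 350   # OS endpoint, median post-progression survival of 2 months
os_n_long_pps = 2440   # OS endpoint, median post-progression survival of 24 months

# Longer post-progression survival dilutes the OS signal, inflating enrollment.
inflation = os_n_long_pps / os_n_short_pps
print(f"OS sample-size inflation: {inflation:.1f}x")  # roughly 7x
```

The longer patients live after progression, the more their ultimate survival reflects subsequent therapies and comorbidities rather than the trial regimen, which is why the required enrollment balloons.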

This stunning difference in mandated sample size is clearly related to the multiple other therapeutics likely to be employed in individual patients over this extended time period, plus death due to other events, including common comorbidities in the patient populations (eg, cardiac disease). And it is critical to acknowledge that survival in the range of 24 months after first-line therapy for ovarian cancer is an increasingly realistic scenario.
