Maurie Markman, MD
Editor-in-Chief of OncologyLive
Senior Vice President for Clinical Affairs and National Director for Medical Oncology
Cancer Treatment Centers of America, Eastern Regional Medical Center
All oncologists are familiar with the established role of biological markers within an individual's cancer in defining an optimal treatment strategy for that patient. Long before the search for somatic mutations in cancer cells, the treating physician would wait for test results revealing the presence or absence of estrogen and progesterone receptors in a breast tumor to determine whether hormonal treatment or chemotherapy should be employed. Although its utility was demonstrated more recently, oncologists now also routinely test for HER2 overexpression in their patients with breast cancer to predict the value of anti-HER2 monoclonal antibody therapy.
During the past decade, there has been a substantial increase in the number of tumor types and settings in which biological/molecular tumor markers have become established as the standard of care in disease management, in that they predict the potential efficacy (eg, EGFR mutation in non-small cell lung cancer; BRAF mutation in metastatic melanoma) or lack of efficacy (eg, KRAS mutation in colon cancer) of a particular drug or class of antineoplastics.
Yet as the options for using these markers grow, physicians may increasingly find themselves facing dilemmas about the specific criteria to apply when recommending for or against treatment on the basis of statistical models.
First, it is important to acknowledge that there are two related, but actually quite separate, justifications for the aggressive search for such biomarkers.
The first justification is clinical: the discovery of relevant molecular abnormalities is anticipated to substantially improve the benefit-to-risk calculation associated with antineoplastic therapy. The ability to target tumors more effectively would increase the probability of a favorable effect, while sparing patients unnecessary toxicity when a low response rate is predicted. For markers that simply define a very limited opportunity for benefit, only the avoidance of toxicity is achieved, but this is certainly a worthy goal if a low response rate is, in fact, accurately predicted.
The second major justification is the recognized cost of the new antineoplastics, for which a per-cycle price tag of $5000 to $10,000 is increasingly common. Avoiding the use of a particular drug anticipated to have a low response rate would certainly reduce the costs borne by the individual patient, family, insurance company, government, and society overall.
However, while the need for biomarkers on both of these grounds is not a matter of debate, there is a related, less well-defined, but absolutely critical issue that is rarely mentioned in discussions of using these diagnostic tools to help define appropriate care: What are the specific criteria for determining whether it is appropriate to use a given biomarker to make a “yes” or “no” treatment decision?
Although the academic exercise of defining the relative percentage of patients who do or do not respond to a given treatment, or who progress less than 6 months or more than 12 months after chemotherapy, makes for both an interesting analysis and a well-received peer-reviewed journal article, the question of paying for a drug based on a “cut-off” value is a real-life situation that a patient and family may perceive, correctly or incorrectly, as a matter of life or death.
Thus, there will likely be widespread acceptance among interested parties of both the clinical utility of a molecular marker and its subsequent routine use in payment decisions if, for example, a “positive” test reliably indicates a less than 1% opportunity for a patient to achieve any evidence of benefit (eg, shrinkage of tumor mass, prolongation of time to disease progression, improvement in overall survival).
However, what if the test is shown to unequivocally separate patients in a highly statistically significant manner (eg, P <.001 or P <.0001) into two groups: one with a 70% to 80% probability of achieving a clinically relevant response, and a second with only a 20%, or even only a 5% to 10%, opportunity of benefit?