Evidence-Based Medicine: Examining the Gap Between Rhetoric and Reality

Publication: OncologyLive®, September 2013, Volume 14, Issue 9

In many circumstances, the absence of so-called definitive evidence does not equate with the absence of evidence, just the level required by some to declare an approach to be an acceptable standard-of-care option.

Maurie Markman, MD

Editor-in-Chief of OncologyLive

Senior Vice President for Clinical Affairs and National Director for Medical Oncology, Cancer Treatment Centers of America, Eastern Regional Medical Center

Today, there’s a mantra in oncology practice and other specialties that has become as exalted and unassailable as motherhood and apple pie: evidence-based medicine. We all want it, and why not?

Since the term began gaining popularity in the 1990s, evidence-based medicine generally has been defined as the “conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients.”1 Yet, even as that concept increasingly becomes a yardstick by which clinical strategies are evaluated and reimbursed, it has become evident that the rhetoric and the reality are hardly identical.

As noted by one of this commentator’s colleagues: “Evidence-based medicine is for a patient in another’s family. For someone in my family, I would want treatment based on what is best for my family member’s individual needs.”

The public is not far behind in sharing this sentiment. An Internet-based survey conducted several years ago found that 83% of the respondents “were convinced that forcing doctors to follow guidelines would prevent them from tailoring care to individual patients and that no outside group should come between doctors and patients.”2

Of course, the issue is far more complicated than suggested by either the label “evidence-based medicine” or the counter-argument that “guidelines” may actually impair the delivery of optimal medical care. However, that is precisely the point: medical management, especially in an illness as complex as cancer, resists simple formulas and needs to be recognized as such.

How Statistics Don’t Always Add Up

Two highly respected commentators succinctly and quite eloquently frame the essential discussion and clearly highlight the profound dilemma. Speaking about his own diagnosis of supposedly fatal malignant mesothelioma, which he outlived by nearly 20 years, the late renowned paleontologist Stephen Jay Gould noted:

“But all evolutionary biologists know that variation itself is nature’s only irreducible essence. Variation is the hard reality, not a set of imperfect measures for a central tendency. Means and medians are the abstractions. Therefore, I looked at the mesothelioma statistics quite differently, and not only because I am an optimist who tends to see the doughnut instead of the hole, but primarily because I know that variation itself is the reality. I had to place myself amidst the variation.”3

Equally thought-provoking on the topic of rigidly defining disease management based on the data generated from evidence-based (randomized) clinical trials are the words of the highly respected clinician Sherwin B. Nuland, MD, who writes the following in his book, The Uncertain Art: Thoughts on a Life in Medicine:

“Though the advent of correctives such as the randomized controlled clinical trial and the newly popular notion of evidence-based medicine may have lessened the uncertainty inherent in general principles of therapy, they are unlikely ... to usefully affect the care of an individual man, woman, or child to the degree claimed or predicted by their most adamant advocates. Given the spectrum of illness presentation, we will be left with what we have always had and always will have—the acceptance of the long-established principle that the practice of medicine is characterized by uncertainty and will always require judgment in order to be effective. That is its very nature.”4

Evidence Is Often Limited

Beyond the philosophical musings of these highly rational thinkers, we have hard facts. A recent report examining original articles published in The New England Journal of Medicine over a 10-year period (2001-2010) found that fully 40.2% (n = 146) of 363 articles that tested the utility of a standard of care in medical practice were unable to confirm its benefit, compared with only 38% that “reaffirmed” the utility of the strategy (the remaining studies were inconclusive).5 So, just how useful is evidence if 40% of the time a subsequent study will refute what today is considered standard? And, how can we ever know which 40% will be wrong—and which 38% are actually correct—when we are considering the use of such evidence to recommend therapy to an individual patient today?

If we turn to the next level of the use of evidence—guidelines that seek to translate such evidence into clinical practice—we observe even more disquieting results regarding the universal objective validity of such approaches, which seek to define the best possible care. For example, an analysis of more than 1000 recommendations across 10 guidelines produced by the National Comprehensive Cancer Network found that only 6% were supported by the highest level of evidence (category 1),6 defined as randomized phase III trials or meta-analyses based on multiple such trials (Figures 1, 2). And this reliance of guidelines on the “opinion of experts” rather than the so-called “highest levels of evidence” is certainly not unique to the oncology arena.7,8

Figure 1. Categories of Evidence Supporting NCCN Guidelines

The figure illustrates the distribution of categories of evidence and consensus, with category 1 as the highest level of evidence, cited in NCCN guidelines, according to a 2011 study.

NCCN indicates National Comprehensive Cancer Network. Source: Poonacha TK, Go RS. J Clin Oncol. 2011;29(2):186-191.

Finally, focusing specifically on cancer clinical trials, if we go even deeper into the domain of the “highest level of evidence,” we discover perhaps the most concerning feature of such evidence in its use in routine clinical practice. By their very nature, clinical trials are research exercises that seek to document the benefits/toxicity of a very specific intervention by isolating that factor from any other variables within a given population. Thus, studies will restrict entry by performance status, age, sex, the presence of comorbidities (eg, clinically relevant heart disease, obesity, diabetes, infection), current medications, and past treatments for the illness.

Rigidity May Harm Patients

As a result, while a given study may answer a quite important question—such as “Does Drug A improve survival compared with Drug B in patients with metastatic Cancer C when delivered as primary chemotherapy?”—the actual relevance of the answer for a large segment of individuals, whether in this specific setting or a variation of it, may be simply unknown.

For example, it is very well recognized that the elderly, the population in which cancers are most commonly observed and a growing percentage of the population in the United States, are strikingly underrepresented in cancer clinical trials.9 Thus, if Drug A has been shown in a randomized phase III trial to improve survival compared with Drug B in metastatic Cancer C when delivered in the frontline setting, and has subsequently achieved regulatory (and third-party payer) approval for use in this setting, how should that evidence be employed when an oncologist is seeing an otherwise healthy 76-year-old patient with this condition and only a small percentage of patients in the evidence-based trial leading to the approval were older than 75 years?9

Figure 2. Levels of Guideline Evidence, by Tumor Type

This figure reports the distribution of evidence categories used to support guidelines for 10 tumor types examined in a 2011 study.

Source: Poonacha TK, Go RS. J Clin Oncol. 2011;29(2):186-191.

Similarly, in the same cancer setting, if the patient in question has a history of an uncomplicated myocardial infarction four months prior to his cancer diagnosis, should Drug A be employed when individuals with such a history were specifically excluded from the evidence-based clinical trial leading to regulatory agency approval? Or perhaps, as Dr Nuland notes, this is an example of where “the practice of medicine is characterized by uncertainty and will always require judgment in order to be effective.”4

Further, what if the patient being seen with metastatic Cancer C had progressed following primary treatment in which Drug A was not utilized, and he/she has no contraindications to receiving the new agent recently shown in the “evidence-based” trial to improve survival when delivered frontline? (In fact, in this scenario the particular individual in question might even have received the now demonstrably less effective Drug B as primary chemotherapy.)

Should such a patient be denied the opportunity to experience the potentially meaningful clinical benefits documented in a similar but not identical setting simply because the therapy has not yet been studied in a phase III trial in this different but surely closely related situation? Here, and in multiple other circumstances, it is critical to acknowledge that the absence of so-called “definitive evidence” does not equate with the absence of evidence, just the level required by some to declare an approach to be an acceptable standard-of-care option.

The magnitude of the negative impact associated with such a rigid standard defining evidence-based medicine is reinforced when one remembers the high percentage of National Cancer Institute cooperative-group clinical trials that are initiated and never completed,10 or the time it may take from the generation of a new concept (eg, Drug A compared with Drug B in the second-line management of Cancer C) until its completion and subsequent reporting to the oncology community.11 And it is highly likely that the cancer patient being seen today does not have the time to wait for this information to become available in the peer-reviewed literature, assuming such an event would ever occur.

In the opinion of this commentator, both a thoughtful and critical re-examination of the mantra of evidence-based medicine as it relates to optimal and personalized cancer management is required. Finally, until we are able to establish a more genuinely clinically meaningful definition that encompasses a far greater spectrum of the cancer experience, it is hoped that we can at least agree to acknowledge that so-called “evidence-based medicine” based on data from randomized phase III trials should help inform oncologic decision making at the individual patient level, but surely not define it.

Maurie Markman, MD, editor-in-chief of OncologyLive, is senior vice president for Clinical Affairs and national director for Medical Oncology at Cancer Treatment Centers of America. maurie.markman@ctca-hope.com

References

  1. Sackett DL, Rosenberg WMC, Gray JAM, et al. Evidence based medicine: what it is and what it isn’t [editorial]. BMJ. 1996;312(7023):71-72.
  2. Gerber AS, Patashnik EM, Doherty D, et al. A national survey reveals public skepticism about research-based treatment guidelines. Health Aff. 2010;29(10):1882-1884.
  3. Gould SJ. The median isn’t the message. Discover. 1985. Reprinted at: Phoenix5 website. http://www.phoenix5.org/articles/GouldMessage.html. Accessed August 19, 2013.
  4. Nuland SB. Prooemium, an introduction to my book. In: The Uncertain Art: Thoughts on a Life in Medicine. 1st ed. New York, NY: Random House; 2008.
  5. Prasad V, Vandross A, Toomey C, et al. A decade of reversal: an analysis of 146 contradicted medical practices [published online ahead of print July 18, 2013]. Mayo Clin Proc. 2013;88(8):790-798.
  6. Poonacha TK, Go RS. Level of scientific evidence underlying recommendations arising from the National Comprehensive Cancer Network Clinical Practice Guidelines [published online ahead of print December 13, 2010]. J Clin Oncol. 2011;29(2):186-191.
  7. Tricoci P, Allen JM, Kramer JM, et al. Scientific evidence underlying the ACC/AHA clinical practice guidelines. JAMA. 2009;301(8):831-841.
  8. Lee DH, Vielemeyer O. Analysis of overall level of evidence behind Infectious Diseases Society of America practice guidelines. Arch Intern Med. 2011;171(1):18-22.
  9. Scher KS, Hurria A. Under-representation of older adults in cancer registration trials: known problem, little progress. J Clin Oncol. 2012;30(17):2036-2038.
  10. Cheng SK, Dietrich MS, Dilts DM. A sense of urgency: evaluating the link between clinical trial development time and the accrual performance of Cancer Therapy Evaluation Program (NCI-CTEP)-sponsored studies [published online ahead of print November 10, 2010]. Clin Cancer Res. 2010;16(22):5557-5563.
  11. Nguyen T-A-H, Dechartres A, Belgherbi S, et al. Public availability of results of trials assessing cancer drugs in the United States. J Clin Oncol. 2013;31(24):2998-3003.
