Evidence-Based Medicine: Examining the Gap Between Rhetoric and Reality
Published Online: Thursday, October 17, 2013
Maurie Markman, MD
Editor-in-Chief of OncologyLive
Senior vice president for Clinical Affairs and National Director for Medical Oncology
Cancer Treatment Centers of America, Eastern Regional Medical Center
Today, there’s a mantra in oncology practice and other specialties that has become as exalted and unassailable as motherhood and apple pie. That mantra is: evidence-based medicine. We all want it, and why not?
Since the term began gaining popularity in the 1990s, evidence-based medicine generally has been defined as the “conscientious, explicit, and judicious use of current best evidence in making decisions about the care of individual patients.”1 Yet, even as that concept increasingly becomes a yardstick by which clinical strategies are evaluated and reimbursed, it has become evident that the rhetoric and the reality are hardly identical.
As noted by one of this commentator’s colleagues: “Evidence-based medicine is for a patient in another’s family. For someone in my family, I would want treatment based on what is best for my family member’s individual needs.”
The public is not far behind in sharing this sentiment. An Internet-based survey conducted several years ago found that 83% of the respondents “were convinced that forcing doctors to follow guidelines would prevent them from tailoring care to individual patients and that no outside group should come between doctors and patients.”2
Of course, the issue is far more complicated than suggested either by the label “evidence-based medicine” or by the counter-argument that “guidelines” may actually impair the delivery of optimal medical care. But that is precisely the point: Medical management, especially in illnesses as complex as cancer, defies simple formulas and needs to be recognized as such.
How Statistics Don’t Always Add Up
Two highly respected commentators succinctly and quite eloquently frame the essential discussion and clearly highlight the profound dilemma. Speaking about his own diagnosis of fatal malignant mesothelioma, which he outlived for nearly 20 years, the late renowned paleontologist Stephen Jay Gould noted:
“But all evolutionary biologists know that variation itself is nature’s only irreducible essence. Variation is the hard reality, not a set of imperfect measures for a central tendency. Means and medians are the abstractions. Therefore, I looked at the mesothelioma statistics quite differently, and not only because I am an optimist who tends to see the doughnut instead of the hole, but primarily because I know that variation itself is the reality. I had to place myself amidst the variation.”3
Equally thought provoking on the topic of rigidly defining disease management based on the data generated from evidence-based (randomized) clinical trials are the words of the highly respected clinician Sherwin B. Nuland, MD, who writes the following in his book, The Uncertain Art: Thoughts on a Life in Medicine:
“Though the advent of correctives such as the randomized controlled clinical trial and the newly popular notion of evidence-based medicine may have lessened the uncertainty inherent in general principles of therapy, they are unlikely ... to usefully affect the care of an individual man, woman, or child to the degree claimed or predicted by their most adamant advocates. Given the spectrum of illness presentation, we will be left with what we have always had and always will have—the acceptance of the long-established principle that the practice of medicine is characterized by uncertainty and will always require judgment in order to be effective. That is its very nature.”4
Evidence Is Often Limited
Beyond the philosophical musings of these highly rational thinkers, we have hard facts. A recent report examining original articles published in The New England Journal of Medicine over a 10-year period (2001-2010) found that fully 40.2% (n = 146) of 363 articles that tested the utility of a standard of care in medical practice were unable to confirm its benefit, compared with only 38% that “reaffirmed” the utility of the strategy.5 So, just how useful is evidence if 40% of the time a subsequent study will refute what today is considered standard? And, how can we ever know which 40% will be wrong—and which 38% are actually correct—when we are considering the use of such evidence to recommend therapy to an individual patient today?