Maurie Markman, MD
The model of an independent and self-regulating academic medical community continues to be challenged through widely reported accusations of serious scientific misconduct,1 failure of investigators to report potential financial conflicts of interest,2 and suggestions of inadequate transparency related to the questionable role of academic researchers in the analysis and reporting of industry-sponsored clinical trial results.3
But another, even more complex issue has received far less discussion and debate within the walls of academia, and it raises concerns about the ability of this community to adequately monitor and regulate its own individual members and institutions.
A recent commentary discussing the painful and now very public internal conflict within Cochrane,4 arguably the premier medically oriented scientific research consortium in the world, has shined a spotlight on the issue of “intellectual conflict of interest.”5 The term refers to a concern that although scientific investigation should proceed from the development of a hypothesis that requires objectively impartial and rigorous analysis, this process can be derailed by an unacceptable degree of bias (fundamental “beliefs”) that an individual investigator or research group may bring to the subject matter being evaluated.
The implications of an intellectual conflict of interest can be profound. These might include the scientifically unjustified rejection or acceptance of a grant application or manuscript that the individual in question is responsible for reviewing. In the case of Cochrane, where a severe critic was dismissed from the board, one wonders just how objective this individual could be in evaluating the results of meta-analyses concerning drugs, the very focus of Cochrane's work, when he is reported to have “likened the pharmaceutical industry to ‘organized crime.’ ”5
Perhaps of even greater concern is a scenario in which a senior academic leader (eg, a department or division chair, or a journal editor) inappropriately advances or seriously impairs the career of another academic with whom the leader disagrees, absent an independent, objective evaluation of the scientific quality or value of the scholarship in question.
Although it is not difficult to make a claim of bias, whether as a positive or a negative assertion, it is most difficult to prove such a claim, or even to establish with reasonable certainty that bias was the major reason a given grant or manuscript was accepted or rejected.
Further, a favorable or unfavorable view of a research project or submitted publication may in reality reflect the biases of a collective of individuals rather than those of a single person. This unfortunate outcome may result from a group’s shared training or prior mentoring experiences, but the point is that such training may fail the test of open-minded as well as clinically valid scientific rigor. The concern here is that if one approaches a particular clinical subject with an inherent bias, the conclusions reached by an investigator (whether positive or negative) may matter more than the critical details of how the supporting analysis was conducted and reported. And that can be a serious, if not fatal, flaw.

Consider, for example, a peer-reviewed publication in a high-impact medical journal that asserted that the desire of a patient with cancer to employ complementary medicine was somehow tied to a “refusal of conventional therapy” and might result in decreased overall survival for individuals with “curable cancers.” (The opinions expressed here regarding the flaws in this manuscript and the justification for its publication are my own observations and do not represent the opinions of any other individual or organization.)
The authors of this report retrospectively examined the National Cancer Data Base involving almost 2 million individuals with nonmetastatic cancers of the breast, lung, prostate, and colon/rectum from 2004 to 2013.6 Within this patient population, the investigators found 258 (approximately 0.01%) who were classified as having “used other unproven treatments administered by nonmedical personnel,” which the investigators then elected to define as “complementary medicine.”
What possible justification can these investigators provide for a direct link between the 0.01% of individuals in this large database who used “unproven treatments administered by nonmedical personnel” with the now well-established 50% to more than 80% of patients with cancer who openly declare that they employ 1 or more of a wide variety of complementary/integrative medical strategies (including psychosocial and nutritional support, a focus on spirituality, etc) during their cancer journey?6
Given the widespread adoption of these techniques, it is remarkable that these investigators would attempt to ascribe the attitudes, beliefs, and outcomes of so many patients—conservatively speaking, 50% of the overall study population or at least 1 million people—to a total of 258 individuals.
However, these investigators are not deterred by this inconvenient detail (an intellectual conflict of interest?) and proceed to undertake a detailed statistical analysis involving this minute population. They ultimately declare: “In this study patients who received complementary medicine were more likely to refuse additional conventional cancer treatment and had a higher risk of death. The results suggest that mortality risk associated with complementary medicine was mediated by the refusal of conventional cancer therapy.”6
Based on the considerations highlighted above, how can this possibly be considered valid clinical investigation?
So the final question this commentary must ask is this: How could a paper with such severe methodological flaws and such stunning bias in its terminology have been accepted for publication in a high-impact, rigorously peer-reviewed medical journal? Although it is not possible to provide a definitive answer, one wonders whether this outcome has anything to do with a potential intellectual conflict of interest among those who decided to permit the manuscript’s publication.