Artificial Intelligence and the Future of Cancer Care: The Good, Bad and Potentially Very Ugly

Oncology Live®, Vol. 24, No. 17


Maurie Markman, MD

It is difficult to browse a major medical journal these days and not find an article, commentary, or editorial discussing the profound implications for clinical science and health care delivery of the stunning recent advances in artificial intelligence (AI).

These discussions range from the somewhat frivolous, including how an AI program scores on a medical licensing examination, to algorithms that might be employed to improve the interpretation of radiographic images or pathology material, to the potential for impressively written but inaccurate or completely erroneous (and possibly harmful) information to be widely disseminated on various social media platforms. It is important to note that such misinformation may be inadvertent, perhaps resulting from use of a less well-developed AI program, or quite deliberate, with the overt intent to create societal dysfunction.1

While it is likely that both patients and physicians will be attracted to the potential ability of AI to quickly provide answers to specific and often complex disease- and management-related questions, appropriate concerns have been raised in the medical literature regarding privacy, the current lack of regulatory or quality oversight, and the potential for liability risks.2-5

Science-related organizations, from academic/peer-reviewed journals to funding agencies, have had to rather quickly develop policies on whether AI-assisted publication,6-8 grant writing, and peer review will be permitted, and it is likely that there will be further developments in these arenas as the quality of AI products improves and it becomes more difficult to distinguish human from nonhuman productions.

Turning to the domain of clinical medicine, concern has been raised (supported by reports in the peer-reviewed literature) that when clinicians employ, or rely upon, AI decision support, their non–AI-assisted diagnostic skills may suffer, especially if great care is not taken to appreciate the limitations of the data used to establish the decision-support algorithms.9

Finally, it should be noted that other even more worrisome concerns have been raised about AI strategies within the scientific and broader societal domains, including the chilling suggestion that such highly sophisticated tools could be employed to create bioweapons.10

One of the more concerning aspects of the accelerating complexity of AI products is that even their developers admit they do not fully understand the inner workings and self-evolving potential of what they have created.

In a recently published, highly provocative book, "The Age of AI: And Our Human Future," the authors note the following:11

"At the same time, a network platform’s AI follows a logic that is nonhuman and, in many ways, inscrutable to humans. For example, in practice, when an AI-enabled network platform is assessing an image, social media post, or search query, humans may not understand precisely
how the AI operates in that particular situation. While Google’s engineers know that their AI-enabled search func- tion produced clearer results than it would have without AI, they could not always explain why one particular result was ranked higher than another. To a large extent, AI is judged by the utility of its results, not the process used to reach those results. This signals a shift in priorities from earlier eras, when each stem in a mental or mechanical process was either experienced by a human being (a thought, a conver- sation, an administrative process) or could be paused, inspected, and repeated by human beings.”

Yet even after openly acknowledging the appropriate concerns and perhaps frightening unknowns highlighted above, there remains legitimate and highly meaningful potential for AI to assist clinicians by providing critical support in multiple decision-making processes within clinical medicine in general and oncology in particular.

The words in the preceding sentence have been carefully selected. There is no intent here to suggest that a commercial AI product will ever be responsible for making a pathological diagnosis of cancer or for independently determining the final report of a radiographic study. Rather, a legitimate goal will be to effectively assist responsible clinicians (eg, pathologists, radiologists, infectious disease specialists, intensivists) as they strive to deliver efficient, medically optimal, and error-free care.

Clinical medicine is an art as much as it is a science, and the subjective nature of symptoms and individual psychological responses to illness can be as important in diagnostic and treatment decisions as objective findings. Thus, in the opinion of this commentator, AI may complement but will almost certainly never replace the well-trained and experienced clinician.12

However, despite the efforts of even the most conscientious clinicians, diagnostic errors with the potential for serious harm do occur, as noted in a recent commentary regarding misdiagnosis among patients evaluated in the emergency department.13 One can envision the use of AI to assist clinicians in triaging, diagnosing, and managing the multiple potential scenarios, from the most serious to the mundane, encountered in this busy health care environment.

While many examples of the practical use of AI within oncology can be highlighted, particularly in the realms of anatomic pathology14 and radiographic screening,15 a highly relevant future challenge for AI will be to help reduce the recognized risk for the misdiagnosis of cancer.16

References

  1. Kidd C, Birhane A. How AI can distort human beliefs. Science. 2023;380(6651):1222-1223. doi:10.1126/science.adi0248
  2. Kanter GP, Packel EA. Health care privacy risks of AI chatbots. JAMA. 2023;330(4):311-312. doi:10.1001/jama.2023.9618
  3. Marks M, Haupt CE. AI chatbots, health privacy, and challenges to HIPAA compliance. JAMA. 2023;330(4):309-310. doi:10.1001/jama.2023.9458
  4. Minssen T, Vayena E, Cohen G. The challenges for regulating medical use of ChatGPT and other large language models. JAMA. 2023;330(4):315-316. doi:10.1001/jama.2023.9651
  5. Duffourc M, Gerke S. Generative AI in health care and liability risks for physicians and safety concerns for patients. JAMA. 2023;330(4):313-314. doi:10.1001/jama.2023.9630
  6. Brainard J. Journals take up arms against AI-written text. Science. 2023;379(6634):740-741. doi:10.1126/science.adh2762
  7. Flanagin A, Bibbins-Domingo K, Berkwits M, Christiansen SL. Nonhuman "authors" and implications for the integrity of scientific publication and medical knowledge. JAMA. 2023;329(8):637-639. doi:10.1001/jama.2023.1344
  8. Kaiser J. Funding agencies say no to AI peer review. Science. 2023;381(6655):261. doi:10.1126/science.adj8309
  9. Cabitza F, Rasoini R, Gensini GF. Unintended consequences of machine learning in medicine. JAMA. 2017;318(6):517-518. doi:10.1001/jama.2017.7797
  10. Service RF. Could chatbots help devise the next pandemic virus? Science. 2023;380(6651):1211. doi:10.1126/science.adj3377
  11. Kissinger HA, Schmidt E, Huttenlocher D. The Age of AI: And Our Human Future. Little, Brown & Company; 2021.
  12. Kulkarni PA, Singh H. Artificial intelligence in clinical diagnosis: opportunities, challenges, and hype. JAMA. 2023;330(4):317-318. doi:10.1001/jama.2023.11440
  13. Edlow JA, Pronovost PJ. Misdiagnosis in the emergency department: time for a system solution. JAMA. 2023;329(8):631-632. doi:10.1001/jama.2023.0577
  14. Park S, Ock C-Y, Kim H, et al. Artificial intelligence-powered spatial analysis of tumor-infiltrating lymphocytes as complementary biomarker for immune checkpoint inhibitor in non-small-cell lung cancer. J Clin Oncol. 2022;40(17):1916-1928. doi:10.1200/JCO.21.02010
  15. Lang K, Josefsson V, Larsson AM, et al. Artificial intelligence-supported screen reading versus standard double reading in the Mammography Screening with Artificial Intelligence trial (MASAI): a clinical safety analysis of a randomized, controlled, non-inferiority, single-blinded, screening accuracy study. Lancet Oncol. 2023;24(8):936-944. doi:10.1016/S1470-2045(23)00298-X
  16. Sekeres MA. Are you sure you have cancer? Wall Street Journal. August 25, 2023. Accessed September 18, 2023. https://www.wsj.com/articles/are-you-sure-you-have-cancer-diagnosis-drug-treatment-trial-drug-383cb293