This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Large language models in oncology: a review
Citations: 14
Authors: 8
Year: 2025
Abstract
Large language models (LLMs) have demonstrated emergent human-like capabilities in natural language processing, leading to enthusiasm about their integration in healthcare environments. In oncology, where synthesising complex, multimodal data is essential, LLMs offer a promising avenue for supporting clinical decision-making, enhancing patient care, and accelerating research. This narrative review aims to highlight the current state of LLMs in medicine; applications of LLMs in oncology for clinicians, patients, and translational research; and future research directions. Clinician-facing LLMs enable clinical decision support and automate data extraction from electronic health records and literature to inform decision-making. Patient-facing LLMs offer the potential for disseminating accessible cancer information and psychosocial support. However, LLMs face limitations that must be addressed before clinical adoption, including risks of hallucinations, poor generalisation, ethical concerns, and integration challenges. We propose the incorporation of LLMs within compound artificial intelligence systems to facilitate adoption and efficiency in oncology. This narrative review serves as a non-technical primer for clinicians to understand, evaluate, and participate as active users who can inform the design and iterative improvement of LLM technologies deployed in oncology settings. While LLMs are not intended to replace oncologists, they can serve as powerful tools to augment clinical expertise and patient-centred care, reinforcing their role as a valuable adjunct in the evolving landscape of oncology.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,231 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,084 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,444 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,423 citations
Authors
Institutions
- University of Toronto (CA)
- Princess Margaret Cancer Centre (CA)
- McMaster University (CA)
- University of California, San Francisco (US)
- BC Cancer Agency (CA)
- University of British Columbia (CA)
- Machine Intelligence Research Institute (US)
- Harvard University (US)
- Boston Children's Hospital (US)
- Mass General Brigham (US)
- Dana-Farber Cancer Institute (US)
- Spinal Cord Injury BC (CA)