This is an overview page with metadata for this scientific publication. The full article is available from the publisher.
Large Language Models Encode Radiation Oncology Domain Knowledge: Performance on the American College of Radiology Standardized Examination
Citations: 7
Authors: 12
Year: 2024
Abstract
Introduction: The integration of large language models (LLMs) into medical education represents a significant paradigm shift, offering transformative potential in how medical knowledge is accessed and assimilated. These models have not yet been systematically trained or validated on complex subspecialty medical examinations. This study explores the performance of seven major LLMs in radiation oncology. Materials and Methods: The 2021 American College of Radiology (ACR) Radiation Oncology In-Training Examination (TXIT) was used to evaluate the performance of seven LLMs: OpenAI's GPT-3.5-turbo, GPT-4, and GPT-4-turbo; Meta's Llama-2 models (7-billion-, 13-billion-, and 70-billion-parameter versions); and Google's PaLM-2-text-bison. The ACR provides publicly available national scoring for this examination. The examination comprised 300 questions across four major domains: clinical, biology, physics, and statistics. The examination was submitted to each LLM via its application programming interface (API). LLM-generated answers were analyzed by domain and compared with radiation oncology trainee performance. The total cost of token inputs and outputs was aggregated and analyzed. Results: LLMs showed varied performance, with OpenAI's GPT-4-turbo leading at 74.2% correct answers and all three Llama-2 models underperforming (26.2–43.3% correct). LLMs generally excelled in the statistics domain (93.0–100%) but were less effective in clinical areas (37.0–68.0%), with the exception of GPT-4-turbo, which performed comparably (68.0%) with upper-level radiation oncology trainees (PGY4–5, 64.1–68.3%) and superiorly to lower-level trainees (PGY2–3, 51.6–61.6%). Notably, GPT-4-turbo demonstrated a 7.0% improvement in clinical accuracy over its predecessor, GPT-4. LLMs scored lowest in gastrointestinal, genitourinary, and gynecologic disease sites and highest in bone and soft tissue, central nervous system, and head and neck.
Overall costs of LLM inputs and outputs were modest at $2.63 across all seven models. Conclusion: GPT-4-turbo demonstrates clinical accuracy comparable with upper-level trainees and superior to lower-level trainees. Score discrepancies across disease-site domains may be due to data availability, the complexity of the medical conditions, and the quality and quantity of training data sets. Future research will need to evaluate the performance of models fine-tuned on clinical oncology data. This study also underscores the need for rigorous validation of LLM-generated information against established medical literature and expert consensus, necessitating expert oversight in their application in medical education and practice.
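The analysis described in the abstract (per-domain accuracy plus aggregated token cost) can be sketched as follows. This is a minimal illustration, not the study's pipeline: the `results` rows, domain labels, and per-token prices are hypothetical placeholders, not the examination data or actual vendor pricing.

```python
from collections import defaultdict

# Hypothetical graded results: (domain, is_correct, input_tokens, output_tokens).
# Domain names mirror the exam's four areas; all numbers are illustrative only.
results = [
    ("clinical", True, 120, 15),
    ("clinical", False, 110, 20),
    ("biology", True, 90, 10),
    ("physics", True, 100, 12),
    ("statistics", True, 80, 8),
]

# Assumed per-token prices (illustrative, not real API pricing).
PRICE_IN = 0.01 / 1000   # $ per input token
PRICE_OUT = 0.03 / 1000  # $ per output token

def summarize(rows):
    """Return per-domain accuracy and the total token cost across all questions."""
    tally = defaultdict(lambda: [0, 0])  # domain -> [correct, answered]
    cost = 0.0
    for domain, correct, tok_in, tok_out in rows:
        tally[domain][0] += int(correct)
        tally[domain][1] += 1
        cost += tok_in * PRICE_IN + tok_out * PRICE_OUT
    accuracy = {d: c / n for d, (c, n) in tally.items()}
    return accuracy, cost

accuracy, total_cost = summarize(results)
```

Grading against an answer key and grouping by domain in this way also makes it straightforward to compare model scores with published trainee percentiles per domain.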
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,324 cit.
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,189 cit.
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,588 cit.
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 cit.
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,470 cit.