This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Efficacy of Fine-Tuned Large Language Model in CT Protocol Assignment as Clinical Decision-Supporting System
Citations: 12 · Authors: 9 · Year: 2025
Abstract
Accurate CT protocol assignment is crucial for optimizing medical imaging procedures. The integration of large language models (LLMs) may be helpful, but their efficacy as a clinical decision support system for protocoling tasks remains unknown. This study aimed to develop and evaluate a fine-tuned LLM specifically designed for CT protocoling, and to assess its performance, both standalone and in concurrent use, in terms of effectiveness and efficiency within radiological workflows. This retrospective study included radiology tests for contrast-enhanced chest and abdominal CT examinations (2829/498/941 for training/validation/testing). Inputs comprised the clinical indication section, patient age, and anatomic coverage. The LLM was fine-tuned for 15 epochs, with the best model selected by macro sensitivity on the validation set. Performance was then evaluated on 800 randomly selected cases from the test dataset. Two radiology residents and two radiologists assigned CT protocols with and without referencing the LLM's output to evaluate its efficacy as a clinical decision support system. The LLM exhibited high accuracy metrics, with top-1 and top-2 accuracies of 0.923 and 0.963, respectively, and a macro sensitivity of 0.907. It processed each case in an average of 0.39 s. As a clinical decision support tool, the LLM improved accuracy for both residents (0.913 vs. 0.936) and radiologists (0.920 vs. 0.926, without and with the LLM, respectively), with the improvement for residents being statistically significant (p = 0.02). Additionally, it reduced reading times by 14% for residents and 12% for radiologists. These results indicate the potential of LLMs to improve CT protocoling efficiency and diagnostic accuracy in radiological practice.
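The headline metrics in the abstract (top-1/top-2 accuracy and macro sensitivity) can be reproduced from per-case class scores. The sketch below is illustrative only, using toy data and hypothetical class scores, and is not the authors' evaluation code; "protocol" labels are simply integer class indices here.

```python
def topk_accuracy(probs, labels, k):
    """Fraction of cases whose true protocol lies among the model's k highest-scored classes."""
    hits = 0
    for scores, y in zip(probs, labels):
        topk = sorted(range(len(scores)), key=lambda i: scores[i], reverse=True)[:k]
        hits += y in topk
    return hits / len(labels)

def macro_sensitivity(preds, labels, n_classes):
    """Unweighted mean of per-class recall, so rare protocols count as much as common ones."""
    recalls = []
    for c in range(n_classes):
        idx = [i for i, y in enumerate(labels) if y == c]
        if idx:  # skip classes absent from the evaluation set
            recalls.append(sum(preds[i] == c for i in idx) / len(idx))
    return sum(recalls) / len(recalls)

# Toy example: 5 cases, 3 hypothetical CT protocols (class scores made up for illustration)
probs = [[0.70, 0.20, 0.10],
         [0.15, 0.80, 0.05],
         [0.35, 0.40, 0.25],
         [0.20, 0.25, 0.55],
         [0.50, 0.40, 0.10]]
labels = [0, 1, 0, 2, 1]
preds = [max(range(len(s)), key=s.__getitem__) for s in probs]  # top-1 prediction per case

print(topk_accuracy(probs, labels, k=1))            # 0.6
print(topk_accuracy(probs, labels, k=2))            # 1.0
print(macro_sensitivity(preds, labels, n_classes=3))  # 0.666...
```

Macro sensitivity averages recall over protocol classes rather than cases, which is why the study could use it for model selection without the metric being dominated by the most frequently requested protocols.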
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,200 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,051 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,416 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,410 citations