This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Quantitative Evaluation of Large Language Models to Streamline Radiology Report Impressions: A Multimodal Retrospective Analysis
107
Citations
6
Authors
2024
Year
Abstract
Background: The complex medical terminology of radiology reports may cause confusion or anxiety for patients, especially given increased access to electronic health records. Large language models (LLMs) can potentially simplify radiology report readability.

Purpose: To compare the performance of four publicly available LLMs (ChatGPT-3.5, ChatGPT-4, Bard [now known as Gemini], and Bing) in producing simplified radiology report impressions.

Materials and Methods: In this retrospective comparative analysis of the four LLMs (accessed July 23 to July 26, 2023), the Medical Information Mart for Intensive Care (MIMIC)-IV database was used to gather 750 anonymized radiology report impressions covering a range of imaging modalities (MRI, CT, US, radiography, mammography) and anatomic regions. Three distinct prompts were employed to assess the LLMs' ability to simplify report impressions. The first prompt (prompt 1) was "Simplify this radiology report." The second prompt (prompt 2) was "I am a patient. Simplify this radiology report." The last prompt (prompt 3) was "Simplify this radiology report at the 7th grade level." Each prompt was followed by the radiology report impression and was queried once. The primary outcome was simplification as assessed by readability score. Readability was assessed using the average of four established readability indexes. The nonparametric Wilcoxon signed-rank test was applied to compare reading grade levels across LLM output.

Results: All four LLMs simplified radiology report impressions across all prompts tested (<i>P</i> < .001). Within prompts, differences were found between LLMs. Providing the context of being a patient or requesting simplification at the seventh-grade level reduced the reading grade level of output for all models and prompts (except prompt 1 to prompt 2 for ChatGPT-4) (<i>P</i> < .001).
Conclusion: Although the success of each LLM varied depending on the specific prompt wording, all four models simplified radiology report impressions across all modalities and prompts tested. © RSNA, 2024 <i>Supplemental material is available for this article.</i> See also the editorial by Rahsepar in this issue.
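The primary outcome above rests on readability grade-level scoring. As a minimal illustrative sketch (not the paper's exact pipeline; the four specific indexes it averaged are not named on this page), the snippet below computes the Flesch-Kincaid grade level, one widely used readability index, for an original and a simplified impression, using a crude vowel-group syllable heuristic:

```python
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count groups of consecutive vowels; adequate for a sketch.
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))

def flesch_kincaid_grade(text: str) -> float:
    # Standard Flesch-Kincaid grade-level formula:
    # 0.39 * (words/sentences) + 11.8 * (syllables/words) - 15.59
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * syllables / len(words) - 15.59)

# Hypothetical example impressions, for illustration only.
original = "There is interval development of a moderate left pleural effusion."
simplified = "You have some fluid around your left lung."

print(flesch_kincaid_grade(original))    # higher grade level
print(flesch_kincaid_grade(simplified))  # lower grade level
```

In the study, four such indexes were averaged per impression, and the paired original-versus-simplified grade levels were compared with the Wilcoxon signed-rank test (available in SciPy as `scipy.stats.wilcoxon`).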
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,260 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,116 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,493 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,438 citations