OpenAlex · Updated hourly · Last updated: 13.03.2026, 21:07

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Leveraging Large Language Models to Generate Clinical Histories for Oncologic Imaging Requisitions

2025 · 25 citations · Radiology

25 citations · 15 authors · 2025

Abstract

Background: Clinical information improves imaging interpretation, but physician-provided histories on requisitions for oncologic imaging often lack key details.

Purpose: To evaluate large language models (LLMs) for automatically generating clinical histories for oncologic imaging requisitions from clinical notes and to compare them with original requisition histories.

Materials and Methods: In total, 207 patients with CT performed at a cancer center from January to November 2023 and with an electronic health record clinical note coinciding with the ordering date were randomly selected. A multidisciplinary team informed the selection of 10 parameters important for an oncologic imaging history, including primary oncologic diagnosis, treatment history, and acute symptoms. Clinical notes were independently reviewed to establish the reference standard for the presence of each parameter. After prompt engineering with seven patients, GPT-4 (version 0613; OpenAI) was prompted on April 9, 2024, to automatically generate structured clinical histories for the 200 remaining patients. Using the reference standard, LLM extraction performance was calculated (recall, precision, F1 score). LLM-generated and original requisition histories were compared for completeness (proportion including each parameter), and 10 radiologists performed pairwise comparisons for quality, preference, and subjective likelihood of harm.

Results: For the 200 LLM-generated histories, GPT-4 performed well, extracting oncologic parameters from clinical notes (F1 = 0.983). Compared with original requisition histories, LLM-generated histories more frequently included parameters critical for radiologist interpretation, including primary oncologic diagnosis (99.5% vs 89% [199 and 178 of 200 histories, respectively]; <i>P</i> < .001), acute or worsening symptoms (15% vs 4% [29 and seven of 200]; <i>P</i> < .001), and relevant surgery (61% vs 12% [122 and 23 of 200]; <i>P</i> < .001). Radiologists preferred LLM-generated histories for imaging interpretation (89% vs 5%, with 7% rated equal; <i>P</i> < .001), indicating that they would enable more complete interpretation (86% vs 0%, with 15% rated equal; <i>P</i> < .001) and have a lower likelihood of harm (3% vs 55%, with 42% rated neither; <i>P</i> < .001).

Conclusion: An LLM enabled accurate, automated clinical histories for oncologic imaging from clinical notes. Compared with original requisition histories, LLM-generated histories were more complete and were preferred by radiologists for imaging interpretation and perceived safety.

© RSNA, 2025. <i>Supplemental material is available for this article.</i> See also the editorial by Tavakoli and Kim in this issue.
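The abstract summarizes extraction performance with an F1 score (0.983), the harmonic mean of precision and recall. As a minimal illustrative sketch of that relationship (the precision and recall values below are hypothetical placeholders, not figures reported in the study):

```python
def f1_score(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# Hypothetical per-parameter values chosen only to illustrate the formula;
# the paper reports an overall F1 of 0.983.
print(round(f1_score(0.98, 0.986), 3))  # → 0.983
```

Because the harmonic mean is dominated by the smaller of the two inputs, a high F1 such as 0.983 implies that both precision and recall were high.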
