OpenAlex · Updated hourly · Last updated: 12.03.2026, 07:31

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Evaluating Large Language Models in Echocardiography Reporting: Opportunities and Challenges

2024 · 10 citations · Open Access

10 citations · 14 authors · Year: 2024

Abstract

Background: The increasing need for diagnostic echocardiography (echo) tests presents challenges in preserving the quality and promptness of reports. While Large Language Models (LLMs) have proven effective in summarizing clinical texts, their application in echo remains underexplored.

Aims: To evaluate open-source LLMs for echo report summarization.

Methods: Adult echo studies conducted at the Mayo Clinic from January 1, 2017, to December 31, 2017, were categorized into two groups: a development set (all Mayo locations except Arizona) and an Arizona validation set. We adapted open-source LLMs (Llama-2, MedAlpaca, Zephyr, and Flan-T5) using In-Context Learning (ICL) and Quantized Low-Rank Adaptation (QLoRA) fine-tuning to summarize echo reports from “Findings” to “Impressions.” Against cardiologist-generated Impressions, model performance was assessed both quantitatively with automatic metrics and qualitatively by cardiologists.

Results: The development dataset included 97,506 reports from 71,717 unique patients, predominantly male (55.4%), with an average age of 64.3±15.8 years. EchoGPT, a QLoRA fine-tuned Llama-2 model, outperformed the other LLMs with win rates ranging from 87% to 99% across automatic metrics, and produced reports comparable to cardiologists' in qualitative review: it was significantly preferred for conciseness (p < 0.001), with no significant preference in completeness, correctness, or clinical utility. Correlations between automatic and human metrics were fair to modest, the strongest being RadGraph F1 versus clinical utility (r = 0.42), and the automatic metrics were insensitive (0-5% drop) to changes in measurement numbers.

Conclusions: EchoGPT can generate draft reports for human review and approval, helping to streamline the workflow. However, scalable evaluation approaches dedicated to echo reports remain necessary.

Clinical Perspectives

1. What is new? This study evaluated multiple open-source LLMs and different model adaptation methods for echocardiography report summarization. The resulting system, EchoGPT, can generate echo reports comparable in quality to those of cardiologists. Future metrics for echo report quality should emphasize factual correctness, especially for numerical measurements.

2. What are the clinical implications? The EchoGPT system demonstrated the potential of introducing LLMs into echocardiography practice to generate draft reports for human review and approval.
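The evaluation described above scores model-drafted Impressions against cardiologist-written references with automatic metrics and then reports pairwise win rates between systems. As a rough illustration only — the paper's actual metric suite is richer (e.g. RadGraph F1), and the report snippets and scores below are hypothetical — a unigram-overlap F1 and a win-rate calculation can be sketched as:

```python
from collections import Counter

def rouge1_f1(candidate: str, reference: str) -> float:
    """Unigram-overlap F1 between a drafted and a reference Impression."""
    cand = Counter(candidate.lower().split())
    ref = Counter(reference.lower().split())
    overlap = sum((cand & ref).values())  # clipped unigram matches
    if overlap == 0:
        return 0.0
    precision = overlap / sum(cand.values())
    recall = overlap / sum(ref.values())
    return 2 * precision * recall / (precision + recall)

def win_rate(scores_a: list[float], scores_b: list[float]) -> float:
    """Fraction of paired reports where system A scores strictly higher than B."""
    wins = sum(a > b for a, b in zip(scores_a, scores_b))
    return wins / len(scores_a)

# Hypothetical example texts, not drawn from the study data.
draft = "normal left ventricular size and systolic function"
reference = "normal left ventricular size with normal systolic function"
score = rouge1_f1(draft, reference)
```

A metric like this rewards lexical overlap but, as the abstract notes, such scores can be insensitive to changes in numerical measurements, which motivates fact-oriented metrics for echo reports.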

Similar works