This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Comparative Diagnostic Performance of a Multimodal Large Language Model (ChatGPT) versus a Dedicated ECG AI (ECG Buddy) in Detecting Myocardial Infarction from ECG Images (Preprint)
Citations: 0
Authors: 6
Year: 2025
Abstract
<sec> <title>BACKGROUND</title> Accurate and timely electrocardiogram (ECG) interpretation is critical for diagnosing myocardial infarction (MI) in emergency settings. Recent advances in multimodal Large Language Models (LLMs), such as Chat Generative Pre-trained Transformer (ChatGPT), have shown promise in clinical interpretation for medical imaging. However, whether these models analyze waveform patterns or simply rely on text cues remains unclear, underscoring the need for direct comparisons with dedicated ECG artificial intelligence (AI) tools. </sec> <sec> <title>OBJECTIVE</title> This study aimed to evaluate the diagnostic performance of ChatGPT, a general-purpose LLM, in detecting MI from ECG images and to compare its performance with that of ECG Buddy™, a dedicated AI-driven ECG analysis tool. </sec> <sec> <title>METHODS</title> This retrospective study evaluated and compared AI models for classifying MI using a publicly available 12-lead ECG dataset from Pakistan, categorizing cases into MI-positive (239 images) and MI-negative (689 images). ChatGPT (GPT-4o, version 2024-11-20) was queried with five MI confidence options, whereas ECG Buddy for Windows analyzed the images based on ST-elevation MI, acute coronary syndrome, and myocardial injury biomarkers. </sec> <sec> <title>RESULTS</title> Among 928 ECG recordings (25.8% MI-positive), ChatGPT achieved an accuracy of 65.95% (95% confidence interval [CI]: 62.80–69.00), area under the curve (AUC) of 57.34% (95% CI: 53.44–61.24), sensitivity of 36.40% (95% CI: 30.30–42.85), and specificity of 76.20% (95% CI: 72.84–79.33). However, ECG Buddy reached an accuracy of 96.98% (95% CI: 95.67–97.99), AUC of 98.80% (95% CI: 98.30–99.43), sensitivity of 96.65% (95% CI: 93.51–98.54), and specificity of 97.10% (95% CI: 95.55–98.22). DeLong’s test confirmed that ECG Buddy significantly outperformed ChatGPT (all P < .001). 
In an error analysis of 40 cases, ChatGPT provided clinically plausible explanations in only 7.5% of cases, whereas 35% were partially correct, 40% were completely incorrect, and 17.5% received no meaningful explanation. </sec> <sec> <title>CONCLUSIONS</title> LLMs such as ChatGPT underperform relative to specialized tools such as ECG Buddy in ECG image-based MI diagnosis. Further training may improve ChatGPT; however, domain-specific AI remains essential for clinical accuracy. The high performance of ECG Buddy underscores the importance of specialized models for achieving reliable and robust diagnostic outcomes. </sec>
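The abstract's headline ChatGPT metrics can be cross-checked from the stated class sizes: with 239 MI-positive and 689 MI-negative images, the reported sensitivity and specificity imply roughly 87 true positives and 525 true negatives. A minimal sketch of that arithmetic (the TP/TN counts are back-calculated from the abstract's percentages, not taken from the paper's confusion matrix):

```python
# Cross-check of ChatGPT's reported metrics using the class sizes
# given in the abstract. TP/TN are back-calculated assumptions,
# not published confusion-matrix values.
mi_positive, mi_negative = 239, 689      # dataset class sizes
tp = 87                                  # 36.40% of 239 true positives (assumed)
tn = 525                                 # 76.20% of 689 true negatives (assumed)

sensitivity = tp / mi_positive           # fraction of MI cases detected
specificity = tn / mi_negative           # fraction of non-MI cases cleared
accuracy = (tp + tn) / (mi_positive + mi_negative)

print(f"sensitivity {sensitivity:.2%}")  # 36.40%
print(f"specificity {specificity:.2%}")  # 76.20%
print(f"accuracy    {accuracy:.2%}")     # 65.95%
```

The three derived values reproduce the abstract's reported 36.40% sensitivity, 76.20% specificity, and 65.95% accuracy, so the figures are internally consistent.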
Related Works
A Real-Time QRS Detection Algorithm
1985 · 7,640 citations
An Overview of Heart Rate Variability Metrics and Norms
2017 · 6,477 citations
Power Spectrum Analysis of Heart Rate Fluctuation: A Quantitative Probe of Beat-to-Beat Cardiovascular Control
1981 · 5,066 citations
The impact of the MIT-BIH Arrhythmia Database
2001 · 4,524 citations
Decreased heart rate variability and its association with increased mortality after acute myocardial infarction
1987 · 3,994 citations