This is an overview page with metadata for this scientific work. The full article is available from the publisher.
AI-Automated Operative Risk Stratification for Severe Aortic Stenosis: A Proof-of-Concept Study
Citations: 0 · Authors: 10 · Year: 2025
Abstract
<b>Background:</b> Accurate operative risk stratification is essential for treatment selection in severe aortic stenosis. We developed an automated workflow using large language models (LLMs) to replicate Heart Team risk assessment. <b>Methods:</b> We retrospectively analyzed 231 consecutive patients with severe aortic stenosis evaluated by multidisciplinary Heart Teams (January 2022-December 2024). An automated system using GPT-4o was developed, comprising the following: (1) structured data extraction from clinical dossiers; (2) EuroSCORE II calculation via two methods (algorithmic vs. LLM-based); (3) clinical vignette generation; and (4) risk stratification comparing EuroSCORE-based thresholds versus guideline-integrated LLM approaches with and without EuroSCORE values. The primary endpoint was the risk stratification accuracy of each method compared to Heart Team decisions. <b>Results:</b> Mean age was 79.5 ± 7.7 years, with 58.4% female. The automated workflow processed patients in 32.6 ± 6.4 s. The LLM-calculated EuroSCORE II showed a lower mean difference from Heart Team values (-1.42%, 95% CI -2.32 to -0.53) versus algorithmic calculation (-1.88%, 95% CI -2.38 to -1.38). For risk stratification, the guideline-integrated LLM without EuroSCORE achieved the highest accuracy (90.0%) and AUC (0.93), outperforming both the EuroSCORE-based approach (accuracy 50.2% for high-risk, AUC 0.63) and the guideline-integrated LLM with EuroSCORE (accuracy 82.4%, AUC 0.76). However, systematic hallucinations occurred for cardiovascular risk factors when data were missing. <b>Conclusions:</b> LLMs accurately calculated EuroSCORE II and achieved 90% concordance with multidisciplinary Heart Team decisions. However, hallucinations, reproducibility concerns, and the absence of clinical outcome validation preclude direct clinical application.
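The Results compare the two EuroSCORE II calculation methods by their mean difference from Heart Team values with 95% confidence intervals. A minimal sketch of that paired comparison is below; the function name `mean_diff_ci`, the synthetic inputs, and the normal-approximation CI are my assumptions for illustration, not the authors' code or their statistical software:

```python
import math

def mean_diff_ci(estimates, reference, z=1.96):
    """Mean of the paired differences (estimate - reference) with a
    95% CI via the normal approximation: mean ± z * SE."""
    diffs = [e - r for e, r in zip(estimates, reference)]
    n = len(diffs)
    mean = sum(diffs) / n
    # Sample variance (n - 1 denominator) and standard error of the mean.
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    se = math.sqrt(var / n)
    return mean, mean - z * se, mean + z * se

# Hypothetical risk estimates (%) against reference Heart Team values.
llm_scores = [2.1, 4.8, 1.5, 6.0]
heart_team = [3.0, 5.5, 2.4, 7.1]
m, lo, hi = mean_diff_ci(llm_scores, heart_team)
print(f"mean difference {m:.2f}%, 95% CI {lo:.2f} to {hi:.2f}")
```

A negative mean difference with a CI excluding zero, as reported for both methods in the abstract, indicates systematic underestimation relative to the Heart Team values.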
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,336 cit.
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,207 cit.
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,607 cit.
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 cit.
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,476 cit.