This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Assessment of ChatGPT’s Compliance with ESC-Acute Coronary Syndrome Management Guidelines at 30-Day Intervals
Citations: 6 · Authors: 2 · Year: 2024
Abstract
<b>Background:</b> Despite ongoing advancements in healthcare, acute coronary syndromes (ACS) remain a leading cause of morbidity and mortality. The 2023 European Society of Cardiology (ESC) guidelines have introduced significant improvements in ACS management. Concurrently, artificial intelligence (AI), particularly models like ChatGPT, is showing promise in supporting clinical decision-making and education. <b>Methods:</b> This study evaluates the performance of ChatGPT-v4 in adhering to ESC guidelines for ACS management over a 30-day interval. Based on ESC guidelines, a dataset of 100 questions was used to assess ChatGPT's accuracy and consistency. The questions were divided into binary (true/false) and multiple-choice formats. The AI's responses were initially evaluated and then re-evaluated after 30 days, using accuracy and consistency as primary metrics. <b>Results:</b> ChatGPT's accuracy in answering ACS-related binary and multiple-choice questions was evaluated at baseline and after 30 days. For binary questions, accuracy was 84% initially and 86% after 30 days, with no significant change (<i>p</i> = 0.564). Cohen's Kappa was 0.94, indicating excellent agreement. Multiple-choice question accuracy was 80% initially, improving to 84% after 30 days, also without significant change (<i>p</i> = 0.527). Cohen's Kappa was 0.93, reflecting similarly high consistency. These results suggest stable AI performance with minor fluctuations. <b>Conclusions:</b> Despite variations in performance on binary and multiple-choice questions, ChatGPT shows significant promise as a clinical support tool in ACS management. However, it is crucial to consider limitations such as fluctuations and hallucinations, which could lead to severe issues in clinical applications.
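The abstract's consistency metric, Cohen's Kappa, measures agreement between the baseline and 30-day answer sets beyond what chance would produce. A minimal sketch of that computation is below; the ten-question answer vectors are hypothetical illustrations, not data from the study (which used 100 questions).

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two paired rating sequences,
    e.g. an AI model's answers at baseline vs. after 30 days."""
    assert len(ratings_a) == len(ratings_b) and ratings_a
    n = len(ratings_a)
    # Observed agreement: fraction of items rated identically both times.
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Expected agreement by chance, from each sequence's marginal label counts.
    counts_a, counts_b = Counter(ratings_a), Counter(ratings_b)
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical "T"/"F" answers to 10 binary questions at baseline and day 30.
baseline = ["T", "T", "F", "T", "F", "T", "T", "F", "T", "T"]
day30    = ["T", "T", "F", "T", "F", "T", "T", "F", "F", "T"]
print(round(cohens_kappa(baseline, day30), 2))  # → 0.78
```

A kappa near the study's reported 0.93–0.94 would require near-perfect test–retest agreement; values above roughly 0.8 are conventionally read as excellent agreement.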
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,260 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,116 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,493 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,438 citations