This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Assessing the Quality of Artificial Intelligence Explanations on Atrial Fibrillation: A Comparative Analysis of ChatGPT and Google Gemini (Preprint)
Citations: 0
Authors: 8
Year: 2025
Abstract
BACKGROUND
Atrial fibrillation (AF), a common arrhythmia, is a major stroke risk factor, making patient education critical. Artificial intelligence (AI) platforms like Google Gemini and ChatGPT are emerging tools for medical education.

OBJECTIVE
This study aimed to (1) assess the quality of ChatGPT and Google Gemini's explanations of AF and its treatment, (2) compare responses from both platforms, and (3) analyze differences in interpretation between cardiologists and non-medical professionals.

METHODS
On September 6, 2024, the prompt "Explain atrial fibrillation and how to treat it to a patient" was entered into ChatGPT and Google Gemini. A survey based on PEMAT-P and DISCERN criteria was completed by 11 cardiologists and 17 non-medical professionals. Averages and standard deviations were calculated and compared using the Wilcoxon signed-rank test.

RESULTS
No significant quality difference was observed between ChatGPT and Google Gemini. Cardiologists rated bias lower (3.82 vs. 4.33, p=0.04) and explanations of the consequences of no treatment higher (2.85 vs. 1.86, p=0.005) compared to non-medical professionals. Visual cues, informative headers, concise sections, actionable advice, and direct addressing of the reader received significantly higher ratings from cardiologists.

CONCLUSIONS
The comparable quality of ChatGPT and Google Gemini suggests that both are viable for AF education. Cardiologists' higher ratings for critical aspects of explanation highlight a gap in patient understanding, underscoring the need for clearer AI-driven educational tools.

CLINICAL TRIAL
n/a
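The METHODS section compares paired ratings with the Wilcoxon signed-rank test. As a minimal sketch of how that statistic is computed, the following pure-Python function derives the signed-rank statistic W+ from paired ratings; the sample data are hypothetical and are not the study's data (analyses like this one would typically use a library such as `scipy.stats.wilcoxon`, which also returns a p-value).

```python
def wilcoxon_signed_rank(x, y):
    """Return W+, the sum of ranks of positive paired differences.

    Zero differences are discarded; tied absolute differences
    receive the average of their ranks, per the standard procedure.
    """
    diffs = [a - b for a, b in zip(x, y) if a != b]
    # Order indices by absolute difference, then assign average ranks to ties.
    order = sorted(range(len(diffs)), key=lambda i: abs(diffs[i]))
    ranks = [0.0] * len(diffs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and abs(diffs[order[j + 1]]) == abs(diffs[order[i]]):
            j += 1
        avg_rank = (i + j) / 2 + 1  # average of 1-based positions i..j
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    return sum(r for d, r in zip(diffs, ranks) if d > 0)

# Hypothetical paired Likert-style ratings (ChatGPT vs. Gemini) from 6 raters:
chatgpt = [4, 3, 5, 4, 2, 5]
gemini = [3, 4, 4, 4, 3, 4]
print(wilcoxon_signed_rank(chatgpt, gemini))  # → 9.0
```

Under the null hypothesis of no systematic difference between the two platforms, W+ is compared against its reference distribution (or a normal approximation for larger samples) to obtain the p-values reported in the abstract.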
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,391 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,257 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,685 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,501 citations