This is an overview page with metadata for this scientific work. The full article is available from the publisher.
A comparative study of ChatGPT 4o and DeepSeek in addressing CIED infection-related questions: Accuracy and readability assessment
0
Citations
8
Authors
2026
Year
Abstract
This study aimed to compare the effectiveness of 2 artificial intelligence (AI) models, ChatGPT 4o and DeepSeek, in responding to questions about infections associated with cardiovascular implantable electronic devices (CIED). The focus was on evaluating their accuracy and readability, which are critical for their use in clinical settings.

A comparative analysis was conducted using 30 questions based on the American Heart Association's guidelines for CIED-related infections. Each question was posed to both AI models under 2 conditions: once without additional context and once with guideline-based prompts. Accuracy was assessed by 2 cardiovascular experts using a 4-level grading scale. Readability was measured using the Flesch-Kincaid Grade score and word count.

Without guideline prompts, ChatGPT 4o provided comprehensive answers for 24 of 30 questions (80.00%), with 5 correct but incomplete answers (16.67%) and 1 partially correct answer (3.33%). DeepSeek also provided comprehensive answers for 24 questions (80.00%), with 6 correct but incomplete answers (20.00%). With guideline prompts, ChatGPT 4o's comprehensive answer rate increased to 93.33% (28/30), while DeepSeek's rose to 90.00% (27/30); no significant difference in overall accuracy was found (P = .34). In terms of readability, ChatGPT 4o had a higher word count (859.10 ± 235.90) than DeepSeek (526.27 ± 100.45), a statistically significant difference (P < .01). The Flesch-Kincaid Grade score for ChatGPT 4o (15.40 ± 1.18) was also higher than DeepSeek's (13.91 ± 1.42), indicating more complex responses (P < .01). With guideline prompts, both models showed reduced verbosity, with ChatGPT 4o's word count dropping to 624.00 ± 249.01 and DeepSeek's to 549.43 ± 117.40; however, this change was not statistically significant (P = .13). Similarly, slight improvements in readability with guidelines were observed for both models, but these were not statistically significant (P = .11).
Both AI models demonstrated the ability to provide accurate and clinically relevant information for managing CIED infections. The use of guideline-based prompts significantly improved the completeness of their responses. ChatGPT 4o provided more detailed answers, while DeepSeek produced more concise, potentially easier-to-understand outputs.
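The Flesch-Kincaid Grade score reported in the abstract follows the standard published formula. Below is a minimal Python sketch of that computation; the syllable counter is a crude vowel-group heuristic assumed for illustration, since the study does not state which tooling it used:

```python
# Sketch of the Flesch-Kincaid Grade Level metric used in the study.
# FKGL = 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
# The syllable counter here is an assumed, simplistic English heuristic.
import re


def count_syllables(word: str) -> int:
    # Approximate syllables as runs of vowels; always at least 1 per word.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))


def flesch_kincaid_grade(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words))
            - 15.59)
```

Higher scores correspond to text requiring a higher US school-grade reading level, which is why ChatGPT 4o's score of roughly 15.4 versus DeepSeek's 13.9 is interpreted as more complex output.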
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,250 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,109 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,482 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,434 citations