This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
EXPERT EVALUATION OF ARTIFICIAL INTELLIGENCE GENERATED ANSWERS TO FREQUENTLY ASKED QUESTIONS ABOUT RHINOPLASTY
0
Citations
6
Authors
2026
Year
Abstract
Aim: Large language models (LLMs) such as ChatGPT-4, DeepSeek, and Gemini are increasingly explored as tools for patient education and clinical decision support. However, concerns remain regarding their factual accuracy, completeness, and readability, especially when addressing frequently asked patient questions in postoperative care. This study aimed to directly compare three leading AI models—ChatGPT-4, DeepSeek, and Gemini—in terms of their accuracy, clarity, relevance, and completeness when answering common postoperative rhinoplasty FAQs. A secondary objective was to assess the readability of these AI-generated responses for a general patient audience.

Method: We selected 14 frequently asked questions based on authoritative AAO-HNS guidelines. Responses from each AI model were independently evaluated by 15 board-certified otorhinolaryngologists using a 5-point Likert scale across four domains: accuracy, clarity, relevance, and completeness. Readability was measured using the Flesch Reading Ease Score and Flesch–Kincaid Grade Level. Data were analyzed using appropriate statistical tests to identify significant differences among the models.

Results: Expert evaluations showed significant performance differences among the models. DeepSeek underperformed in both accuracy (p=0.00003) and completeness (p=0.0042) compared to ChatGPT-4 and Gemini. No statistically significant differences were observed for clarity (p=0.52) or relevance (p=0.42). Although readability scores did not significantly differ across models, all responses were deemed too complex for the average patient to fully understand.

Conclusion: While ChatGPT-4 and Gemini demonstrated higher accuracy and completeness than DeepSeek, none of the evaluated AI models produced content that met essential patient readability standards. These findings underscore the need for improved content accessibility and ongoing human oversight before LLMs can be reliably integrated into clinical patient education.
This study establishes an important benchmark and highlights the urgency for future AI development to prioritize both factual integrity and true patient comprehension.
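The two readability metrics named in the abstract are closed-form formulas over sentence, word, and syllable counts. The sketch below shows how they could be computed; the regex-based syllable counter is a naive heuristic of our own (counting vowel groups), not the dictionary-based method the study's tooling may have used, so scores will only approximate published values.

```python
import re

def count_syllables(word: str) -> int:
    """Approximate syllables as runs of vowels; every word counts at least one."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_scores(text: str) -> tuple[float, float]:
    """Return (Flesch Reading Ease, Flesch-Kincaid Grade Level) for English text."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / len(sentences)   # average words per sentence
    spw = syllables / len(words)        # average syllables per word
    fre = 206.835 - 1.015 * wps - 84.6 * spw
    fkgl = 0.39 * wps + 11.8 * spw - 15.59
    return fre, fkgl
```

Higher Reading Ease means easier text (60–70 is often treated as "plain English"), while the Grade Level maps to US school grades; the study's finding that all model outputs were "too complex" corresponds to low FRE and high FKGL values.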
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,245 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,102 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,468 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,429 citations