This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Can Artificial Intelligence Educate Patients? Comparative Analysis of ChatGPT and DeepSeek Models in Meniscus Injuries
0 citations · 2 authors · 2025
Abstract
<b>Background:</b> Meniscus injuries are among the most common traumatic and degenerative conditions of the knee joint. Patient education plays a critical role in treatment adherence, surgical preparation, and postoperative rehabilitation. The use of artificial intelligence (AI)-based large language models (LLMs) is rapidly increasing in healthcare. This study aimed to compare the quality and readability of responses to frequently asked patient questions about meniscus injuries generated by ChatGPT-5 and DeepSeek R1. <b>Materials and Methods:</b> Twelve frequently asked questions regarding the etiology, symptoms, diagnosis, imaging, and treatment of meniscus injuries were presented to both AI models. The responses were independently evaluated by two experienced orthopedic surgeons using a response rating system and a 4-point Likert scale to assess accuracy, clarity, comprehensiveness, and consistency. Readability was analyzed using the Flesch-Kincaid Reading Ease Score (FRES) and the Flesch-Kincaid Grade Level (FKGL). Interrater reliability was determined using intraclass correlation coefficients (ICCs). <b>Results:</b> DeepSeek performed significantly better than ChatGPT in the response rating system (<i>p</i> = 0.017) and achieved higher scores for comprehensiveness on the 4-point Likert scale (<i>p</i> = 0.005). No significant differences were observed between the two models in terms of accuracy, clarity, or consistency (<i>p</i> > 0.05). Both models produced comparable readability scores (<i>p</i> > 0.05), corresponding to a high-school reading level. <b>Conclusions:</b> Both ChatGPT and DeepSeek show promise as supportive tools for educating patients about meniscus injuries. While DeepSeek demonstrated higher overall content quality, both models generated understandable information suitable for general patient education. 
Further refinement is needed to improve clarity and accessibility, ensuring that AI-based materials are appropriate for diverse patient populations.
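The abstract reports readability via the Flesch-Kincaid Reading Ease Score (FRES) and the Flesch-Kincaid Grade Level (FKGL). For readers unfamiliar with these metrics, the sketch below shows the standard published formulas in Python; the naive vowel-group syllable counter is an illustrative assumption, not the tool the study authors used.

```python
import re


def count_syllables(word: str) -> int:
    """Naive heuristic: count vowel groups; every word has at least one syllable."""
    groups = re.findall(r"[aeiouy]+", word.lower())
    return max(1, len(groups))


def flesch_reading_ease(words: int, sentences: int, syllables: int) -> float:
    """FRES: higher scores mean easier text (60-70 ~ high-school level)."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)


def fk_grade_level(words: int, sentences: int, syllables: int) -> float:
    """FKGL: approximates the US school grade needed to understand the text."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59


def score_text(text: str) -> tuple[float, float]:
    """Rough end-to-end scoring with simple sentence/word tokenization."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    n = len(words)
    return (flesch_reading_ease(n, sentences, syllables),
            fk_grade_level(n, sentences, syllables))
```

For example, a passage with 100 words, 5 sentences, and 150 syllables scores FRES ≈ 59.6 and FKGL ≈ 9.9, consistent with the high-school reading level the study reports for both models' responses.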
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,400 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,261 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,695 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,506 citations