OpenAlex · Updated hourly · Last updated: 07.04.2026, 05:22

This is an overview page with metadata for this scientific work. The full article is available from the publisher.

Can Artificial Intelligence Educate Patients? Comparative Analysis of ChatGPT and DeepSeek Models in Meniscus Injuries

2025 · 0 citations · Healthcare · Open Access
Open full text at the publisher

Citations: 0 · Authors: 2 · Year: 2025

Abstract

<b>Background:</b> Meniscus injuries are among the most common traumatic and degenerative conditions of the knee joint. Patient education plays a critical role in treatment adherence, surgical preparation, and postoperative rehabilitation. The use of artificial intelligence (AI)-based large language models (LLMs) is rapidly increasing in healthcare. This study aimed to compare the quality and readability of responses to frequently asked patient questions about meniscus injuries generated by ChatGPT-5 and DeepSeek R1. <b>Materials and Methods:</b> Twelve frequently asked questions regarding the etiology, symptoms, diagnosis, imaging, and treatment of meniscus injuries were presented to both AI models. The responses were independently evaluated by two experienced orthopedic surgeons using a response rating system and a 4-point Likert scale to assess accuracy, clarity, comprehensiveness, and consistency. Readability was analyzed using the Flesch-Kincaid Reading Ease Score (FRES) and the Flesch-Kincaid Grade Level (FKGL). Interrater reliability was determined using intraclass correlation coefficients (ICCs). <b>Results:</b> DeepSeek performed significantly better than ChatGPT in the response rating system (<i>p</i> = 0.017) and achieved higher scores for comprehensiveness on the 4-point Likert scale (<i>p</i> = 0.005). No significant differences were observed between the two models in terms of accuracy, clarity, or consistency (<i>p</i> > 0.05). Both models produced comparable readability scores (<i>p</i> > 0.05), corresponding to a high-school reading level. <b>Conclusions:</b> Both ChatGPT and DeepSeek show promise as supportive tools for educating patients about meniscus injuries. While DeepSeek demonstrated higher overall content quality, both models generated understandable information suitable for general patient education. 
Further refinement is needed to improve clarity and accessibility, ensuring that AI-based materials are appropriate for diverse patient populations.
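The readability analysis in the abstract relies on two standard formulas, the Flesch Reading Ease Score and the Flesch-Kincaid Grade Level, both computed from word, sentence, and syllable counts. As a minimal sketch (the study's own counting tools are not specified, and the example counts below are hypothetical), the two metrics can be computed as:

```python
# Flesch-Kincaid readability metrics, as used in the study's analysis.
# The word/sentence/syllable counts must be supplied by a tokenizer;
# the values in the example below are hypothetical.

def fres(words: int, sentences: int, syllables: int) -> float:
    """Flesch Reading Ease Score: higher = easier; ~60-70 is high-school level."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def fkgl(words: int, sentences: int, syllables: int) -> float:
    """Flesch-Kincaid Grade Level: approximate U.S. school grade needed to read the text."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

# Example: a hypothetical 100-word AI response in 5 sentences with 130 syllables.
print(fres(100, 5, 130))  # ~76.6, "fairly easy"
print(fkgl(100, 5, 130))  # ~7.6, roughly 8th grade
```

A text scoring at a "high-school reading level", as both models did here, corresponds to an FRES in the 50-70 range and an FKGL around grades 9-12.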


Topics

Artificial Intelligence in Healthcare and Education · Clinical Reasoning and Diagnostic Skills · Radiology practices and education