This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Comparative Quality Assessment of Artificial Intelligence in Patient Education on Platelet-Rich Plasma (PRP) Therapy
Citations: 0
Authors: 11
Year: 2026
Abstract
<b>Background</b>: Platelet-rich plasma (PRP) therapy is increasingly used for musculoskeletal conditions, yet patients seeking supplementary information online encounter resources of variable quality. Large language models (LLMs) such as ChatGPT and Google Gemini may support patient education, but their performance in answering common patient questions about PRP therapy has not been well characterized. <b>Methods</b>: This study compared the quality of responses generated by ChatGPT-4, ChatGPT-3.5, and Google Gemini to common PRP-related patient questions. Ten frequently asked PRP-related questions were identified through a structured search of online sources, PubMed, Google Trends, and AI-assisted query generation. Each question was submitted to the three LLMs using a standardized prompt designed to elicit clear and empathetic responses. Five orthopedic surgeons, blinded to model identity, assessed each answer using a previously published four-tier rating framework. Secondary metrics included exhaustiveness, clarity, empathy, and response length. <b>Results</b>: All models produced mostly satisfactory answers. ChatGPT-3.5 received the highest proportion of excellent ratings (70%), compared with 40% for ChatGPT-4 and 22% for Gemini, and outperformed both models in overall quality. The most common limitation across models was insufficient detail. ChatGPT-4 and Gemini performed similarly in several categories, although Gemini was rated lower in empathy and comprehensiveness. Overall differences between models were statistically significant. <b>Conclusions</b>: Commonly available LLMs were able to provide mostly satisfactory responses to patient questions about PRP. However, important limitations remained, particularly with respect to detail and individualization. These tools may support initial patient information-seeking, but they should complement rather than replace expert medical counseling.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,436 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,311 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,753 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,523 citations