This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Evaluating ChatGPT as a Patient Education Tool: Insights on Quality, Readability, and Reliability for Trigger Finger FAQs
0
Citations
7
Authors
2026
Year
Abstract
Aim: Trigger finger (TF), or stenosing tenosynovitis, causes pain, snapping, and finger locking. It greatly affects patients' quality of life, prompting frequent inquiries to healthcare providers. ChatGPT, an AI language model, has gained popularity as a tool for patient education. This study evaluated the quality, readability, and usability of ChatGPT's responses to common TF FAQs.

Methods: A set of FAQs regarding TF was developed based on reputable sources such as WebMD, Mayo Clinic, and NHS Trusts. Two experienced surgeons reviewed and refined the questions before submitting them to ChatGPT-4 for response generation. The quality of the responses was evaluated using the Global Quality Score (GQS) and the DISCERN scale, while readability was assessed using the Flesch Reading Ease Score (FRES) and the Flesch–Kincaid Grade Level (FKGL). Inter-rater reliability was determined using Cohen's Kappa.

Results: Twenty responses were evaluated, yielding a mean GQS of 3.8 (SD = 0.71), indicating moderate to high quality. DISCERN scores averaged 37.08 ± 7.64, reflecting fair to good quality. Readability analysis showed a mean FRES of 43.40, suggesting the content is challenging for readers without a college education. The mean FKGL was 12.17, indicating an advanced reading level. Prognosis-related questions had better readability scores than treatment-related responses, which were more complex.

Conclusion: ChatGPT shows promise for patient education, providing moderate- to high-quality responses about TF. However, the advanced reading level may limit wider accessibility. Improving readability and tailoring responses to diverse needs are vital for effectiveness. Human oversight remains essential to ensure the accuracy and usability of AI-generated content.
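The two readability metrics used in the abstract are standard formulas computed from average sentence length and average syllables per word. A minimal Python sketch follows, using a naive vowel-group heuristic for syllable counting (an assumption for illustration; published readability tools use more careful, dictionary-backed syllable counts, so exact scores will differ):

```python
import re

def count_syllables(word: str) -> int:
    # Naive heuristic: count contiguous vowel groups; at least one per word.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text: str) -> tuple[float, float]:
    """Return (FRES, FKGL) for a plain-text passage."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    asl = len(words) / sentences        # average sentence length
    asw = syllables / len(words)        # average syllables per word
    # Standard Flesch Reading Ease and Flesch-Kincaid Grade Level formulas.
    fres = 206.835 - 1.015 * asl - 84.6 * asw
    fkgl = 0.39 * asl + 11.8 * asw - 15.59
    return fres, fkgl
```

A FRES in the low 40s (as reported for the ChatGPT responses) corresponds to "difficult" college-level text, while short, simple sentences score well above 90.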
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,214 cit.
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,071 cit.
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,429 cit.
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 cit.
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,418 cit.