This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Effectiveness of Generative Artificial Intelligence-Driven Responses to Patient Concerns in Long-Term Opioid Therapy: Cross-Model Assessment
Citations: 4
Authors: 8
Year: 2025
Abstract
<b>Background:</b> While long-term opioid therapy is a widely used strategy for managing chronic pain, many patients have understandable questions and concerns regarding its safety, efficacy, and potential for dependency and addiction. Providing clear, accurate, and reliable information is essential for fostering patient understanding and acceptance. Generative artificial intelligence (AI) applications offer promising avenues for delivering patient education in healthcare. This study evaluates the reliability, accuracy, and comprehensibility of ChatGPT's responses to common patient inquiries about long-term opioid therapy. <b>Methods:</b> An expert panel selected thirteen frequently asked questions regarding long-term opioid therapy, based on the authors' clinical experience in managing chronic pain patients and a targeted review of patient education materials. Questions were prioritized according to their prevalence in patient consultations, their relevance to treatment decision-making, and the complexity of information typically required to address them comprehensively. We assessed comprehensibility using the multimodal generative AI Copilot (Microsoft 365 Copilot Chat). The questions spanned three domains (pre-therapy, during therapy, and post-therapy), and each was submitted to GPT-4.0 with the prompt "<i>If you were a physician, how would you answer a patient asking…</i>". Ten pain physicians and two non-healthcare professionals independently assessed the responses using Likert scales to rate reliability (1-6 points), accuracy (1-3 points), and comprehensibility (1-3 points). <b>Results:</b> Overall, ChatGPT's responses demonstrated high reliability (5.2 ± 0.6) and good comprehensibility (2.8 ± 0.2), with most answers meeting or exceeding predefined thresholds. Accuracy was moderate (2.7 ± 0.3), with lower performance on more technical topics such as opioid tolerance and dependency management.
<b>Conclusions:</b> While AI applications exhibit significant potential as a supplementary tool for patient education on long-term opioid therapy, limitations in addressing highly technical or context-specific queries underscore the need for ongoing refinement and domain-specific training. Integrating AI systems into clinical practice should involve collaboration between healthcare professionals and AI developers to ensure safe, personalized, and up-to-date patient education in chronic pain management.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,250 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,109 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,482 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,434 citations
Authors
Institutions
- Fondazione Istituto G. Giglio di Cefalù (IT)
- Harvard University (US)
- Brigham and Women's Hospital (US)
- Azienda Ospedaliera Universitaria Policlinico "Paolo Giaccone" di Palermo (IT)
- University of Salerno (IT)
- Policlinico San Matteo Fondazione (IT)
- University of Pavia (IT)
- Istituti di Ricovero e Cura a Carattere Scientifico (IT)
- La Maddalena (IT)
- University of Catania (IT)