This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
The bot will answer you now: Using AI to assist patient-physician communication and implications for physician inbox workload
Citations: 3
Authors: 5
Year: 2023
Abstract
<h3>Context</h3> This study explores the potential application of artificial intelligence (AI) to facilitate communication in electronic health record (EHR) systems and reduce the burden and risk of clinician burnout. We leveraged real EHR patient messages previously extracted for a study of physician burnout, generated responses using ChatGPT, and then qualitatively compared them to actual physician responses. <h3>Objectives</h3> To assess the potential of AI to reduce clinician burnout caused by electronic messaging by generating responses to patient messages using ChatGPT. The study also evaluates the AI-generated responses on their relational connection, informational content, recommendations for next steps, and the extent of editing required before they can be used. <h3>Study Design and Analysis</h3> Qualitative analysis of AI-generated responses to patient messages compared to actual physician responses. <h3>Dataset</h3> EHR messages <h3>Population Studied</h3> EHR patient messages <h3>Intervention</h3> Previously extracted real EHR patient messages were used as prompts to generate responses with ChatGPT. Qualitative comparisons were made between the generated responses and actual physician responses for different categories of patient messages, evaluating their relational connection, informational content, follow-up recommendations, and the amount of editing needed. <h3>Outcome Measures</h3> Qualitative assessments of ChatGPT-generated responses to patient messages compared to actual physician responses. <h3>Results</h3> The study found that AI-generated responses lacked relational connection, appearing mechanical and impersonal, while physicians’ responses varied widely, ranging from personal and empathic to instrumental and prescriptive. The informational content of AI-generated responses was also general, whereas physicians’ responses were more specific.
Additionally, AI-generated responses were on average three times longer than physicians’ responses and required substantial editing. AI recommendations were generally generic, while physicians provided tailored recommendations based on the patient’s specific needs. <h3>Conclusions</h3> While some users have begun using generative AI language models in healthcare communication, this study demonstrates significant challenges to making them useful to clinicians; more effort is needed to harness the potential of AI to support human critical thinking, judgment, and creativity in healthcare.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,239 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,095 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,463 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,428 citations