This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Evaluating the Presence of Empathic Communication in ChatGPT-Produced Clinical Notes Using Established Communication Frameworks
Citations: 0
Authors: 4
Year: 2026
Abstract
BACKGROUND: Empathy is a core component of effective physician-patient communication and is associated with improved clinical relationships and patient experience. As generative artificial intelligence (AI) models such as ChatGPT (OpenAI, San Francisco, California, United States) are increasingly explored for clinical documentation support, it is important to understand whether these systems can produce language that reflects empathic communication.

OBJECTIVE: This study evaluated empathic communication in ChatGPT-generated clinical notes through two distinct approaches: (i) quantitative measurement of linguistic markers using established communication frameworks, and (ii) qualitative characterization of empathic styles and patterns.

METHODS: A cross-sectional simulation study was conducted using ChatGPT (large language model, web-based interface, December 2025 version 5.1). Twenty standardized pediatric cases were created across psychiatry and gastroenterology contexts. For each case, two Subjective, Objective, Assessment, and Plan (SOAP) format clinical notes were generated under different prompting conditions: a neutral clinical tone and an explicitly empathetic clinical tone. Notes were evaluated using the Consultation and Relational Empathy (CARE) Measure and the Empathic Communication Coding System (ECCS).

RESULTS: Empathetic-tone notes received significantly higher CARE scores than neutral-tone notes (P < 0.001). Empathetic-tone notes also contained a greater frequency of empathic statements as measured by the ECCS. Empathic language primarily reflected generic supportive phrasing, cognitive acknowledgment of patient concerns, and action-oriented reassurance, while context-specific emotional nuance remained limited across both prompting conditions.

CONCLUSIONS: ChatGPT can generate clinical documentation containing measurable expressions of empathy when explicitly prompted; however, this empathy remains largely formulaic and dependent on prompt design. These findings highlight both the potential utility and limitations of generative AI tools in producing patient-centered clinical documentation.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,551 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,443 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,942 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,792 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations