OpenAlex · Updated hourly · Last updated: May 1, 2026, 23:32

This is an overview page with metadata for this scientific work. The full article is available from the publisher.

Evaluating the Presence of Empathic Communication in ChatGPT-Produced Clinical Notes Using Established Communication Frameworks

2026 · 0 citations · Cureus · Open Access

Citations: 0 · Authors: 4 · Year: 2026

Abstract

BACKGROUND: Empathy is a core component of effective physician-patient communication and is associated with improved clinical relationships and patient experience. As generative artificial intelligence (AI) models such as ChatGPT (OpenAI, San Francisco, California, United States) are increasingly explored for clinical documentation support, it is important to understand whether these systems can produce language that reflects empathic communication. OBJECTIVE: This study evaluated empathic communication in ChatGPT-generated clinical notes through two distinct approaches: (i) quantitative measurement of linguistic markers using established communication frameworks, and (ii) qualitative characterization of empathic styles and patterns. METHODS: A cross-sectional simulation study was conducted using ChatGPT (large language model, web-based interface, December 2025 version 5.1). Twenty standardized pediatric cases were created across psychiatry and gastroenterology contexts. For each case, two Subjective, Objective, Assessment, and Plan (SOAP) format clinical notes were generated under different prompting conditions: a neutral clinical tone and an explicitly empathetic clinical tone. Notes were evaluated using the Consultation and Relational Empathy (CARE) Measure and the Empathic Communication Coding System (ECCS). RESULTS: Notes generated under the empathetic-tone condition scored significantly higher on the CARE Measure than neutral-tone notes (p < 0.001). Empathetic-tone notes also contained a greater frequency of empathic statements as measured by the ECCS. Empathic language primarily reflected generic supportive phrasing, cognitive acknowledgment of patient concerns, and action-oriented reassurance, while context-specific emotional nuance remained limited across both prompting conditions. CONCLUSIONS: ChatGPT can generate clinical documentation containing measurable expressions of empathy when explicitly prompted; however, this empathy remains largely formulaic and dependent on prompt design. These findings highlight both the potential utility and limitations of generative AI tools in producing patient-centered clinical documentation.


Topics

Artificial Intelligence in Healthcare and Education · Empathy and Medical Education · Patient-Provider Communication in Healthcare