This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Can AI convey empathy? A comparative analysis with physicians
Citations: 0
Authors: 6
Year: 2025
Abstract
<bold>BACKGROUND:</bold> Effective and empathetic communication between physicians and patients is crucial in managing chronic diseases such as interstitial lung diseases (ILDs). Large Language Models (LLMs) have shown potential in enhancing medical communication by generating clear and compassionate responses. <bold>OBJECTIVES:</bold> The study aims to determine whether LLM-generated responses can be reliably distinguished from physician-written ones and to evaluate their perceived clarity and empathy. The literature has already demonstrated the accuracy of LLM responses in the medical field, but little effort has been made to assess the empathy they convey. <bold>METHODS:</bold> A single-blinded survey was conducted using 10 real emails from the ILD outpatient clinic's inbox. Replies were generated by ChatGPT, while the original physicians' responses served as controls. Participants rated each response on a visual analogue scale (VAS) for clarity and empathy and attempted to identify whether the response was AI- or physician-generated. <bold>RESULTS:</bold> A total of 80 subjects completed the survey. LLMs received significantly higher scores in 5 out of 10 emails for clarity and in 8 out of 10 emails for empathy (p<0.05). The overall accuracy in recognizing LLM-generated responses was 30.8%. A total of 26 participants (36.5%) reported unwillingness to accept AI-assisted responses, and 24 (30%) remained undecided. <bold>CONCLUSIONS:</bold> LLMs demonstrated the capability to generate clear and empathetic responses that were mostly indistinguishable from those written by physicians. These findings suggest that LLMs could play a role in supporting routine patient communication, optimizing clinician time while maintaining high-quality interactions. However, the study highlighted concerns about patient acceptance.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,287 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,140 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,534 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,450 citations