This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Which AI doctor would you like to see? Emulating healthcare provider–patient communication models with GPT-4: proof-of-concept and ethical exploration
Citations: 9
Authors: 4
Year: 2025
Abstract
Large language models (LLMs) have demonstrated potential in enhancing various aspects of healthcare, including health provider-patient communication. However, some have raised the concern that such communication may adopt implicit communication norms that deviate from what patients want or need from talking with their healthcare provider. This paper explores the possibility of using LLMs to enable patients to choose their preferred communication style when discussing their medical cases. By providing a proof-of-concept demonstration using ChatGPT-4, we suggest LLMs can emulate different healthcare provider-patient communication approaches (building on Emanuel and Emanuel's four models: paternalistic, informative, interpretive and deliberative). This allows patients to engage in a communication style that aligns with their individual needs and preferences. We also highlight potential risks associated with using LLMs in healthcare communication, such as reinforcing patients' biases and the persuasive capabilities of LLMs that may lead to unintended manipulation.
Related Works
Why Don't Physicians Follow Clinical Practice Guidelines?
1999 · 6,648 citations
Decision aids for people facing health treatment or screening decisions
2017 · 6,588 citations
Shared Decision Making: A Model for Clinical Practice
2012 · 4,122 citations
Shared decision-making in the medical encounter: What does it mean? (or it takes at least two to tango)
1997 · 4,087 citations
Effective physician-patient communication and health outcomes: a review.
1995 · 4,076 citations