This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Can Chatbots Serve as Ethical Agents in Healthcare?
Citations: 0
Authors: 3
Year: 2026
Abstract
Generative artificial intelligence has moved clinical conversation from a peripheral interface problem to a core governance problem. Health systems now use or pilot large language model chatbots for patient messaging, triage, documentation, coaching, and informational support, while regulators and standard-setting bodies increasingly require transparency, lifecycle risk management, and human oversight for high-risk medical AI systems [1]-[5]. Yet the most difficult question is not whether chatbots can summarize information, but whether they can legitimately participate in or even replace human ethical judgment when patients cannot decide for themselves. Recent commentary asks whether a chatbot trained on a patient's records, communications, or digital traces could act as a medical surrogate [6]. This paper argues that the answer is no, at least not in the sense recognized by medical ethics, health law, and institutional accountability. I distinguish four layers of delegation in clinical conversation: information support, preference elicitation, moral reasoning, and surrogate authority. The first two can be conditionally authorized under strict governance. The latter two should not be delegated to chatbots as a matter of principle. The reason is not merely technical unreliability. Rather, surrogate medical decision-making is a fiduciary, relational, and institutionally accountable practice that requires answerability, interpretive humility, legal standing, and responsibility-bearing agency. Chatbots may help prepare, clarify, document, and structure human deliberation, but they should not be treated as ethical agents or medical substitute decision-makers. The paper concludes with an IEEE-style governance framework for designers, hospitals, and regulators that separates acceptable supportive uses from prohibited decisional delegation in intensive care, oncology, palliative care, and mental health.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,245 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,100 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,466 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,429 citations