OpenAlex · Updated hourly · Last updated: 16.03.2026, 06:54

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Can Chatbots Serve as Ethical Agents in Healthcare?

2026 · 0 citations · Knowledge Commons (Lakehead University) · Open Access
Open full text at the publisher

Citations: 0 · Authors: 3 · Year: 2026

Abstract

Generative artificial intelligence has moved clinical conversation from a peripheral interface problem to a core governance problem. Health systems now use or pilot large language model chatbots for patient messaging, triage, documentation, coaching, and informational support, while regulators and standard-setting bodies increasingly require transparency, lifecycle risk management, and human oversight for high-risk medical AI systems [1]-[5]. Yet the most difficult question is not whether chatbots can summarize information, but whether they can legitimately participate in or even replace human ethical judgment when patients cannot decide for themselves. Recent commentary asks whether a chatbot trained on a patient's records, communications, or digital traces could act as a medical surrogate [6]. This paper argues that the answer is no, at least not in the sense recognized by medical ethics, health law, and institutional accountability. I distinguish four layers of delegation in clinical conversation: information support, preference elicitation, moral reasoning, and surrogate authority. The first two can be conditionally authorized under strict governance. The latter two should not be delegated to chatbots as a matter of principle. The reason is not merely technical unreliability. Rather, surrogate medical decision-making is a fiduciary, relational, and institutionally accountable practice that requires answerability, interpretive humility, legal standing, and responsibility-bearing agency. Chatbots may help prepare, clarify, document, and structure human deliberation, but they should not be treated as ethical agents or medical substitute decision-makers. The paper concludes with an IEEE-style governance framework for designers, hospitals, and regulators that separates acceptable supportive uses from prohibited decisional delegation in intensive care, oncology, palliative care, and mental health.


Topics

Artificial Intelligence in Healthcare and Education · AI in Service Interactions · Digital Mental Health Interventions