OpenAlex · Updated hourly · Last updated: 18.03.2026, 13:55

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Companions Made of Code: Why Emotional AI Must Not Be Introduced into Mental Healthcare Without Regulation

2026 · 0 citations · Open Access

Citations: 0
Authors: 9
Year: 2026

Abstract

Artificial intelligence systems that offer emotional companionship have moved rapidly into everyday life. Marketed as friends, partners and listeners, they now meet users in moments of loneliness, stress and psychological distress, often when no human support is available. Their spread raises a central socio-technical question: what happens when emotional suffering is directed towards an artefact that cannot act, cannot assume responsibility and cannot share moral burden? This article argues that emotional-support artificial intelligence must not be introduced into mental-health contexts without robust regulation, explicit clinical governance and prior guarantees of equitable access to human care. The paper combines a normative analysis, grounded in relational autonomy, justice and care ethics, with an exploratory examination of eight widely available chatbots tested with clinically relevant distress prompts. The analysis shows that systems frequently simulate empathy while failing to recognise suicide-risk cues or guide users towards human help, reinforcing the risk that conversation may replace care. Placed alongside documented real-world cases in which chatbot interactions preceded self-harm and suicide, these findings support a broader claim about AI and society. Emotional AI is emerging as part of a social infrastructure of mental health at a moment when services remain unequal and under-resourced. In such conditions, its deployment risks entrenching structural abandonment behind a linguistic façade of support. Emotional AI may one day have a place as a carefully supervised adjunct. For now, ethical legitimacy requires that societies first repair mental-health provision, establish accountability for digital systems and ensure that artificial companions remain genuinely optional rather than structurally inevitable.
