OpenAlex · Updated hourly · Last updated: 08.04.2026, 11:37

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

When Compliance Is Not Safety: The Regulatory Blind Spot in AI Companion Chatbots

2026 · 0 citations · Cureus · Open Access
Open full text at the publisher

0 citations · 4 authors · 2026 (year)

Abstract

Current regulations governing artificial intelligence (AI) companion chatbots primarily emphasize disclosure obligations, internal safety protocols, and documentation requirements. AI companion chatbots are systems designed to simulate ongoing social interaction through human-like responses, emotional continuity, and memory across repeated exchanges. However, procedural compliance does not necessarily ensure clinical safety, particularly when adolescents use these systems during emotional crises. Emerging evidence from independent audit studies of consumer chatbots and benchmarking evaluations of large language models (LLMs) for mental health tasks suggests that crisis-response performance can be inconsistent, including variable recognition of suicide risk, inconsistent escalation, limited referral quality, and instability across models and system updates. This editorial argues that safety should be defined by real-world behavioral performance rather than procedural safeguards alone and calls for independent crisis testing, transparent reporting, and longitudinal re-evaluation to better protect vulnerable users.



Topics

Digital Mental Health Interventions · Mental Health via Writing · Artificial Intelligence in Healthcare and Education