This is an overview page with metadata for this scientific article. The full article is available from the publisher.
When Compliance Is Not Safety: The Regulatory Blind Spot in AI Companion Chatbots
Citations: 0
Authors: 4
Year: 2026
Abstract
Current regulations governing artificial intelligence (AI) companion chatbots primarily emphasize disclosure obligations, internal safety protocols, and documentation requirements. AI companion chatbots are systems designed to simulate ongoing social interaction through human-like responses, emotional continuity, and memory across repeated exchanges. However, procedural compliance does not necessarily ensure clinical safety, particularly when adolescents use these systems during emotional crises. Emerging evidence from independent audit studies of consumer chatbots and benchmarking evaluations of large language models (LLMs) for mental health tasks suggests that crisis-response performance can be unreliable: variable recognition of suicide risk, inconsistent escalation, limited referral quality, and instability across models and system updates. This editorial argues that safety should be defined by real-world behavioral performance rather than procedural safeguards alone, and it calls for independent crisis testing, transparent reporting, and longitudinal re-evaluation to better protect vulnerable users.
Related Works
Amazon's Mechanical Turk
2011 · 10,029 citations
The Transtheoretical Model of Health Behavior Change
1997 · 7,683 citations
COVID-19 and mental health: A review of the existing literature
2020 · 3,707 citations
Cognitive Therapy and the Emotional Disorders
1977 · 2,931 citations
Mental health problems and social media exposure during COVID-19 outbreak
2020 · 2,789 citations