This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Publicly Reported Adverse Outcomes Following Use of Generative Artificial Intelligence: A Rapid Scoping Review of Mass Media Articles (Preprint)
Citations: 0
Authors: 3
Year: 2026
Abstract
<sec> <title>BACKGROUND</title> Generative artificial intelligence (AI) chatbots have rapidly entered public use, including in contexts involving emotional support and mental health–related interactions. Although these systems are increasingly accessible, concerns have emerged regarding potential adverse psychiatric outcomes, including psychosis, suicidal ideation, self-harm, and suicide. To date, no structured synthesis has examined how such events are represented in mass media reporting during the post–ChatGPT era. </sec> <sec> <title>OBJECTIVE</title> This rapid scoping review aimed to systematically map and characterize news reports describing adverse psychiatric outcomes temporally associated with generative AI chatbot interactions, with particular attention to outcome severity, user vulnerability, causal attribution, and narrative framing. </sec> <sec> <title>METHODS</title> A rapid scoping review methodology was applied to publicly accessible news articles identified primarily through Google News searches. Articles published from November 2022 onward were screened for eligibility if they described a specific case in which psychiatric deterioration or crisis was temporally linked to generative AI use. Data were extracted using a structured coding template capturing article characteristics, demographic information, AI platform features, interaction intensity, outcome type and severity, type of evidence reported, and causal attribution language. Descriptive statistics and cross-tabulations were performed. </sec> <sec> <title>RESULTS</title> Seventy-one news articles representing 36 unique cases were included. Suicide death was the most frequently reported outcome (35/61 cases with complete severity coding, 57.4%), followed by psychiatric hospitalization (12/61, 19.7%). Fatal outcomes were disproportionately represented among minors (19/21, 90.5%) compared with adults (17/35, 48.6%). 
ChatGPT was the most frequently cited platform (51/71, 71.8%), followed by Character AI (10/71, 14.1%). Causal attribution most commonly referenced AI system behavior (45/61, 73.8%), and the term “alleged” was the predominant causal descriptor (33/61, 54.1%). Evidence sources were primarily chat logs or screenshots (34/61, 55.7%), while police or medical documentation was rare (1/61, 1.6%). Regulatory calls were present in 51/60 (85.0%) of articles with non-missing data. </sec> <sec> <title>CONCLUSIONS</title> Mass media reporting of generative AI–related psychiatric harms is concentrated around severe outcomes, particularly suicide deaths among youth, and is frequently framed within regulatory and corporate accountability narratives. While causality cannot be established from media reports, consistent patterns of high-intensity interactions, user vulnerability, and limited safeguard reporting highlight the need for structured safety surveillance, transparent AI risk auditing, and clearer governance frameworks. As generative AI becomes increasingly integrated into everyday psychosocial contexts, proactive monitoring of psychiatric adverse events is essential. </sec>
Related works
Amazon's Mechanical Turk
2011 · 10,024 citations
The Transtheoretical Model of Health Behavior Change
1997 · 7,665 citations
COVID-19 and mental health: A review of the existing literature
2020 · 3,703 citations
Cognitive Therapy and the Emotional Disorders
1977 · 2,931 citations
Mental health problems and social media exposure during COVID-19 outbreak
2020 · 2,786 citations